Fun with maths: exploring implications of mathematical models for malaria eradication.
Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A
2014-12-11
Mathematical analyses and modelling have an important role in informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and the benefits of using a diversity of model approaches. The examples cover the time to eradication; the impact of vaccine efficacy and coverage; drug programs and the effects of infection duration and treatment delays; and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key effects, but simple mathematical approaches can still deliver key results for eradication strategy and define areas for investigation by more complex models.
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes, from thousands or tens of thousands, with which to construct accurate prediction models. We screen highly discriminative genes and gene pairs to create simple prediction models based on single genes or gene pairs, using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply these simple models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, ours are simple, effective and robust; they are also interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well in molecular cancer prediction, and that important gene markers of cancer can be detected, provided the gene selection approach is chosen reasonably.
Investigating decoherence in a simple system
NASA Technical Reports Server (NTRS)
Albrecht, Andreas
1991-01-01
The results of some simple calculations designed to study quantum decoherence are presented. The physics of quantum decoherence is briefly reviewed, and a very simple 'toy' model is analyzed. Exact solutions are found using numerical techniques. The type of decoherence exhibited by the model can be changed by varying a coupling strength. The author explains why the conventional approach to studying decoherence by checking the diagonality of the density matrix is not always adequate. Two other approaches, the decoherence functional and the Schmidt paths approach, are applied to the toy model and contrasted with each other. Possible problems with each are discussed.
NASA Astrophysics Data System (ADS)
Donahue, William; Newhauser, Wayne D.; Ziegler, James F.
2016-09-01
Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
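The abstract does not quote the authors' six-parameter functional form, but the flavor of such simple analytical range-energy models can be illustrated with the classic Bragg-Kleeman rule R = αE^p. A minimal sketch, assuming typical proton-in-water fit constants (illustrative literature values, not the paper's parameters):

```python
import numpy as np

def range_bragg_kleeman(energy_mev, alpha=0.0022, p=1.77):
    """Bragg-Kleeman rule: range R = alpha * E**p.

    alpha and p are material- and particle-specific fit constants; the
    defaults are typical for protons in water (R in cm, E in MeV) and are
    illustrative only.
    """
    return alpha * np.asarray(energy_mev) ** p

def stopping_power_from_range(energy_mev, alpha=0.0022, p=1.77):
    """S(E) = -dE/dx = 1 / (dR/dE) for the Bragg-Kleeman range-energy relation."""
    return 1.0 / (alpha * p * np.asarray(energy_mev) ** (p - 1.0))

print(range_bragg_kleeman(150.0))  # ~15.6 cm for a 150 MeV proton in water
```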
A Latent Variable Approach to the Simple View of Reading
ERIC Educational Resources Information Center
Kershaw, Sarah; Schatschneider, Chris
2012-01-01
The present study utilized a latent variable modeling approach to examine the Simple View of Reading in a sample of students from 3rd, 7th, and 10th grades (N = 215, 188, and 180, respectively). Latent interaction modeling and other latent variable models were employed to investigate (a) the functional form of the relationship between decoding and…
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches.
Modeling shared resources with generalized synchronization within a Petri net bottom-up approach.
Ferrarini, L; Trioni, M
1996-01-01
This paper proposes a simple and effective way to represent shared resources in manufacturing systems within a Petri net model previously developed. Such a model relies on the bottom-up and modular approach to synthesis and analysis. The designer may define elementary tasks and then connect them with one another with three kinds of connections: self-loops, inhibitor arcs and simple synchronizations. A theoretical framework has been established for the analysis of liveness and reversibility of such models. The generalized synchronization, here formalized, represents an extension of the simple synchronization, allowing the merging of suitable subnets among elementary tasks. It is proved that under suitable, but not restrictive, hypotheses the generalized synchronization may be substituted for a simple one, thus being compatible with all the developed theoretical body.
Predicting Fish Densities in Lotic Systems: a Simple Modeling Approach
Fish density models are essential tools for fish ecologists and fisheries managers. However, applying these models can be difficult because of high levels of model complexity and the large number of parameters that must be estimated. We designed a simple fish density model and te...
New approach in the quantum statistical parton distribution
NASA Astrophysics Data System (ADS)
Sohaily, Sozha; Vaziri (Khamedi), Mohammad
2017-12-01
An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal part of the distribution functions is given by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data provides a robust confirmation of the simple statistical model presented.
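The abstract does not reproduce the functional form. In quantum statistical parton models of this general type (e.g., the Bourrely-Soffer-Buccella family), the x-dependence is typically built on a Fermi-Dirac factor; the schematic below is an assumption about that model family, not the authors' exact expression:

```latex
% Fermi-Dirac-shaped quark distribution typical of quantum statistical
% parton models (schematic): X_{0q} plays the role of a thermodynamic
% potential and \bar{x} that of a universal "temperature" in x-space.
x\,q(x) \;\propto\; \frac{x^{b}}{\exp\!\big[(x - X_{0q})/\bar{x}\big] + 1}
```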
Dynamical minimalism: why less is more in psychology.
Nowak, Andrzej
2004-01-01
The principle of parsimony, embraced in all areas of science, states that simple explanations are preferable to complex explanations in theory construction. Parsimony, however, can necessitate a trade-off with depth and richness in understanding. The approach of dynamical minimalism avoids this trade-off. The goal of this approach is to identify the simplest mechanisms and fewest variables capable of producing the phenomenon in question. A dynamical model in which change is produced by simple rules repetitively interacting with each other can exhibit unexpected and complex properties. It is thus possible to explain complex psychological and social phenomena with very simple models if these models are dynamic. In dynamical minimalist theories, then, the principle of parsimony can be followed without sacrificing depth in understanding. Computer simulations have proven especially useful for investigating the emergent properties of simple models.
NASA Astrophysics Data System (ADS)
Dhara, Chirag; Renner, Maik; Kleidon, Axel
2015-04-01
The convective transport of heat and moisture plays a key role in the climate system, but the transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system that describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a method for improving the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as does their sensitivity to surface warming. Our comparison suggests that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.
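The construction described (a Carnot limit combined with a surface energy balance, maximized over the convective flux) is simple enough to demonstrate numerically. A minimal sketch, assuming a toy grey-body balance with a fixed atmospheric temperature rather than the authors' coupled surface-atmosphere budget:

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(rs, j, ta=288.0):
    """Solve the toy surface energy balance Rs - J = sigma*Ts^4 - sigma*Ta^4."""
    return ((rs - j) / SIGMA + ta**4) ** 0.25

def carnot_power(rs, j, ta=288.0):
    """Power extracted by a convective heat engine at the Carnot limit."""
    ts = surface_temperature(rs, j, ta)
    return j * (ts - ta) / ts

rs = 160.0                       # absorbed solar radiation at the surface, W m^-2
j_grid = np.linspace(0.0, rs, 1000)
power = carnot_power(rs, j_grid)
j_opt = j_grid[np.argmax(power)]
print(f"maximum-power convective flux: {j_opt:.1f} W m^-2")
```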
Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.
2000-01-01
PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetics and dose responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
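A minimal sketch of the kind of rate-equation system described, in which a DSB binds a repair enzyme to form a complex before resolving into repaired DNA; all rate constants below are illustrative placeholders, not fitted values from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy enzymatic DSB-rejoining kinetics: break B + enzyme E <-> complex C -> repaired R.
k_on, k_off, k_rep = 0.5, 0.05, 0.1   # per hour (and per unit concentration); illustrative

def rhs(t, y):
    b, e, c, r = y
    form = k_on * b * e - k_off * c   # net complex formation
    return [-form, -form + k_rep * c, form - k_rep * c, k_rep * c]

sol = solve_ivp(rhs, (0.0, 24.0), [30.0, 5.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 7)
print(np.round(sol.sol(t)[0], 2))  # unrejoined DSBs: apparent fast + slow components
```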
Models for forecasting hospital bed requirements in the acute sector.
Farmer, R D; Emami, J
1990-01-01
STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series of mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used in the evaluation of different methods of forecasting future values of mean duration of stay and their subsequent use in the formulation of hospital bed requirements. RESULTS--It is suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. The time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements.
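The contrast between simple trend fitting and a Box-Jenkins model is easy to reproduce. A hedged sketch with statsmodels, using an invented stay-duration series (the paper's 1969-1982 data are not reproduced in the abstract):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

# Illustrative annual mean-duration-of-stay series (days), standing in for 1969-1982.
y = np.array([9.8, 9.5, 9.1, 8.9, 8.4, 8.2, 7.9, 7.8, 7.4, 7.3, 7.0, 6.9, 6.7, 6.6])

# Simple trend fitting: OLS on a linear time trend (restrictive by construction).
t = sm.add_constant(np.arange(len(y)))
trend_fit = sm.OLS(y, t).fit()

# Box-Jenkins alternative: an ARIMA model lets the data choose its own dynamics.
arima_fit = ARIMA(y, order=(1, 1, 0)).fit()

print(trend_fit.params)             # intercept and slope of the trend line
print(arima_fit.forecast(steps=3))  # mean-stay forecasts for the next 3 years
```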
Wiederholt, Ruscena; Bagstad, Kenneth J.; McCracken, Gary F.; Diffendorfer, Jay E.; Loomis, John B.; Semmens, Darius J.; Russell, Amy L.; Sansone, Chris; LaSharr, Kelsie; Cryan, Paul; Reynoso, Claudia; Medellin, Rodrigo A.; Lopez-Hoffman, Laura
2017-01-01
Given rapid changes in agricultural practice, it is critical to understand how alterations in ecological, technological, and economic conditions over time and space impact ecosystem services in agroecosystems. Here, we present a benefit transfer approach to quantify cotton pest-control services provided by a generalist predator, the Mexican free-tailed bat (Tadarida brasiliensis mexicana), in the southwestern United States. We show that pest-control estimates derived using (1) a compound spatial–temporal model – which incorporates spatial and temporal variability in crop pest-control service values – are likely to exhibit less error than those derived using (2) a simple-spatial model (i.e., a model that extrapolates values derived for one area directly, without adjustment, to other areas) or (3) a simple-temporal model (i.e., a model that extrapolates data from a few points in time over longer time periods). Using our compound spatial–temporal approach, the annualized pest-control value was $12.2 million, in contrast to an estimate of $70.1 million (5.7 times greater), obtained from the simple-spatial approach. Using estimates from one year (simple-temporal approach) revealed large value differences (0.4 times smaller to 2 times greater). Finally, we present a detailed protocol for valuing pest-control services, which can be used to develop robust pest-control transfer functions for generalist predators in agroecosystems.
Peer pressure and Generalised Lotka Volterra models
NASA Astrophysics Data System (ADS)
Richmond, Peter; Sabatelli, Lorenzo
2004-12-01
We develop a novel approach to peer pressure and Generalised Lotka-Volterra (GLV) models that builds on the development of a simple Langevin equation that characterises stochastic processes. We generalise the approach to stochastic equations that model interacting agents. The agent models recently advocated by Marsili and Solomon are motivated. Using a simple change of variable, we show that the peer pressure model (similar to the one introduced by Marsili) and the wealth dynamics model of Solomon may be (almost) mapped one into the other. This may help shed light on the (apparently) different wealth dynamics described by GLV and the Marsili-like peer pressure models.
A Multivariate Model for the Study of Parental Acceptance-Rejection and Child Abuse.
ERIC Educational Resources Information Center
Rohner, Ronald P.; Rohner, Evelyn C.
This paper proposes a multivariate strategy for the study of parental acceptance-rejection and child abuse and describes a research study on parental rejection and child abuse which illustrates the advantages of using a multivariate, (rather than a simple-model) approach. The multivariate model is a combination of three simple models used to study…
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
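The low-rate scheme is described only as a simple variation on standard DCT coding. As a generic illustration of block-DCT coding (not the authors' specific variation), a toy coder that transforms an 8x8 block and keeps only the largest-magnitude coefficients, a crude stand-in for quantization:

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, keep=8):
    """Toy DCT coder: forward 2D DCT, zero all but the `keep` largest-magnitude
    coefficients, then invert. Illustrative only."""
    c = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(c).ravel())[-keep]
    c[np.abs(c) < thresh] = 0.0
    return idctn(c, norm="ortho")

rng = np.random.default_rng(8)
img = rng.normal(size=(8, 8)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish test block
rec = code_block(img)
print(np.abs(img - rec).max())  # reconstruction error after keeping 8 of 64 coefficients
```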
QSAR modelling using combined simple competitive learning networks and RBF neural networks.
Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E
2018-04-01
The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
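A minimal sketch of the two-phase idea: a clustering stage picks the RBF centres, then a linear output layer is fit on the RBF features. KMeans stands in for the simple competitive learning network (sklearn has no SCL implementation), and all data and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # molecular descriptors (toy data)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)  # "biological activity"

# Phase 1: choose RBF centres (KMeans as a stand-in for simple competitive learning).
centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_

# Phase 2: Gaussian RBF feature map plus a linear output layer.
def rbf_features(X, centres, gamma=0.5):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

model = Ridge(alpha=1e-3).fit(rbf_features(X, centres), y)
print(model.predict(rbf_features(X[:3], centres)))
```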
Simple animal models for amyotrophic lateral sclerosis drug discovery.
Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre
2016-08-01
Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2015-01-01
A simple approach for computing the acceleration and velocity of a structure from measured strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, acceleration and velocity can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the velocity computations. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy; the phase-shift handicap of the backward difference equation is thus successfully overcome.
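A minimal numerical sketch of the two ingredients named here: the simple-harmonic-motion assumption a(t) = -(2πf)²x(t) for acceleration, and a central difference for velocity. The two-step strain-to-deflection theory itself is omitted; the deflection signal below is synthetic:

```python
import numpy as np

dt = 0.001                                # sample interval, s
t = np.arange(0.0, 1.0, dt)
f = 5.0                                   # identified modal frequency, Hz
x = 0.01 * np.sin(2 * np.pi * f * t)      # deflection recovered from strain (toy)

# Simple-harmonic-motion assumption: a(t) = -(2*pi*f)^2 * x(t)
accel = -(2 * np.pi * f) ** 2 * x

# Central difference for velocity: v_k ~ (x_{k+1} - x_{k-1}) / (2*dt)
vel = np.gradient(x, dt)

# Compare against the analytic velocity of the synthetic signal.
print(np.max(np.abs(vel - 0.01 * 2 * np.pi * f * np.cos(2 * np.pi * f * t))))
```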
A Model and Simple Iterative Algorithm for Redundancy Analysis.
ERIC Educational Resources Information Center
Fornell, Claes; And Others
1988-01-01
This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)
A simple method for EEG guided transcranial electrical stimulation without models
NASA Astrophysics Data System (ADS)
Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom
2016-06-01
Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two ad hoc techniques, (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Significance. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
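A sketch of the simplest of these montage rules, voltage-to-current: stimulation currents are made proportional to the mean-referenced EEG topography, so the injected current sums to zero as physics requires. The total-current normalization convention below is our assumption, not a detail given in the abstract:

```python
import numpy as np

def voltage_to_current(v_eeg, total_ma=2.0):
    """Map an EEG scalp topography to tES electrode currents.

    Currents are proportional to the mean-referenced voltages (so they sum
    to zero across electrodes) and scaled so the total anodal current equals
    total_ma. The scaling convention is an illustrative assumption.
    """
    i = v_eeg - v_eeg.mean()                 # zero net injected current
    i *= total_ma / np.abs(i[i > 0]).sum()   # normalize total anodal current
    return i

topo = np.array([0.8, 0.3, -0.1, -0.4, -0.6])  # toy 5-electrode EEG topography (uV)
print(voltage_to_current(topo))                # mA per electrode, sums to ~0
```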
Calibration of Response Data Using MIRT Models with Simple and Mixed Structures
ERIC Educational Resources Information Center
Zhang, Jinming
2012-01-01
It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…
Simple Heuristic Approach to Introduction of the Black-Scholes Model
ERIC Educational Resources Information Center
Yalamova, Rossitsa
2010-01-01
A heuristic approach to explaining of the Black-Scholes option pricing model in undergraduate classes is described. The approach draws upon the method of protocol analysis to encourage students to "think aloud" so that their mental models can be surfaced. It also relies upon extensive visualizations to communicate relationships that are…
NASA Astrophysics Data System (ADS)
Vespe, Francesco; Benedetto, Catia
2013-04-01
The huge amount of GPS Radio Occultation (RO) observations currently available thanks to space missions like COSMIC, CHAMP, GRACE, TERRASAR-X, etc., has greatly encouraged the search for new algorithms to extract humidity, temperature and pressure profiles of the atmosphere ever more precisely. For humidity profiles, two different approaches have been widely tested and applied in recent years: the "Simple" and the 1DVAR methods. The Simple methods essentially determine dry refractivity profiles from temperature analysis profiles and the hydrostatic equation. The dry refractivity is then subtracted from the RO refractivity to obtain the wet component, from which humidity is derived. The 1DVAR approach combines RO observations with profiles given by background models, with both terms weighted by the inverse of the covariance matrix. The advantage of Simple methods is that they are not affected by bias due to the background models. We have proposed in the past the BPV approach to retrieve humidity, which can be classified among the Simple methods. The BPV approach works with dry atmospheric CIRA-Q models, which depend on latitude, DoY and height. The dry CIRA-Q refractivity profile is selected by estimating the involved parameters in a non-linear least-squares fashion, fitting the RO bending angles observed through the stratosphere. The BPV approach, like all other Simple methods, has the drawback of unphysically producing negative "humidity". We therefore propose to apply a modulated weighting of the fit residuals to minimize this effect. After a proper tuning of the approach, we plan to present the results of the validation.
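The decomposition that all "Simple" methods share can be written explicitly with the standard Smith-Weintraub refractivity constants (standard literature values; the abstract itself does not quote them):

```latex
% Refractivity split used by "Simple" retrieval methods:
N_{\mathrm{wet}} = N_{\mathrm{RO}} - N_{\mathrm{dry}},
\qquad
N \approx \underbrace{77.6\,\frac{p}{T}}_{\text{dry term}}
  + \underbrace{3.73\times 10^{5}\,\frac{e}{T^{2}}}_{\text{wet term}}
% p: total pressure (hPa), e: water vapour partial pressure (hPa), T: temperature (K)
```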
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely.
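A minimal sketch of the comparison in statsmodels, with a random intercept per animal to capture the intra-class correlation; the data and column names are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Toy Sholl data: several neurons sampled from each of 8 animals.
animal = np.repeat(np.arange(8), 12)
radius = np.tile(np.arange(10, 130, 10), 8)
animal_effect = rng.normal(0, 3, 8)[animal]   # source of intra-class correlation
crossings = 20 - 0.1 * radius + animal_effect + rng.normal(0, 2, animal.size)
df = pd.DataFrame({"crossings": crossings, "radius": radius, "animal": animal})

# Simple linear model (ignores clustering) vs mixed effects model with a
# random intercept per animal; the mixed model gives honest standard errors.
ols = smf.ols("crossings ~ radius", df).fit()
mixed = smf.mixedlm("crossings ~ radius", df, groups=df["animal"]).fit()
print(ols.bse["radius"], mixed.bse["radius"])
```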
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Design and analysis of simple choice surveys for natural resource management
Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.
2010-01-01
We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.
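A hedged sketch of the underlying conditional (multinomial) logit: the probability of choosing option j from a presented set is the softmax of the option utilities Xβ. The nested logit extension is omitted, and the data are simulated:

```python
import numpy as np
from scipy.optimize import minimize

# Each row: attribute vectors for the 3 options shown; chosen[i] in {0, 1, 2}.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3, 2))      # 200 choice sets, 3 options, 2 attributes
true_beta = np.array([1.0, -0.5])
util = X @ true_beta
chosen = np.array([rng.choice(3, p=np.exp(u) / np.exp(u).sum()) for u in util])

def neg_loglik(beta):
    u = X @ beta
    u -= u.max(axis=1, keepdims=True)  # numerical stability
    logp = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(chosen)), chosen].sum()

fit = minimize(neg_loglik, np.zeros(2), method="BFGS")
print(fit.x)   # should be close to true_beta
```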
Flood Risk and Asset Management
2011-06-15
Model cascade could include HEC-RAS, HR BREACH and Dynamic RFSM. Action: HRW to consider model coupling and advise DM. It was felt useful to ... simple loss of life approach. WL can provide input and advise on USACE LIFESIM approaches. To enable comparison with HEC FRM approaches, it was ...
ERIC Educational Resources Information Center
Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.
2017-01-01
A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…
An analytical approach for predicting pilot induced oscillations
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Laminar flamelet modeling of turbulent diffusion flames
NASA Technical Reports Server (NTRS)
Mell, W. E.; Kosaly, G.; Planche, O.; Poinsot, T.; Ferziger, J. H.
1990-01-01
In modeling turbulent combustion, decoupling the chemistry from the turbulence is of great practical significance. In cases in which the equilibrium chemistry model breaks down, laminar flamelet modeling (LFM) is a promising approach to decoupling. Here, the validity of this approach is investigated using direct numerical simulation of a simple chemical reaction in two-dimensional turbulence.
Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...
On determinant representations of scalar products and form factors in the SoV approach: the XXX case
NASA Astrophysics Data System (ADS)
Kitanine, N.; Maillet, J. M.; Niccoli, G.; Terras, V.
2016-03-01
In the present article we study the form factors of quantum integrable lattice models solvable by the separation of variables (SoV) method. It was recently shown that these models admit universal determinant representations for the scalar products of the so-called separate states (a class which includes, in particular, all the eigenstates of the transfer matrix). These results make it possible to obtain simple expressions for the matrix elements of local operators (form factors). However, these representations have until now been obtained only for the completely inhomogeneous versions of the lattice models considered. In this article we give a simple algebraic procedure to rewrite the scalar products (and hence the form factors) for the SoV-related models as Izergin or Slavnov type determinants. This new form leads to simple expressions for the form factors in the homogeneous and thermodynamic limits. To make the presentation of our method clear, we have chosen to explain it first for the simple case of the XXX Heisenberg chain with anti-periodic boundary conditions. We would nevertheless like to stress that the approach presented in this article applies as well to a wide range of models solved in the SoV framework.
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Action Centered Contextual Bandits.
Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan
2017-12-01
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
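The action-centering idea can be made concrete: with a binary action a ~ Bernoulli(π) and reward r = f(s) + a·θᵀx + noise, one has E[(a − π) r | x] = π(1 − π) θᵀx, so the treatment effect θ is estimable without ever modeling the (possibly complex) baseline f. A minimal sketch with simulated data (not the paper's algorithm, just the identity it rests on):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5000, 3
x = rng.normal(size=(n, d))              # treatment-effect features
theta = np.array([0.5, -0.2, 0.1])
baseline = np.sin(rng.normal(size=n))    # arbitrary, unmodeled baseline f(s_t)
pi = 0.5                                 # randomization probability P(a = 1)
a = rng.random(n) < pi
r = baseline + a * (x @ theta) + 0.1 * rng.normal(size=n)

# Action-centering: E[(a - pi) * r | x] = pi*(1 - pi) * theta^T x, so regressing
# the pseudo-reward (a - pi)*r / (pi*(1 - pi)) on x recovers theta, baseline-free.
pseudo = (a - pi) * r / (pi * (1 - pi))
theta_hat = np.linalg.lstsq(x, pseudo, rcond=None)[0]
print(theta_hat)   # close to [0.5, -0.2, 0.1]
```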
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches and continuum damage mechanics models), were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. The simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in the estimation of fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation, but requires more experimental data for calibration than the simple approximations. As a result of the different theories underlying the analyzed methods, the approaches have different strengths and weaknesses. However, the group of parametric equations categorized as simple approximations was found to be the easiest for practical use, their applicability having already been verified for a broad range of materials.
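All three method families ultimately target the parameters of the Coffin-Manson-Basquin strain-life relation. One representative "simple approximation" is the uniform material law; the steel constants quoted below are standard literature values, not numbers taken from this study:

```latex
% Coffin-Manson-Basquin strain-life relation:
\frac{\Delta\varepsilon}{2} = \frac{\sigma_f'}{E}\,(2N_f)^{b} + \varepsilon_f'\,(2N_f)^{c}
% A typical "simple approximation" (uniform material law for steels) fills the
% fatigue constants from monotonic properties alone:
% \sigma_f' \approx 1.50\,R_m, \quad b \approx -0.087, \quad
% \varepsilon_f' \approx 0.59, \quad c \approx -0.58
```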
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression.
Inferring Soil Moisture Memory from Streamflow Observations Using a Simple Water Balance Model
NASA Technical Reports Server (NTRS)
Orth, Rene; Koster, Randal Dean; Seneviratne, Sonia I.
2013-01-01
Soil moisture is known for its integrative behavior and resulting memory characteristics. Soil moisture anomalies can persist for weeks or even months into the future, making initial soil moisture a potentially important contributor to skill in weather forecasting. A major difficulty when investigating soil moisture and its memory using observations is the sparse availability of long-term measurements and their limited spatial representativeness. In contrast, there is an abundance of long-term streamflow measurements for catchments of various sizes across the world. We investigate in this study whether such streamflow measurements can be used to infer and characterize soil moisture memory in respective catchments. Our approach uses a simple water balance model in which evapotranspiration and runoff ratios are expressed as simple functions of soil moisture; optimized functions for the model are determined using streamflow observations, and the optimized model in turn provides information on soil moisture memory on the catchment scale. The validity of the approach is demonstrated with data from three heavily monitored catchments. The approach is then applied to streamflow data in several small catchments across Switzerland to obtain a spatially distributed description of soil moisture memory and to show how memory varies, for example, with altitude and topography.
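A minimal bucket-model sketch of the approach: evapotranspiration and runoff ratios are expressed as simple functions of relative soil moisture, and the parameters would be calibrated against observed streamflow. The functional forms and parameter values below are illustrative assumptions, not the paper's calibrated functions:

```python
import numpy as np

def run_bucket(precip, pet, w_max=400.0, alpha=1.0, gamma=2.0, w0=200.0):
    """Daily water balance dw = P - ET - Q, with the ET ratio and runoff ratio
    given as simple power functions of relative soil moisture w/w_max
    (exponents illustrative). Returns soil moisture and streamflow series;
    calibrating (alpha, gamma) to streamflow is what yields memory estimates."""
    w, ws, qs = w0, [], []
    for p, e in zip(precip, pet):
        frac = w / w_max
        et = e * frac ** alpha      # evapotranspiration
        q = p * frac ** gamma       # runoff
        w = float(np.clip(w + p - et - q, 0.0, w_max))
        ws.append(w); qs.append(q)
    return np.array(ws), np.array(qs)

rng = np.random.default_rng(4)
precip = rng.gamma(0.5, 6.0, size=365)                          # mm/day, toy forcing
pet = 3.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 365))        # mm/day
soil, flow = run_bucket(precip, pet)
print(np.corrcoef(soil[:-30], soil[30:])[0, 1])  # 30-day soil moisture memory
```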
Gironés, Xavier; Carbó-Dorca, Ramon; Ponec, Robert
2003-01-01
A new approach allowing the theoretical modeling of the electronic substituent effect is proposed. The approach is based on the use of fragment Quantum Self-Similarity Measures (MQS-SM) calculated from domain-averaged Fermi holes as new theoretical descriptors, allowing the replacement of Hammett sigma constants in QSAR models. To demonstrate the applicability of this new approach, its formalism was applied to the description of the substituent effect on the dissociation of a broad series of meta- and para-substituted benzoic acids. The accuracy and predictive power of the new approach were tested by comparison with a recent exhaustive study by Sullivan et al. It was shown that the accuracy and predictive power of both procedures are comparable but, in contrast to the five-parameter correlation equation necessary to describe the data in that study, our approach is simpler: only a one-parameter correlation equation is required.
NASA Technical Reports Server (NTRS)
Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan
2012-01-01
The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example, the -6dB contours of measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined where one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6dB rule for delamination sizing.
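The -6dB rule mentioned above is straightforward to state in code: threshold the pulse/echo amplitude image at half its peak and measure the surviving region. A toy sketch on a synthetic C-scan (pixel pitch and blob shape are illustrative assumptions):

```python
import numpy as np

def six_db_size(cscan, dx=0.1):
    """Estimate delamination extent with the -6 dB rule: keep pixels whose
    amplitude exceeds half of (i.e., is within -6 dB of) the peak response,
    then report bounding-box dimensions. dx is the scan pixel pitch in mm."""
    mask = cscan >= 0.5 * cscan.max()
    rows, cols = np.nonzero(mask)
    return ((rows.max() - rows.min() + 1) * dx,
            (cols.max() - cols.min() + 1) * dx)

# Toy C-scan: a Gaussian blob standing in for a delamination response.
y, x = np.mgrid[0:100, 0:100]
cscan = np.exp(-(((x - 50) / 12.0) ** 2 + ((y - 50) / 8.0) ** 2))
print(six_db_size(cscan))  # (height_mm, width_mm) of the -6 dB region
```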
Calculation of density of states for modeling photoemission using method of moments
NASA Astrophysics Data System (ADS)
Finkenstadt, Daniel; Lambrakos, Samuel G.; Jensen, Kevin L.; Shabaev, Andrew; Moody, Nathan A.
2017-09-01
Modeling photoemission using the Moments Approach (akin to Spicer's "Three Step Model") is often presumed to follow simple models for the prediction of two critical properties of photocathodes: the yield or "Quantum Efficiency" (QE), and the intrinsic spreading of the beam or "emittance" ε_n,rms. The simple models, however, tend to obscure properties of electrons in materials, the understanding of which is necessary for a proper prediction of a semiconductor or metal's QE and ε_n,rms. This structure is characterized by localized resonance features as well as a universal trend at high energy. Presented in this study is a prototype analysis concerning the density of states (DOS) factor D(E) for copper in bulk, to replace the simple three-dimensional form D(E) = (m/π²ħ³)√(2mE) currently used in the Moments Approach. This analysis demonstrates that excited-state spectra of atoms, molecules and solids based on density-functional theory can be adapted as useful information for practical applications, as well as providing theoretical interpretation of density-of-states structure, e.g., qualitatively good descriptions of optical transitions in matter, in addition to DFT's utility in providing the optical constants and material parameters also required in the Moments Approach.
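For reference, the simple three-dimensional free-electron form quoted above (the expression the study proposes to replace with a DFT-derived DOS) evaluates as follows:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def dos_free_electron(energy_ev):
    """Free-electron density of states D(E) = (m / pi^2 hbar^3) * sqrt(2 m E),
    returned in states per joule per cubic metre."""
    e = np.asarray(energy_ev) * EV
    return (M_E / (np.pi**2 * HBAR**3)) * np.sqrt(2.0 * M_E * e)

print(dos_free_electron(7.0))  # ~1e47 J^-1 m^-3 near a metal's Fermi energy
```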
ERIC Educational Resources Information Center
Madu, B. C.
2012-01-01
The study explored the efficacy of a four-step (4-E) learning cycle approach on students' understanding of concepts related to Simple Harmonic Motion (SHM). 124 students (63 in the experimental group and 61 in the control group) participated in the study. The students' views and ideas in the Simple Harmonic Motion achievement test were analyzed qualitatively. The…
Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.
Camacho, Oscar; De la Cruz, Francisco
2004-04-01
An approach to control integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations as a function of the characteristic parameters of the model. For implementation of our proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with those of the Matausek-Micić scheme for linear systems using simulations.
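A minimal discrete-time sketch of the Smith predictor structure for an integrating plant with deadtime. A PI law stands in for the paper's sliding mode controller, and all gains and parameters are illustrative:

```python
# Smith predictor for an integrating plant with deadtime: y' = K * u(t - L).
# The controller acts on a delay-free prediction of the output; a PI law is
# used here as a stand-in for the paper's sliding mode controller.
dt, K, L = 0.1, 0.5, 2.0
n_delay = int(L / dt)
kp, ki = 1.2, 0.1
setpoint = 1.0

y = ym = integ = 0.0
u_hist = [0.0] * n_delay    # control inputs still "in flight" through the delay
ym_hist = [0.0] * n_delay   # delayed copy of the internal model output
for _ in range(600):
    # Predictor output: delay-free model + (plant - delayed model) correction.
    y_pred = ym + (y - ym_hist[0])
    e = setpoint - y_pred
    integ += e * dt
    u = kp * e + ki * integ

    ym += K * u * dt                 # internal model advances without delay
    ym_hist.append(ym); ym_hist.pop(0)
    u_hist.append(u)
    y += K * u_hist.pop(0) * dt      # true plant sees the delayed input

print(round(y, 3))   # settles near the setpoint despite the 2 s deadtime
```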
Configurational coupled cluster approach with applications to magnetic model systems
NASA Astrophysics Data System (ADS)
Wu, Siyuan; Nooijen, Marcel
2018-05-01
A general exponential, coupled cluster like, approach is discussed to extract an effective Hamiltonian in configurational space, as a sum of 1-body, 2-body up to n-body operators. The simplest two-body approach is illustrated by calculations on simple magnetic model systems. A key feature of the approach is that equations up to a certain rank do not depend on higher body cluster operators.
Modeling and predicting historical volatility in exchange rate markets
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is important in several business risk management tasks, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast the US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, and also the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
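A hedged sketch of the forecasting setup: lagged volatilities feed a small neural network. The abstract does not list the exact technical indicators used, so plain lags stand in for them, and all data are simulated:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# Toy daily volatility series with persistence (GARCH-like clustering).
n, vol = 1500, [0.01]
for _ in range(n - 1):
    vol.append(0.002 + 0.8 * vol[-1] + 0.001 * abs(rng.normal()))
vol = np.array(vol)

lags = 5
X = np.column_stack([vol[i:n - lags + i] for i in range(lags)])  # lagged inputs
y = vol[lags:]
split = int(0.8 * len(y))

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X[:split], y[:split])
pred = ann.predict(X[split:])
print(np.mean(np.abs(pred - y[split:])))   # out-of-sample mean absolute error
```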
As part of a broader exploratory effort to develop ecological risk assessment approaches to estimate potential chemical effects on non-target populations, we describe an approach for developing simple population models to estimate the extent to which acute effects on individual...
A powerful and flexible approach to the analysis of RNA sequence count data.
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A
2011-10-01
A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
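A minimal sketch of the first ingredient, a beta-binomial likelihood with a per-group mean and a shared dispersion, written directly with scipy special functions. The mean/dispersion parameterization is our assumption for illustration, not BBSeq's exact internals:

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def betabinom_logpmf(k, n, a, b):
    """log P(K = k) for the beta-binomial: C(n,k) * B(k+a, n-k+b) / B(a, b)."""
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
            + betaln(k + a, n - k + b) - betaln(a, b))

# One gene: counts k out of library sizes n, in two groups (toy data).
k = np.array([12, 15, 9, 30, 34, 28])
n = np.full(6, 1000)
group = np.array([0, 0, 0, 1, 1, 1])

def nll(params):  # per-group mean proportions p0, p1 and a shared dispersion s
    p = np.clip(np.where(group == 0, params[0], params[1]), 1e-6, 1 - 1e-6)
    s = np.exp(params[2])
    return -betabinom_logpmf(k, n, p * s, (1 - p) * s).sum()

fit = minimize(nll, [0.01, 0.03, np.log(50.0)], method="Nelder-Mead")
print(fit.x[:2])   # fitted proportions for the two groups
```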
Agent Model Development for Assessing Climate-Induced Geopolitical Instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boslough, Mark B.; Backus, George A.
2005-12-01
We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.
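A minimal Schelling-style segregation sketch of the kind of simple cellular agent model the report takes as its starting point (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
size, threshold = 50, 0.4        # agents want >= 40% same-type neighbours
grid = rng.choice([0, 1, 2], size=(size, size), p=[0.10, 0.45, 0.45])  # 0 = empty

def is_unhappy(g, i, j):
    nb = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]   # Moore neighbourhood
    same = int((nb == g[i, j]).sum()) - 1
    occupied = int((nb > 0).sum()) - 1
    return occupied > 0 and same / occupied < threshold

def mean_similarity(g):
    vals = []
    for i, j in np.argwhere(g > 0):
        nb = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        occ = int((nb > 0).sum()) - 1
        if occ:
            vals.append((int((nb == g[i, j]).sum()) - 1) / occ)
    return float(np.mean(vals))

print(f"before: {mean_similarity(grid):.2f}")
for _ in range(20):              # each sweep, unhappy agents move to empty cells
    empties = list(map(tuple, np.argwhere(grid == 0)))
    for i, j in rng.permutation(np.argwhere(grid > 0)):
        if is_unhappy(grid, i, j) and empties:
            idx = rng.integers(len(empties))
            ei, ej = empties[idx]
            grid[ei, ej], grid[i, j] = grid[i, j], 0
            empties[idx] = (i, j)
print(f"after:  {mean_similarity(grid):.2f}")  # segregation emerges from mild preferences
```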
Stochastic modeling of consumer preferences for health care institutions.
Malhotra, N K
1983-01-01
This paper proposes a stochastic procedure for modeling consumer preferences via LOGIT analysis. First, a simple, non-technical exposition of the use of a stochastic approach in health care marketing is presented. Second, a study illustrating the application of the LOGIT model in assessing consumer preferences for hospitals is given. The paper concludes with several implications of the proposed approach.
Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models
NASA Astrophysics Data System (ADS)
Endah Saraswati, Teguh; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri
2017-01-01
Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory.
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity, an integral property of a liquid, is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists of a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
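The core computation behind such tracking microviscometry can be sketched as follows, assuming 2-D trajectories and the Stokes-Einstein relation; the particle radius, frame rate, and synthetic trajectory are placeholders, not the paper's setup.

```python
# Sketch of the Newtonian-model analysis behind particle-tracking microviscometry:
# estimate the diffusion coefficient from the mean squared displacement of 2-D
# trajectories, then invert Stokes-Einstein for viscosity.
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                 # temperature, K
r = 0.5e-6                 # tracer particle radius, m
dt = 0.02                  # frame interval, s

# synthetic 2-D Brownian trajectory for a water-like viscosity (~1 mPa.s)
eta_true = 1.0e-3
D_true = kB * T / (6 * np.pi * eta_true * r)
steps = np.random.normal(0, np.sqrt(2 * D_true * dt), size=(5000, 2))
xy = np.cumsum(steps, axis=0)

# MSD at lag of one frame; for 2-D diffusion MSD(tau) = 4*D*tau
disp = np.diff(xy, axis=0)
msd = (disp ** 2).sum(axis=1).mean()
D_est = msd / (4 * dt)
eta_est = kB * T / (6 * np.pi * D_est * r)   # Stokes-Einstein inverted
print(f"estimated viscosity: {eta_est * 1e3:.2f} mPa.s")
```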
Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems
2008-08-25
The approach relies primarily on the modeling of statistical features, such as the frequency of events, the duration of events, and the co-occurrence of multiple events. Once such behaviors are identified, features representing them can be extracted while auditing the user's behavior, guided by a taxonomy of Linux and Unix commands. The best detection performance is achieved when the features are extracted just from simple commands, as for a one-class SVM using frequency-based features of simple commands (reported in terms of hit rate and false positive rate).
Single-particle dynamics of the Anderson model: a local moment approach
NASA Astrophysics Data System (ADS)
Glossop, Matthew T.; Logan, David E.
2002-07-01
A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.
Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A
2017-09-15
In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing the HYDRUS-1D software. Such an approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and the ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by Nash-Sutcliffe Efficiency indices that were generally greater than 0.70. Finally, it was shown how a physically based and a simple conceptual model can be used jointly, allowing the simple conceptual model to be applied to a wider set of conditions than the available experimental data in order to support green roof design.
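For a flavor of what a simple conceptual model of this kind can look like, the sketch below implements a single linear reservoir with a storage threshold and scores it with the Nash-Sutcliffe Efficiency; the structure and parameter values are assumptions, not the study's calibrated model.

```python
# Minimal sketch of a conceptual green roof runoff model: a single linear
# reservoir with a substrate storage threshold. Parameters are illustrative.
import numpy as np

def linear_reservoir(rain, k=0.3, smax=20.0, dt=1.0):
    """rain: mm per step; k: outflow coefficient (1/step); smax: substrate storage (mm)."""
    s, q = 0.0, []
    for p in rain:
        s += p * dt
        excess = max(s - smax, 0.0)      # only storage above the threshold drains
        out = k * excess * dt
        s -= out
        q.append(out)
    return np.array(q)

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = np.zeros(100); rain[10:16] = [2, 8, 15, 10, 4, 1]    # synthetic storm (mm)
q_sim = linear_reservoir(rain)
q_obs = q_sim + np.random.normal(0, 0.05, size=q_sim.size)  # stand-in "observations"
print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.3f}")           # study reports NSE > 0.70
```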
Evaporation estimation of rift valley lakes: comparison of models.
Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe
2009-01-01
Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. Remote sensing approaches can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. The applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied in the Rift Valley region of Ethiopia. Lake evaporation estimates from the Simple Method and from remote sensing were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the model outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.
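A minimal sketch of the radiation-based Simple Method follows, using the commonly cited coefficient K = 0.53; that value and the units are assumptions carried over from the general literature, not from this study.

```python
# Sketch of the Abtew "Simple Method" for open-water evaporation, which scales
# solar radiation by a single coefficient: E = K * Rs / lambda.
LAMBDA = 2.45   # latent heat of vaporization, MJ per kg (~ per mm of water)

def simple_method_evaporation(rs_mj_m2_day, k=0.53):
    """rs: incoming solar radiation (MJ m-2 day-1) -> evaporation (mm day-1)."""
    return k * rs_mj_m2_day / LAMBDA

print(simple_method_evaporation(22.0))   # a clear-sky tropical day: ~4.8 mm/day
```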
Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics
NASA Astrophysics Data System (ADS)
García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team
2016-06-01
We propose a simple approach to homogeneously estimate kinematic parameters for a broad variety of galaxies (elliptical, spiral, irregular, or interacting systems). This methodology avoids the use of any kinematical model or any assumption on internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, which are directly measured from the two-dimensional distributions of radial velocities. We test our analysis tools using data from the CALIFA Survey.
NASA Technical Reports Server (NTRS)
Holmes, Thomas; Owe, Manfred; deJeu, Richard
2007-01-01
Two data sets of experimental field observations with a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux cannot be used to model the extreme variations in temperature that occur near the surface. This paper proposes a simple approach for modeling the surface soil temperature profile from a single depth observation. This approach consists of two parts: 1) modeling an instantaneous ground heat flux profile based on net radiation and the ground heat flux at 5 cm depth; and 2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near-surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.
Models for Models: An Introduction to Polymer Models Employing Simple Analogies
NASA Astrophysics Data System (ADS)
Tarazona, M. Pilar; Saiz, Enrique
1998-11-01
An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
Modeling Translation in Protein Synthesis with TASEP: A Tutorial and Recent Developments
NASA Astrophysics Data System (ADS)
Zia, R. K. P.; Dong, J. J.; Schmittmann, B.
2011-07-01
The phenomenon of protein synthesis has been modeled in terms of totally asymmetric simple exclusion processes (TASEP) since 1968. In this article, we provide a tutorial of the biological and mathematical aspects of this approach. We also summarize several new results, concerned with limited resources in the cell and simple estimates for the current (protein production rate) of a TASEP with inhomogeneous hopping rates, reflecting the characteristics of real genes.
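As a concrete illustration of the underlying process, the sketch below simulates an open-boundary TASEP with random-sequential updates and measures the steady-state current; the lattice size, rates, and run length are arbitrary choices.

```python
# Sketch of a TASEP with open boundaries: particles enter at rate alpha, hop
# right at rate 1 onto empty sites, and exit at rate beta. The long-run current
# approximates the protein production rate in the translation analogy.
import numpy as np

def tasep_current(L=100, alpha=0.3, beta=0.7, sweeps=4000, seed=0):
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=int)
    hops = 0
    for _ in range(sweeps * L):                  # random-sequential updates
        i = rng.integers(-1, L)                  # -1 plays the role of the entry step
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1                   # injection at the left boundary
        elif i == L - 1:
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0                  # extraction at the right boundary
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            lattice[i], lattice[i + 1] = 0, 1    # bulk hop to an empty neighbour
            hops += 1
    return hops / (sweeps * L)                   # bulk current per attempted update

print(f"current: {tasep_current():.3f}")   # low-density phase: J = alpha*(1-alpha) = 0.21
```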
USDA-ARS?s Scientific Manuscript database
Given a time series of potential evapotranspiration and rainfall data, there are at least two approaches for estimating vertical percolation rates. One approach involves solving Richards' equation (RE) with a plant uptake model. An alternative approach involves applying a simple soil moisture accoun...
Manipulators with flexible links: A simple model and experiments
NASA Technical Reports Server (NTRS)
Shimoyama, Isao; Oppenheim, Irving J.
1989-01-01
A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real time computation as might be applied in model based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and demonstrate its effectiveness using publicly available benchmark data sets.
Comparison of different approaches of modelling in a masonry building
NASA Astrophysics Data System (ADS)
Saba, M.; Meloni, D.
2017-12-01
The present work aims to model a simple masonry building using two different modelling methods, in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri of S.T.A. Data S.r.l. and Sismicad12 of Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements (FME) method, which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.
Simple, Flexible, Trigonometric Taper Equations
Charles E. Thomas; Bernard R. Parresol
1991-01-01
There have been numerous approaches to modeling stem form in recent decades. The majority have concentrated on the simpler coniferous bole form and have become increasingly complex mathematical expressions. Use of trigonometric equations provides a simple expression of taper that is flexible enough to fit both coniferous and hard-wood bole forms. As an illustration, we...
Speededness and Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.; Xiong, Xinhui
2013-01-01
Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterize the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealized models of pore geometry. Application to an intensively characterized site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geological barrier performance. Comparison with seven other less intensively characterized crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterized by state-of-the-art site investigation methods prior to repository construction. A simple evaluation provides a robust, practical approach for inclusion in performance assessment.
Prediction of the dollar to the ruble rate. A system-theoretic approach
NASA Astrophysics Data System (ADS)
Borodachev, Sergey M.
2017-07-01
We propose a simple state-space model of dollar rate formation based on changes in oil prices and some mechanisms of money transfer between the monetary and stock markets. A comparison of predictions from an input-output model and from the state-space model is made. We conclude that, with proper use of statistical data (via a Kalman filter), the second approach provides more adequate predictions of the dollar rate.
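The flavor of the filtering step can be sketched with a scalar Kalman filter in which oil-price shocks drive the latent rate; the coefficients, noise levels, and synthetic data below are assumptions, not the paper's estimated model.

```python
# Minimal sketch of the state-space idea: a scalar Kalman filter in which the
# latent "fair" dollar rate drifts with oil-price changes and is observed noisily.
import numpy as np

def kalman_predict(rate_obs, oil, b=-0.05, q=1e-4, r_noise=4e-4):
    """rate_obs: observed rates; oil: oil-price changes; returns one-step predictions."""
    x, P = rate_obs[0], 1.0                 # state estimate and its variance
    preds = []
    for z, d_oil in zip(rate_obs[1:], oil[1:]):
        x_pred = x + b * d_oil              # predict: the rate responds to the oil shock
        P_pred = P + q
        preds.append(x_pred)
        K = P_pred / (P_pred + r_noise)     # Kalman gain; update with the new observation
        x = x_pred + K * (z - x_pred)
        P = (1.0 - K) * P_pred
    return np.array(preds)

rng = np.random.default_rng(1)
oil = rng.normal(0, 1, 300)                               # synthetic oil-price shocks
true = 60 + np.cumsum(-0.05 * oil + rng.normal(0, 0.01, 300))
obs = true + rng.normal(0, 0.02, 300)
pred = kalman_predict(obs, oil)
print(f"RMSE: {np.sqrt(np.mean((pred - obs[1:]) ** 2)):.4f}")
```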
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
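A minimal sketch of such a stochastic, discrete-time, discrete-state epidemic model with both process and observation error (independent of any particular MCMC platform) might look as follows; the rates and the reporting model are illustrative.

```python
# Sketch of a discrete-time, discrete-state stochastic SIR with process error
# (binomial transitions) and observation error (binomial underreporting).
import numpy as np

def simulate(beta=0.3, gamma=0.1, report_p=0.4, N=10000, I0=10, T=120, seed=0):
    rng = np.random.default_rng(seed)
    S, I = N - I0, I0
    cases = []
    for _ in range(T):
        p_inf = 1.0 - np.exp(-beta * I / N)            # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)               # process error: random infections
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S -= new_inf
        I += new_inf - new_rec
        cases.append(rng.binomial(new_inf, report_p))  # observation error: underreporting
    return np.array(cases)

print(simulate()[:10])   # simulated noisy case reports, the kind of data fit by MCMC
```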
A simple integrated assessment approach to global change simulation and evaluation
NASA Astrophysics Data System (ADS)
Ogutu, Keroboto; D'Andrea, Fabio; Ghil, Michael
2016-04-01
We formulate and study the Coupled Climate-Economy-Biosphere (CoCEB) model, which constitutes the basis of our idealized integrated assessment approach to simulating and evaluating global change. CoCEB is composed of a physical climate module, based on Earth's energy balance, and an economy module that uses endogenous economic growth with physical and human capital accumulation. A biosphere model is likewise under study and will be coupled to the existing two modules. We concentrate on the interactions between the two subsystems: the effect of climate on the economy, via damage functions, and the effect of the economy on climate, via a control of the greenhouse gas emissions. Simple functional forms of the relation between the two subsystems permit simple interpretations of the coupled effects. The CoCEB model is used to make hypotheses on the long-term effect of investment in emission abatement, and on the comparative efficacy of different approaches to abatement, in particular by investing in low carbon technology, in deforestation reduction or in carbon capture and storage (CCS). The CoCEB model is very flexible and transparent, and it allows one to easily formulate and compare different functional representations of climate change mitigation policies. Using different mitigation measures and their cost estimates, as found in the literature, one is able to compare these measures in a coherent way.
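For orientation, the kind of globally averaged energy-balance module that such a climate component builds on can be sketched as below; the constants and the CO2-doubling experiment are textbook-scale assumptions, not CoCEB's calibration.

```python
# Sketch of a globally averaged energy-balance climate module:
# C dT/dt = Q(1 - alpha) - eps*sigma*T^4 + CO2 forcing.
import numpy as np

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m-2 K-4
Q = 342.0               # mean incoming solar radiation, W m-2
ALPHA = 0.3             # planetary albedo
EPS = 0.61              # effective emissivity (greenhouse effect)
C = 2.08e8              # effective heat capacity, J m-2 K-1 (ocean mixed layer)

def step_temperature(T, co2_ratio, dt=3.15e7):
    """One-year Euler step; 5.35*ln(CO2/CO2_0) is the standard CO2 forcing (W m-2)."""
    forcing = 5.35 * np.log(co2_ratio)
    dTdt = (Q * (1 - ALPHA) - EPS * SIGMA * T ** 4 + forcing) / C
    return T + dTdt * dt

T = 288.0
for year in range(200):
    T = step_temperature(T, co2_ratio=2.0)   # abrupt CO2-doubling experiment
print(f"temperature after 200 years: {T:.2f} K")
```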
NASA Technical Reports Server (NTRS)
Ghil, M.
1980-01-01
A unified theoretical approach to both the four-dimensional assimilation of asynoptic data and the initialization problem is attempted. This approach relies on the derivation of certain relationships between geopotential tendencies and tendencies of the horizontal velocity field in primitive-equation models of atmospheric flow. The approach is worked out and analyzed in detail for some simple barotropic models. Certain independent results of numerical experiments for the time-continuous assimilation of real asynoptic meteorological data into a complex, baroclinic weather prediction model are discussed in the context of the present approach. Tentative inferences are drawn for practical assimilation procedures.
A simple inertial model for Neptune's zonal circulation
NASA Technical Reports Server (NTRS)
Allison, Michael; Lumetta, James T.
1990-01-01
Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L(D) of about 3000 km.
Constructing a simple parametric model of shoulder from medical images
NASA Astrophysics Data System (ADS)
Atmani, H.; Fofi, D.; Merienne, F.; Trouilloud, P.
2006-02-01
The modelling of the shoulder joint is an important step in building a computer-aided surgery system for shoulder prosthesis placement. Our approach mainly concerns the bone structures of the scapulo-humeral joint. Our goal is to develop a tool that allows the surgeon to extract morphological data from medical images in order to interpret the biomechanical behaviour of a prosthesised shoulder for preoperative and peroperative virtual surgery. To provide a light and easy-to-handle representation of the shoulder, a geometrical model composed of quadrics, planes and other simple forms is proposed.
Cellular Automata with Anticipation: Examples and Presumable Applications
NASA Astrophysics Data System (ADS)
Krushinsky, Dmitry; Makarenko, Alexander
2010-11-01
One of the most promising new methodologies for modelling is the so-called cellular automata (CA) approach. According to this paradigm, models are built from simple elements connected into regular structures with local interaction between neighbours. The patterns of connections usually have a simple geometry (lattices). As one of the classical examples of CA we mention the game 'Life' by J. Conway. This paper presents two examples of CA with the anticipation property. These examples include a modification of the game 'Life' and a cellular model of crowd movement.
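For reference, one update step of the classical game 'Life' on a toroidal lattice, the kind of rule set that anticipatory variants modify, can be written as:

```python
# One generation of Conway's 'Life' on a toroidal lattice, using only local
# neighbour counts (the basic CA building block referenced in the paper).
import numpy as np

def life_step(grid):
    """grid: 2-D array of 0/1 cells; returns the next generation."""
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    birth = (grid == 0) & (neighbours == 3)
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (birth | survive).astype(int)

rng = np.random.default_rng(0)
grid = (rng.random((32, 32)) < 0.2).astype(int)
for _ in range(10):
    grid = life_step(grid)
print(grid.sum(), "live cells after 10 generations")
```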
ERIC Educational Resources Information Center
Krajewski, Grzegorz; Theakston, Anna L.; Lieven, Elena V. M.; Tomasello, Michael
2011-01-01
The two main models of children's acquisition of inflectional morphology, the Dual-Mechanism approach and the usage-based (schema-based) approach, have both been applied mainly to languages with fairly simple morphological systems. Here we report two studies of 2-3-year-old Polish children's ability to generalise across case-inflectional endings…
Bourne, Tom; De Rijdt, Sylvie; Van Holsbeke, Caroline; Sayasneh, Ahmad; Valentin, Lil; Van Calster, Ben; Timmerman, Dirk
2015-01-01
The principal aim of the IOTA project has been to develop approaches to the evaluation of adnexal pathology using ultrasound that can be transferred to all examiners. Creating models that use simple, easily reproducible ultrasound characteristics is one approach.
Okada, Morihiro; Miller, Thomas C; Roediger, Julia; Shi, Yun-Bo; Schech, Joseph Mat
2017-09-01
Various animal models are indispensable in biomedical research. Increasing awareness and regulation have prompted the adoption of more humane approaches in the use of laboratory animals. With the development of easier and faster methodologies to generate genetically altered animals, convenient and humane methods to genotype these animals are important for research involving such animals. Here, we report skin swabbing as a simple and noninvasive method for extracting genomic DNA from mice and frogs for genotyping. We show that this method is highly reliable and suitable for both immature and adult animals. Our method allows a simpler and more humane approach for genotyping vertebrate animals.
NASA Technical Reports Server (NTRS)
Hess, R. A.; Wheat, L. W.
1975-01-01
A control theoretic model of the human pilot was used to analyze a baseline electronic cockpit display in a helicopter landing approach task. The head down display was created on a stroke written cathode ray tube and the vehicle was a UH-1H helicopter. The landing approach task consisted of maintaining prescribed groundspeed and glideslope in the presence of random vertical and horizontal turbulence. The pilot model was also used to generate and evaluate display quickening laws designed to improve pilot vehicle performance. A simple fixed base simulation provided comparative tracking data.
A VARIABLE REACTIVITY MODEL FOR ION BINDING TO ENVIRONMENTAL SORBENTS
The conceptual and mathematical basis for a new general-composite modeling approach for ion binding to environmental sorbents is presented. The work extends the Simple Metal Sorption (SiMS) model previously presented for metal and proton binding to humic substances. A surface com...
Continuum modeling of large lattice structures: Status and projections
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Mikulas, Martin M., Jr.
1988-01-01
The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.
An Activation-Based Model of Routine Sequence Errors
2015-04-01
part of the ACT-R framework (e.g., Anderson, 1983), we adopt a newer, richer notion of priming as part of our approach (Harrison & Trafton, 2010, ... 2014). Other models of routine sequence errors, such as the interactive activation network (IAN) model (Cooper & Shallice, 2006) and the simple ... error patterns that result from an interface layout shift. The ideas behind our expanded priming approach, however, could apply to IAN, which uses
A dynamical systems approach to actin-based motility in Listeria monocytogenes
NASA Astrophysics Data System (ADS)
Hotton, S.
2010-11-01
A simple kinematic model for the trajectories of Listeria monocytogenes is generalized to a dynamical system rich enough to exhibit the resonant Hopf bifurcation structure of excitable media and simple enough to be studied geometrically. It is shown how L. monocytogenes trajectories and meandering spiral waves are organized by the same type of attracting set.
The IDEA model: A single equation approach to the Ebola forecasting challenge.
Tuite, Ashleigh R; Fisman, David N
2018-03-01
Mathematical modeling is increasingly accepted as a tool that can inform disease control policy in the face of emerging infectious diseases, such as the 2014-2015 West African Ebola epidemic, but little is known about the relative performance of alternative forecasting approaches. The RAPIDD Ebola Forecasting Challenge (REFC) tested the ability of eight mathematical models to generate useful forecasts in the face of simulated Ebola outbreaks. We used a simple, phenomenological single-equation model (the "IDEA" model), which relies only on case counts, in the REFC. Model fits were performed using a maximum likelihood approach. We found that the model performed reasonably well relative to other more complex approaches, with performance metrics ranked on average 4th or 5th among participating models. IDEA appeared better suited to long-term than short-term forecasts, and could be fit using nothing but reported case counts. Several limitations were identified, including difficulty in identifying the epidemic peak (even retrospectively), unrealistically precise confidence intervals, and difficulty interpolating daily case counts when using a model scaled to epidemic generation time. More realistic confidence intervals were generated when case counts were assumed to follow a negative binomial, rather than Poisson, distribution. Nonetheless, IDEA represents a simple phenomenological model, easily implemented in widely available software packages, that could be used by frontline public health personnel to generate forecasts with accuracy approximating that achieved using more complex methodologies.
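A sketch of an IDEA-style fit is given below, assuming the published two-parameter form I(t) = (R0/(1+d)^t)^t with t in epidemic generations and a Poisson likelihood; the data and starting values are synthetic.

```python
# Sketch of a two-parameter IDEA-style fit to case counts by Poisson maximum
# likelihood; scaling of time to epidemic generations is assumed.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def idea_incidence(t, R0, d):
    return (R0 / (1.0 + d) ** t) ** t

def neg_log_lik(params, t, counts):
    R0, d = params
    if R0 <= 1.0 or d <= 0:          # keep the optimizer in the growing-epidemic regime
        return np.inf
    mu = idea_incidence(t, R0, d)
    return -(counts * np.log(mu) - mu - gammaln(counts + 1)).sum()   # Poisson NLL

t = np.arange(1, 15, dtype=float)                        # generation number
true_counts = idea_incidence(t, R0=2.0, d=0.05)
counts = np.random.poisson(true_counts)                  # synthetic reported cases
fit = minimize(neg_log_lik, x0=[1.5, 0.02], args=(t, counts), method="Nelder-Mead")
R0_hat, d_hat = fit.x
print(f"R0 = {R0_hat:.2f}, d = {d_hat:.3f}")
```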
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion at two levels, morphological dispersion and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in the KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows a significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in the "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between the "dynamic" and "kinematic pathway" models than zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography; application and further development of the simple "kinematic pathway" approach for modeling them is therefore promising.
Gravitational decoupling and the Picard-Lefschetz approach
NASA Astrophysics Data System (ADS)
Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William
2018-01-01
In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as M_p → ∞. This implies that in the Euclidean framework, there is no systematic expansion in powers of G_N for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.
Simple turbulence models and their application to boundary layer separation
NASA Technical Reports Server (NTRS)
Wadcock, A. J.
1980-01-01
Measurements in the boundary layer and wake of a stalled airfoil are presented in two coordinate systems, one aligned with the airfoil chord, the other being conventional boundary layer coordinates. The NACA 4412 airfoil is studied at a single angle of attack corresponding to maximum lift, the Reynolds number based on chord being 1.5 × 10^6. Turbulent boundary layer separation occurred at the 85 percent chord position. The two-dimensionality of the flow was documented and the momentum integral equation studied to illustrate the importance of turbulence contributions as separation is approached. The assumptions of simple eddy-viscosity and mixing-length turbulence models are checked directly against experiment. Curvature effects are found to be important as separation is approached.
Prediction of aircraft handling qualities using analytical models of the human pilot
NASA Technical Reports Server (NTRS)
Hess, R. A.
1982-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
Design of Friction Stir Spot Welding Tools by Using a Novel Thermal-Mechanical Approach
Su, Zheng-Ming; Qiu, Qi-Hong; Lin, Pai-Chen
2016-01-01
A simple thermal-mechanical model for friction stir spot welding (FSSW) was developed to obtain similar weld performance for different weld tools. Use of the thermal-mechanical model and a combined approach enabled the design of weld tools for various sizes but similar qualities. Three weld tools for weld radii of 4, 5, and 6 mm were made to join 6061-T6 aluminum sheets. Performance evaluations of the three weld tools compared fracture behavior, microstructure, micro-hardness distribution, and welding temperature of welds in lap-shear specimens. For welds made by the three weld tools under identical processing conditions, failure loads were approximately proportional to tool size. Failure modes, microstructures, and micro-hardness distributions were similar. Welding temperatures correlated with frictional heat generation rate densities. Because the three weld tools sufficiently met all design objectives, the proposed approach is considered a simple and feasible guideline for preliminary tool design.
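The scaling at the heart of such tool design can be sketched from the frictional heat generation rate density; the friction coefficient, contact pressure, and spindle speed below are placeholders, not the paper's process parameters.

```python
# Sketch of the frictional heat-generation scaling: for a flat circular shoulder
# with uniform contact pressure p and angular speed omega, the rate density is
# q(r) = mu*p*omega*r, so total power grows as R^3.
import numpy as np

def friction_power(radius_m, mu=0.4, pressure_pa=50e6, rpm=2000.0):
    omega = 2 * np.pi * rpm / 60.0
    # integrate q(r) over the shoulder: Q = (2/3)*pi*mu*p*omega*R^3
    return (2.0 / 3.0) * np.pi * mu * pressure_pa * omega * radius_m ** 3

for r_mm in (4, 5, 6):                      # the study's three tool radii
    print(f"R = {r_mm} mm: {friction_power(r_mm * 1e-3) / 1e3:.2f} kW")
```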
An approach for modeling sediment budgets in supply-limited rivers
Wright, Scott A.; Topping, David J.; Rubin, David M.; Melis, Theodore S.
2010-01-01
Reliable predictions of sediment transport and river morphology in response to variations in natural and human-induced drivers are necessary for river engineering and management. Because engineering and management applications may span a wide range of space and time scales, a broad spectrum of modeling approaches has been developed, ranging from suspended-sediment "rating curves" to complex three-dimensional morphodynamic models. Suspended-sediment rating curves are an attractive approach for evaluating changes in multi-year sediment budgets resulting from changes in flow regimes because they are simple to implement, computationally efficient, and the empirical parameters can be estimated from quantities that are commonly measured in the field (i.e., suspended sediment concentration and water discharge). However, the standard rating curve approach assumes a unique suspended sediment concentration for a given water discharge. This assumption is not valid in rivers where sediment supply varies enough to cause changes in particle size or changes in areal coverage of sediment on the bed; both of these changes cause variations in suspended sediment concentration for a given water discharge. More complex numerical models of hydraulics and morphodynamics have been developed to address such physical changes of the bed. This additional complexity comes at a cost in terms of computation as well as the type and amount of data required for model setup, calibration, and testing. Moreover, application of the resulting sediment-transport models may depend on bed-sediment boundary conditions that require extensive (and expensive) observations or, alternatively, the use of an additional model (subject to its own errors) merely to predict the bed-sediment boundary conditions for use by the transport model. In this paper we present a hybrid approach that combines aspects of the rating curve method and the more complex morphodynamic models. Our primary objective was to develop an approach complex enough to capture the processes related to sediment supply limitation but simple enough to allow for rapid calculations of multi-year sediment budgets. The approach relies on empirical relations between suspended sediment concentration and discharge on a particle-size-specific basis, and also tracks and incorporates the particle size distribution of the bed sediment. We have applied this approach to the Colorado River below Glen Canyon Dam (GCD), a reach that is particularly suited to such an approach because it is substantially sediment supply limited, such that transport rates are strongly dependent on both water discharge and sediment supply. The results confirm the ability of the approach to simulate the effects of supply limitation, including periods of accumulation and bed fining as well as erosion and bed coarsening, using a very simple formulation. Although more empirical in nature than standard one-dimensional morphodynamic models, this alternative approach is attractive because its simplicity allows for rapid evaluation of multi-year sediment budgets under a range of flow regimes and sediment supply conditions, and also because it requires substantially less data for model setup and use.
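The essence of the hybrid formulation can be sketched as a size-class-specific rating curve whose coefficients are scaled by the evolving bed composition; the power-law form, parameter values, and size classes below are illustrative assumptions, not the calibrated Colorado River model.

```python
# Sketch of a supply-limited, size-specific rating-curve update: per size class,
# concentration follows a power law in discharge scaled by the fraction of that
# class on the bed; the exported mass then updates the bed composition.
import numpy as np

def step(bed_mass, Q, a=1e-8, b=2.0, dt=86400.0):
    """bed_mass: mass per size class (kg); Q: discharge (m3/s); dt: one day (s)."""
    f = bed_mass / bed_mass.sum()            # areal fractions of each size class
    grain = np.array([0.0625, 0.125, 0.25])  # grain sizes (mm); finer moves more easily
    conc = a * (Q ** b) * f / grain          # kg/m3, higher for finer grains
    flux = conc * Q * dt                     # exported mass this step
    flux = np.minimum(flux, bed_mass)        # cannot export more than is present
    return bed_mass - flux, flux

bed = np.array([5e6, 8e6, 12e6])             # three sand size classes (kg)
for day in range(30):
    bed, out = step(bed, Q=400.0)
print("bed fractions after a month:", np.round(bed / bed.sum(), 3))  # bed coarsens
```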
The Money-Creation Model: An Alternative Pedagogy.
ERIC Educational Resources Information Center
Thornton, Mark; And Others
1991-01-01
Presents a teaching model that is consistent with the traditional approach to demonstrating the expansion and contraction of the money supply. Suggests that the model provides a simple and convenient visual image of changes in the monetary system. Describes the model as juxtaposing the behavior of the money-holding public with that of the…
Rotationally invariant clustering of diffusion MRI data using spherical harmonics
NASA Astrophysics Data System (ADS)
Liptrot, Matthew; Lauze, François
2016-03-01
We present a simple approach to the voxelwise classification of brain tissue acquired with diffusion weighted MRI (DWI). The approach leverages the power of spherical harmonics to summarise the diffusion information, sampled at many points over a sphere, using only a handful of coefficients. We use simple features that are invariant to the rotation of the highly orientational diffusion data. This provides a way to directly classify voxels whose diffusion characteristics are similar yet whose primary diffusion orientations differ. Subsequent application of machine learning to the spherical harmonic coefficients may therefore permit classification of DWI voxels according to their inferred underlying fibre properties, whilst ignoring the specifics of orientation. After smoothing apparent diffusion coefficient volumes, we apply a spherical harmonic transform, which models the multi-directional diffusion data as a collection of spherical basis functions. We use the derived coefficients as voxelwise feature vectors for classification. Using a simple Gaussian mixture model, we examined the classification performance for a range of sub-classes (3-20). The results were compared against existing alternatives for tissue classification, e.g. fractional anisotropy (FA) or the standard model used by Camino [1]. The approach was implemented on two publicly available datasets: an ex-vivo pig brain and an in-vivo human brain from the Human Connectome Project (HCP). We have demonstrated how a robust classification of DWI data can be performed without the need for a model reconstruction step. This avoids the potential confounds and uncertainty that such models may impose, and has the benefit of being computable directly from the DWI volumes. As such, the method could prove useful in subsequent pre-processing stages, such as model fitting, where it could inform about individual voxel complexities and improve model parameter choice.
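The rotation-invariant feature construction can be sketched as follows: fit spherical harmonics to the sampled signal and keep the power per degree, which is unchanged under rotation; the sampling scheme and toy signal are stand-ins for real DWI data.

```python
# Sketch of rotation-invariant features from a spherical-harmonic fit: compute
# least-squares SH coefficients per voxel, then the power per degree,
# E_l = sum_m |c_lm|^2, which is invariant under rotations.
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, polar)

def sh_design(theta, phi, lmax=4):
    cols = [sph_harm(m, l, theta, phi)
            for l in range(0, lmax + 1, 2) for m in range(-l, l + 1)]
    return np.stack(cols, axis=1)     # even degrees only (antipodal symmetry)

def invariant_features(signal, theta, phi, lmax=4):
    B = sh_design(theta, phi, lmax)
    c, *_ = np.linalg.lstsq(B, signal.astype(complex), rcond=None)
    feats, i = [], 0
    for l in range(0, lmax + 1, 2):
        n = 2 * l + 1
        feats.append(np.sum(np.abs(c[i:i + n]) ** 2))   # E_l, rotation invariant
        i += n
    return np.array(feats)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 64)          # 64 synthetic gradient directions
phi = np.arccos(rng.uniform(-1, 1, 64))
signal = np.exp(-3.0 * np.cos(phi) ** 2)       # anisotropic toy diffusion signal
print(invariant_features(signal, theta, phi))   # feed these to the classifier
```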
Test of the efficiency of three storm water quality models with a rich set of data.
Ahyerre, M; Henry, F O; Gogien, F; Chabanel, M; Zug, M; Renaudet, D
2005-01-01
The objective of this article is to test the efficiency of three different storm water quality models (SWQM) on the same data set (34 rain events, SS measurements) sampled on a 42 ha watershed in the center of Paris. The models were calibrated at the scale of the rain event. Considering the mass of pollution calculated per event, the results of the models are satisfactory, but they are of the same order of magnitude as a simple hydraulic approach associated with a constant concentration. Second, the mass of pollutant at the outlet of the catchment was calculated at the global scale of the 34 events. This approach shows that the simple hydraulic calculation gives better results than the SWQM. Finally, the pollutographs are analysed, showing that storm water quality models are interesting tools for representing the shape of the pollutographs and the dynamics of the phenomenon, which can be useful in some projects for managers.
Controlled recovery of phylogenetic communities from an evolutionary model using a network approach
NASA Astrophysics Data System (ADS)
Sousa, Arthur M. Y. R.; Vieira, André P.; Prado, Carmen P. C.; Andrade, Roberto F. S.
2016-04-01
This work reports the use of a complex network approach to produce a phylogenetic classification tree for a simple evolutionary model. This approach has already been used to treat proteomic data of actual extant organisms, but an investigation of its reliability in retrieving a traceable evolutionary history has been missing. The evolutionary model used includes key ingredients for the emergence of groups of related organisms by differentiation through random mutations and population growth, but purposefully omits other realistic ingredients that are not strictly necessary to originate an evolutionary history. This choice causes the model to depend only on a small set of parameters controlling the mutation probability and the population of different species. Our results indicate that, for a set of parameter values, the phylogenetic classification produced by this framework reproduces the actual evolutionary history with a very high average degree of accuracy. This includes parameter values for which the species originated by the evolutionary dynamics have modular structures. In the more general context of community identification in complex networks, our model offers a simple setting for evaluating the effects of the underlying dynamics generating the network itself on the efficiency of community formation and identification.
Some Simple Formulas for Posterior Convergence Rates
2014-01-01
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model; and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model.
This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
NASA Technical Reports Server (NTRS)
Dabney, Philip W.; Harding, David J.; Valett, Susan R.; Vasilyev, Aleksey A.; Yu, Anthony W.
2012-01-01
The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is a multi-beam, micropulse airborne laser altimeter that acquires active and passive polarimetric optical remote sensing measurements at visible and near-infrared wavelengths. SIMPL was developed to demonstrate advanced measurement approaches of potential benefit for improved, more efficient spaceflight laser altimeter missions. SIMPL data were acquired for a wide diversity of forest types in the summers of 2010 and 2011 in order to assess the potential of its novel capabilities for characterization of vegetation structure and composition. On each of its four beams SIMPL provides highly resolved measurements of forest canopy structure by detecting single photons with 15 cm ranging precision using a narrow-beam system operating at a laser repetition rate of 11 kHz. Associated with that ranging data SIMPL provides eight amplitude parameters per beam, unlike the single amplitude provided by typical laser altimeters. Those eight parameters are the received energy parallel and perpendicular to the plane-polarized transmit pulse at 532 nm (green) and 1064 nm (near IR), for both the active laser backscatter retro-reflectance and the passive solar bi-directional reflectance. This poster presentation will cover the instrument architecture and highlight the performance of the SIMPL instrument with examples taken from measurements for several sites with distinct canopy structures and compositions. Specific performance areas such as probability of detection, afterpulsing, and dead time will be highlighted and addressed, along with examples of their impact on the measurements and how they limit the ability to accurately model and recover the canopy properties. To assess the sensitivity of SIMPL's measurements to canopy properties, an instrument model has been implemented in the FLIGHT radiative transfer code, based on Monte Carlo simulation of photon transport. SIMPL data collected in 2010 over the Smithsonian Environmental Research Center, MD are currently being modelled and compared to other remote sensing and in situ data sets. Results on the adaptation of FLIGHT to model micropulse, single-photon ranging measurements are presented elsewhere at this conference. NASA's ICESat-2 spaceflight mission, scheduled for launch in 2016, will utilize a multi-beam, micropulse, single-photon ranging measurement approach (although non-polarimetric and only at 532 nm). Insights gained from the analysis and modelling of SIMPL data will help guide preparations for that mission, including development of calibration/validation plans and algorithms for the estimation of forest biophysical parameters.
Adequate model complexity for scenario analysis of VOC stripping in a trickling filter.
Vanhooren, H; Verbrugge, T; Boeije, G; Demey, D; Vanrolleghem, P A
2001-01-01
Two models describing the stripping of volatile organic contaminants (VOCs) in an industrial trickling filter system are developed. The aim of the models is to investigate the effect of different operating conditions (VOC loads and air flow rates) on the efficiency of VOC stripping and the resulting concentrations in the gas and liquid phases. The first model uses the same principles as the steady-state non-equilibrium activated sludge model Simple Treat, in combination with an existing biofilm model. The second model is a simple mass balance based model only incorporating air and liquid and thus neglecting biofilm effects. In a first approach, the first model was incorporated in a five-layer hydrodynamic model of the trickling filter, using the carrier material design specifications for porosity, water hold-up and specific surface area. A tracer test with lithium was used to validate this approach, and the gas mixing in the filters was studied using continuous CO2 and O2 measurements. With the tracer test results, the biodegradation model was adapted, and it became clear that biodegradation and adsorption to solids can be neglected. On this basis, a simple dynamic mass balance model was built. Simulations with this model reveal that changing the air flow rate in the trickling filter system has little effect on the VOC stripping efficiency at steady state. However, immediately after an air flow rate change, quite high flux and concentration peaks of VOCs can be expected. These phenomena are of major importance for the design of an off-gas treatment facility.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
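As a rough illustration of the search-curve-based sampling discussed above, the following sketch estimates first-order (main-effect) FAST sensitivity indices for the standard Ishigami test function. The frequency set, harmonic count and sample size are ad hoc choices, not the paper's, and this is the classical main-effect estimator only, not the interaction-effect extension the paper derives.

```python
# Sketch of classical search-curve FAST for first-order sensitivity
# indices. Frequencies must be interference-free up to the harmonics
# used; the set below works for this small example.
import numpy as np

def fast_main_effects(model, freqs, n=4096, harmonics=4):
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Search curve: maps s to samples in [0, 1] for every parameter.
    x = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = model(x)
    y = y - y.mean()
    j = np.arange(1, harmonics * max(freqs) + 1)
    A = 2.0 / n * (y @ np.cos(np.outer(s, j)))   # Fourier cosine coeffs
    B = 2.0 / n * (y @ np.sin(np.outer(s, j)))   # Fourier sine coeffs
    total_var = 0.5 * np.sum(A**2 + B**2)
    S = []
    for w in freqs:
        idx = np.array([p * w - 1 for p in range(1, harmonics + 1)])
        S.append(0.5 * np.sum(A[idx]**2 + B[idx]**2) / total_var)
    return np.array(S)

def ishigami(x):                 # test model, inputs mapped to [-pi, pi]
    u = -np.pi + 2 * np.pi * x
    return np.sin(u[:, 0]) + 7 * np.sin(u[:, 1])**2 \
        + 0.1 * u[:, 2]**4 * np.sin(u[:, 0])

print(fast_main_effects(ishigami, freqs=[11, 35, 73]))
# should approximate the analytic indices (~0.31, 0.44, 0.00)
```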
Analysis of aircraft longitudinal handling qualities
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher-order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
A Simple Model for Nonlinear Confocal Ultrasonic Beams
NASA Astrophysics Data System (ADS)
Zhang, Dong; Zhou, Lin; Si, Li-Sheng; Gong, Xiu-Fen
2007-01-01
A confocally and coaxially arranged pair of focused transmitter and receiver is one of the best geometries for medical ultrasonic imaging and non-invasive detection. We develop a simple theoretical model for describing the nonlinear propagation of a confocal ultrasonic beam in biological tissues. On the basis of the parabolic approximation and the quasi-linear approximation, the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is solved by using the angular spectrum approach. A Gaussian superposition technique is applied to simplify the solution, and an analytical solution for the second harmonics in the confocal ultrasonic beam is presented. Measurements are performed to examine the validity of the theoretical model, which provides a preliminary basis for acoustic nonlinear microscopy.
Generating self-organizing collective behavior using separation dynamics from experimental data
NASA Astrophysics Data System (ADS)
Dieck Kattas, Graciano; Xu, Xiao-Ke; Small, Michael
2012-09-01
Mathematical models for systems of interacting agents using simple local rules have been proposed and shown to exhibit emergent swarming behavior. Most of these models are constructed by intuition or manual observations of real phenomena, and later tuned or verified to simulate desired dynamics. In contrast to this approach, we propose using a model that attempts to follow an averaged rule of the essential distance-dependent collective behavior of real pigeon flocks, which was abstracted from experimental data. By using a simple model to follow the behavioral tendencies of real data, we show that our model can exhibit a wide range of emergent self-organizing dynamics such as flocking, pattern formation, and counter-rotating vortices.
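A toy illustration of the distance-dependent rule idea: agents repel at short range, align at mid range and attract at long range. The thresholds and gains below are invented for the sketch; they are not the averaged rule extracted from the pigeon data.

```python
# Toy separation-dependent flocking rule: repulsion when too close,
# velocity alignment at mid range, attraction when far apart.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 40, 500, 0.05
pos = rng.uniform(-1, 1, (n, 2))
vel = rng.normal(0, 0.1, (n, 2))

for _ in range(steps):
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                      # vectors to neighbours
        r = np.linalg.norm(d, axis=1)
        r[i] = np.inf                         # ignore self
        near = r < 0.2
        mid = (r >= 0.2) & (r < 0.6)
        far = r >= 0.6
        acc[i] = (-2.0 * d[near] / r[near, None]**2).sum(0) \
                 + 0.5 * (vel[mid] - vel[i]).sum(0) \
                 + 0.3 * d[far].sum(0)
    vel = np.clip(vel + dt * acc, -1.0, 1.0)
    pos += dt * vel

print("mean speed:", np.linalg.norm(vel, axis=1).mean())
```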
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using daily streamflow records from a station on Senoz Stream, Turkey. Compared to stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examining evolved models and picking out the best-performing programs for further analysis.
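Two ingredients of the approach are simple enough to sketch directly: the moving-average pre-filter applied to the input series, and the selection of non-dominated (Pareto-optimal) models from candidates scored by complexity and error. The data and scores below are synthetic.

```python
# Moving-average pre-filter plus Pareto-front selection over
# (complexity, error) scores; lower is better in both dimensions.
import numpy as np

def moving_average(x, window=3):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            front.append(i)
    return front

flow = np.array([5.0, 7.2, 6.1, 9.4, 12.0, 8.3, 6.5])  # synthetic series
print(moving_average(flow))

candidates = np.array([[2, 0.9], [4, 0.5], [6, 0.48], [3, 0.7], [5, 0.6]])
print("non-dominated models:", pareto_front(candidates))  # [0, 1, 2, 3]
```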
Ouyang, Ying; Grace, Johnny M; Zipperer, Wayne C; Hatten, Jeff; Dewey, Janet
2018-05-22
Loads of naturally occurring total organic carbon (TOC), refractory organic carbon (ROC), and labile organic carbon (LOC) in streams control the availability of nutrients and the solubility and toxicity of contaminants, and affect biological activities through the absorption of light and the complexation of metals, with production of carcinogenic compounds. Although computer models have become increasingly popular in the understanding and management of TOC, ROC, and LOC loads in streams, the usefulness of these models hinges on the availability of daily data for model calibration and validation. Unfortunately, such daily data are usually insufficient and/or unavailable for most watersheds due to a variety of reasons, such as budget and time constraints. A simple approach was developed here to calculate daily loads of TOC, ROC, and LOC in streams based on their seasonal loads. We concluded that the predictions from our approach adequately match field measurements, based on statistical comparisons between model calculations and field measurements. Our approach demonstrates that an increase in stream discharge results in increased stream TOC, ROC, and LOC concentrations and loads, although a high peak discharge did not necessarily result in high peaks of TOC, ROC, and LOC concentrations and loads. The approach developed herein is a useful tool to convert seasonal loads of TOC, ROC, and LOC into daily loads in the absence of measured daily load data.
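The paper's exact conversion formula is not reproduced here, but a hypothetical discharge-weighted scheme conveys the idea of disaggregating a seasonal load into daily loads: each day receives a share of the seasonal total proportional to a power of its discharge.

```python
# Hypothetical discharge-weighted disaggregation of a seasonal load
# into daily loads. The exponent b and data are invented.
import numpy as np

def daily_loads(seasonal_load, daily_discharge, b=1.5):
    w = daily_discharge**b
    return seasonal_load * w / w.sum()

q = np.array([1.2, 1.0, 3.5, 8.0, 4.2, 2.0, 1.5])  # m^3/s, synthetic week
loads = daily_loads(seasonal_load=700.0, daily_discharge=q)
print(np.round(loads, 1), loads.sum())   # daily shares sum to the total
```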
ERIC Educational Resources Information Center
Yolles, Maurice
2005-01-01
Purpose: Seeks to explore the notion of organisational intelligence as a simple extension of the notion of the idea of collective intelligence. Design/methodology/approach: Discusses organisational intelligence using previous research, which includes the Purpose, Properties and Practice model of Dealtry, and the Viable Systems model. Findings: The…
DOT National Transportation Integrated Search
2018-01-01
This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...
EPA MODELING TOOLS FOR CAPTURE ZONE DELINEATION
The EPA Office of Research and Development supports a step-wise modeling approach for design of wellhead protection areas for water supply wells. A web-based WellHEDSS (wellhead decision support system) is under development for determining when simple capture zones (e.g., centri...
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results affirm the clear merit of using dynamic approaches for modelling species’ response to climate change, but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
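A minimal sketch of a ground-truth-free two-band approach in the spirit described above: treat each pixel as a linear mixture of a bare-soil and a full-canopy endmember, take the endmembers from the limits of the data space, and invert for fractional cover. The index, percentile thresholds and data are illustrative.

```python
# Two-band fractional-cover sketch with endmembers taken from the
# limits (percentiles) of the data space, i.e. no ground truth.
import numpy as np

rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.35, 1000)          # synthetic band reflectances
nir = rng.uniform(0.15, 0.55, 1000)
index = (nir - red) / (nir + red)            # simple two-band index

v_soil = np.percentile(index, 2)             # data-space lower limit
v_veg = np.percentile(index, 98)             # data-space upper limit
cover = np.clip((index - v_soil) / (v_veg - v_soil), 0, 1)
print("mean estimated cover:", round(cover.mean(), 3))
```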
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik
2015-01-16
We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady-state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.
Promoting Teacher Growth through Lesson Study: A Culturally Embedded Approach
ERIC Educational Resources Information Center
Ebaeguin, Marlon
2015-01-01
Lesson Study has captured the attention of many international educators with its promise of improved student learning and sustained teacher growth. Lesson Study, however, has cultural underpinnings that a simple transference model overlooks. A culturally embedded approach attends to the existing cultural orientations and values of host schools.…
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
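A bare-bones version of the two-component mixture idea: fit pi0*N(0,1) + (1-pi0)*N(mu, sigma^2) to gene-level z-scores by EM and read off the posterior probability that each gene is null. The initialization and the single non-null component are simplifying assumptions of this sketch.

```python
# EM fit of a two-component normal mixture to z-scores; tau is the
# posterior probability that each gene is null.
import numpy as np
from scipy.stats import norm

def fit_null_mixture(z, iters=200):
    pi0, mu, sigma = 0.9, 2.0, 1.0           # crude starting values
    for _ in range(iters):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1 - pi0) * norm.pdf(z, mu, sigma)
        tau = f0 / (f0 + f1)                 # E-step: P(null | z)
        pi0 = tau.mean()                     # M-step updates
        w = 1 - tau
        mu = np.sum(w * z) / np.sum(w)
        sigma = np.sqrt(np.sum(w * (z - mu) ** 2) / np.sum(w))
    return pi0, mu, sigma, tau

rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
pi0, mu, sigma, tau = fit_null_mixture(z)
print(round(pi0, 3), round(mu, 2), round(sigma, 2))  # ~0.9, ~3, ~1
```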
On measuring community participation in research.
Khodyakov, Dmitry; Stockdale, Susan; Jones, Andrea; Mango, Joseph; Jones, Felica; Lizaola, Elizabeth
2013-06-01
Active participation of community partners in research aspects of community-academic partnered projects is often assumed to have a positive impact on the outcomes of such projects. The value of community engagement in research, however, cannot be empirically determined without good measures of the level of community participation in research activities. Based on our recent evaluation of community-academic partnered projects centered around behavioral health issues, this article uses semistructured interview and survey data to outline two complementary approaches to measuring the level of community participation in research: a "three-model" approach that differentiates between levels of community participation, and a Community Engagement in Research Index (CERI) that offers a multidimensional view of community engagement in the research process. The primary goal of this article is to present and compare these approaches, discuss their strengths and limitations, summarize the lessons learned, and offer directions for future research. We find that whereas the three-model approach is a simple measure of the perception of community participation in research activities, CERI allows for a more nuanced understanding by capturing multiple aspects of such participation. Although additional research is needed to validate these measures, our study makes a significant contribution by illustrating the complexity of measuring community participation in research and the lack of reliability in simple scores offered by the three-model approach.
Reducing usage of the computational resources by event driven approach to model predictive control
NASA Astrophysics Data System (ADS)
Misik, Stefan; Bradac, Zdenek; Cela, Arben
2017-08-01
This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
ERIC Educational Resources Information Center
Muthen, Bengt
This paper investigates methods that avoid using multiple groups to represent the missing data patterns in covariance structure modeling, attempting instead to do a single-group analysis where the only action the analyst has to take is to indicate that data is missing. A new covariance structure approach developed by B. Muthen and G. Arminger is…
NASA Astrophysics Data System (ADS)
Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong
2018-05-01
This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Collision avoidance can alternatively be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
NASA Astrophysics Data System (ADS)
Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.
2017-12-01
Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimize numerical simulations by memory-shared massive parallelization and provide numerical results at various scales. So far, the TDRW approach has been applied for modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations considering a highly permeable fracture network embedded into a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of the matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.
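A stripped-down TDRW kernel, assuming none of the fracture/matrix structure described above: particles advance one cell per step and draw each transition time from a heavy-tailed law, which is enough to produce the anomalous late-time arrivals that motivate the method. The distribution and parameters are invented for the sketch.

```python
# Bare-bones time domain random walk: fixed spatial step, random
# transition time per step drawn from a heavy-tailed (Pareto) law.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_steps = 10000, 200
alpha, t0 = 1.5, 1.0                       # tail exponent, time scale

# time per step >= t0 with a power-law tail => anomalous late arrivals
dt = t0 * (1.0 + rng.pareto(alpha, (n_particles, n_steps)))
arrival = dt.sum(axis=1)                   # time to cross n_steps cells

print("median arrival:", round(np.median(arrival), 1))
print("95th percentile:", round(np.percentile(arrival, 95), 1))
```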
Center for Parallel Optimization.
1996-03-19
A new optimization-based approach to improving generalization in machine learning has been proposed and computationally validated on simple linear models as well as on highly nonlinear systems such as neural networks.
Life extending control: An interdisciplinary engineering thrust
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.
Ahadian, Samad; Mizuseki, Hiroshi; Kawazoe, Yoshiyuki
2010-12-15
A molecular dynamics (MD) approach was employed to simulate the imbibition of a designed nanopore by a simple fluid (i.e., a Lennard-Jones (LJ) fluid). The length of imbibition as a function of time for various interactions between the LJ fluid and the pore wall was recorded for this system (i.e., the LJ fluid and the nanopore). By and large, the kinetics of imbibition was successfully described by the Lucas-Washburn (LW) equation, although deviation from it was observed in some cases. This lack of agreement is due to the neglect of the dynamic contact angle (DCA) in the LW equation. Two commonly used models (i.e., the hydrodynamic and molecular-kinetic (MK) models) were thus employed to calculate the DCA. It is demonstrated that the MK model is able to explain the simulation results that are not in good agreement with the simple LW equation, whereas the hydrodynamic model is not capable of doing so. Further investigation of the MD simulation data revealed a direct relationship between the wall-fluid interaction and the speed of capillary imbibition, and more evidence to support this claim is presented. Copyright © 2010 Elsevier Inc. All rights reserved.
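For reference, the Lucas-Washburn prediction mentioned above can be evaluated in a few lines. The pore radius, surface tension, contact angle and viscosity below are illustrative, and the dynamic-contact-angle corrections discussed in the paper are not included.

```python
# Classical Lucas-Washburn imbibition length,
# l(t) = sqrt(R * gamma * cos(theta) * t / (2 * mu)).
import numpy as np

R = 2e-9            # pore radius [m]
gamma = 0.072       # surface tension [N/m]
theta = 0.0         # static contact angle [rad]
mu = 1e-3           # viscosity [Pa s]

t = np.linspace(1e-12, 1e-9, 5)            # times [s]
l = np.sqrt(R * gamma * np.cos(theta) * t / (2 * mu))
print(np.c_[t, l * 1e9])                   # imbibition length in nm
```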
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R(2)) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
Modelling morphology evolution during solidification of IPP in processing conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pantani, R.; De Santis, F.; Speranza, V.
During polymer processing, crystallization takes place during or soon after flow. In most cases, the flow field dramatically influences both the crystallization kinetics and the crystal morphology. In turn, crystallinity and morphology affect product properties. Consequently, in the last decade, researchers have tried to identify the main parameters determining crystallinity and morphology evolution during solidification in processing conditions. In this work, we present an approach to model flow-induced crystallization (FIC) with the aim of predicting the morphology after processing. The approach is based on: interpretation of the FIC as the effect of molecular stretch on the thermodynamic crystallization temperature; modeling the molecular stretch evolution by means of a model simple and easy to implement in polymer processing simulation codes; identification of the effect of flow on nucleation density and spherulite growth rate by means of simple experiments; and determination of the conditions under which fibers form instead of spherulites. Model predictions reproduce most of the features of the final morphology observed in the samples after solidification.
Nonthermal model for ultrafast laser-induced plasma generation around a plasmonic nanorod
NASA Astrophysics Data System (ADS)
Labouret, Timothée; Palpant, Bruno
2016-12-01
The excitation of plasmonic gold nanoparticles by ultrashort laser pulses can trigger interesting electron-based effects in biological media such as production of reactive oxygen species or cell membrane optoporation. In order to better understand the optical and thermal processes at play, we modeled the interaction of a subpicosecond, near-infrared laser pulse with a gold nanorod in water. A nonthermal model is used and compared to a simple two-temperature thermal approach. For both models, the computation of the transient optical response reveals strong plasmon damping. Electron emission from the metal into the water is also calculated in a specific way for each model. The dynamics of the resulting local plasma in water is assessed by a rate equation model. While both approaches provide similar results for the transient optical properties, the simple thermal one is unable to properly describe electron emission and plasma generation. The latter is shown to mostly originate from electron-electron thermionic emission and photoemission from the metal. Taking into account the transient optical response is mandatory to properly calculate both electron emission and local plasma dynamics in water.
Simple models for the simulation of submarine melt for a Greenland glacial system model
NASA Astrophysics Data System (ADS)
Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey
2018-01-01
Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and position of the grounding line of these outlet glaciers. As the ocean warms, it is expected that submarine melt will increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity to use models with extremely high resolution, of the order of a few hundred meters. That requirement holds not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundreds of meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on the use of a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models in a quantitative manner, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data. Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.
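A commonly used line-plume melt scaling is sketched below under the assumption (not necessarily the paper's exact formulation) that melt grows roughly with the cube root of subglacial discharge and linearly with ocean thermal forcing, with a tuning factor kappa of order 1 as discussed above. The constant c is an invented placeholder.

```python
# Hypothetical line-plume melt scaling: m ~ kappa * c * q^(1/3) * TF.

def line_plume_melt(q_sg, thermal_forcing, kappa=1.0, c=0.1):
    """Melt rate [m/day] from subglacial discharge q_sg [m^2/s per
    unit width] and ocean thermal forcing [deg C]; c is empirical."""
    return kappa * c * q_sg ** (1.0 / 3.0) * thermal_forcing

for q in (0.001, 0.01, 0.1):
    print(q, round(line_plume_melt(q, thermal_forcing=4.0), 3))
```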
Tufvesson, Pär; Bach, Christian; Woodley, John M
2014-02-01
Acetone removal by evaporation has been proposed as a simple and cheap way to shift the equilibrium in the biocatalytic asymmetric synthesis of optically pure chiral amines, when 2-propylamine is used as the amine donor. However, dependent on the system properties, this may or may not be a suitable strategy. To avoid excessive laboratory work a model was used to assess the process feasibility. The results from the current study show that a simple model of the acetone removal dependence on temperature and sparging gas flowrate can be developed and fits the experimental data well. The model for acetone removal was then coupled to a simple model for biocatalyst kinetics and also for loss of substrate ketone by evaporation. The three models were used to simulate the effects of varying the critical process parameters and reaction equilibrium constants (Keq) as well as different substrate ketone volatilities (Henry's constant). The simulations were used to estimate the substrate losses and also the maximum yield that could be expected. The approach was seen to give a clear indication for which target amines the acetone evaporation strategy would be feasible and for which amines it would not. The study also shows the value of a modeling approach in conceptual process design prior to entering a biocatalyst screening or engineering program to assess the feasibility of a particular process strategy for a given target product. © 2013 Wiley Periodicals, Inc.
What Can We Learn from a Simple Physics-Based Earthquake Simulator?
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2018-03-01
Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each individual fault; and the last is fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as long-term trends and synchronization among nearby coupled faults.
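The three ingredients named above can be caricatured in a few lines: quasi-periodic faults, a Coulomb-style interaction matrix that advances or delays neighbouring fault clocks after each event, and a stochastic jitter term that, as the paper finds, blurs deterministic features. Everything here is schematic; it is not the paper's simulator.

```python
# Schematic fault-clock simulator: each fault fails when its clock
# reaches its recurrence period; each event perturbs the other clocks.
import numpy as np

rng = np.random.default_rng(4)
n_faults, t_end = 5, 1000.0
period = rng.uniform(80, 120, n_faults)       # recurrence times [yr]
clock = rng.uniform(0, 1, n_faults) * period  # time since last event
coupling = 5.0 * (rng.uniform(size=(n_faults, n_faults)) - 0.4)
np.fill_diagonal(coupling, 0.0)               # no self-interaction
jitter = 0.05                                 # stochasticity level

t, events = 0.0, []
while t < t_end:
    wait = period - clock                     # time to next failure
    i = int(np.argmin(wait))
    t += max(wait[i], 0.0)
    clock += max(wait[i], 0.0)                # everyone ages
    events.append((round(t, 1), i))
    clock[i] = 0.0                            # failing fault resets
    clock += coupling[i]                      # static stress transfer
    clock[i] += jitter * period[i] * rng.normal()
print(events[:10])
```

Setting `jitter = 0.0` recovers the markedly deterministic regime in which supercycles and synchronization can appear.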
A flowgraph model for bladder carcinoma
2014-01-01
Background Superficial bladder cancer has been the subject of numerous studies for many years, but the evolution of the disease still remains not well understood. After the tumor has been surgically removed, it may reappear at a similar level of malignancy or progress to a higher level. The process may be reasonably modeled by means of a Markov process. However, in order to more completely model the evolution of the disease, this approach is insufficient. The semi-Markov framework allows a more realistic approach, but calculations become frequently intractable. In this context, flowgraph models provide an efficient approach to successfully manage the evolution of superficial bladder carcinoma. Our aim is to test this methodology in this particular case. Results We have built a successful model for a simple but representative case. Conclusion The flowgraph approach is suitable for modeling of superficial bladder cancer. PMID:25080066
A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs
NASA Astrophysics Data System (ADS)
Li, Lei; Zhou, Wanting; Liu, Huihua
2012-12-01
This paper presents a physics-based engineering approach to estimate the heavy-ion-induced upset cross section for 6T SRAM cells from layout and technology parameters. The new approach calculates the effects of radiation with a junction photocurrent derived from device physics, and handles the problem using simple SPICE simulations. First, the approach uses a standard SPICE program on a typical PC to predict the SPICE-simulated curve of the collected charge vs. its affected distance from the drain-body junction with the derived junction photocurrent. The SPICE-simulated curve is then used to calculate the heavy-ion-induced upset cross section with a simple model, which considers that the SEU cross section of a SRAM cell is more related to a "radius of influence" around a heavy ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nano-scale process technologies. The calculated upset cross section based on this method is in good agreement with the test results for 6T SRAM cells processed using 90 nm process technology.
Modelling the Active Hearing Process in Mosquitoes
NASA Astrophysics Data System (ADS)
Avitabile, Daniele; Homer, Martin; Jackson, Joe; Robert, Daniel; Champneys, Alan
2011-11-01
A simple microscopic mechanistic model is described of the active amplification within the Johnston's organ of the mosquito species Toxorhynchites brevipalpis. The model is based on the description of the antenna as a forced-damped oscillator coupled to a set of active threads (ensembles of scolopidia) that provide an impulsive force when they twitch. This twitching is in turn controlled by channels that open and close when the antennal oscillation reaches a critical amplitude. The model matches both qualitatively and quantitatively with recent experiments. New results are presented using mathematical homogenization techniques to derive a mesoscopic model as a simple oscillator with nonlinear force and damping characteristics. It is shown that the results from this new model closely resemble those from the microscopic model as the number of threads approaches physiologically correct values.
Variational Approach in the Theory of Liquid-Crystal State
NASA Astrophysics Data System (ADS)
Gevorkyan, E. V.
2018-03-01
The variational calculus of Leonhard Euler is the basis for modern mathematics and theoretical physics. The efficiency of the variational approach in the statistical theory of the liquid-crystal state, and more generally in condensed-state theory, is shown. In particular, the developed approach allows us to correctly introduce effective pair interactions and to optimize simple models of liquid crystals with the help of realistic intermolecular potentials.
A Model-Driven Approach for Telecommunications Network Services Definition
NASA Astrophysics Data System (ADS)
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool with support for collaborative work and for checking properties on models. We started by defining a prototype of the meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.
The Q theory of investment, the capital asset pricing model, and asset valuation: a synthesis.
McDonald, John F
2004-05-01
The paper combines Tobin's Q theory of real investment with the capital asset pricing model to produce a new and relatively simple procedure for the valuation of real assets using the income approach. Applications of the new method are provided.
Diffusion of Super-Gaussian Profiles
ERIC Educational Resources Information Center
Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.
2007-01-01
The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…
Recovering Parameters of Johnson's SB Distribution
Bernard R. Parresol
2003-01-01
A new parameter recovery model for Johnson's SB distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...
NASA Astrophysics Data System (ADS)
Reid, Lucas; Kittlaus, Steffen; Scherer, Ulrike
2015-04-01
For large areas without highly detailed data, the empirical Universal Soil Loss Equation (USLE) is widely used to quantify soil loss. The problem, though, is usually the quantification of the actual sediment influx into the rivers. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). However, spatially lumped approaches become difficult in large catchment areas where the geographical properties vary widely. In this study we developed a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in the catchments. The sediment delivery ratio was determined using an empirical approach considering the slope, morphology and land use properties along the flow path as an estimation of the travel time of the eroded particles. The model was tested against suspended solids measurements in selected sub-basins of the River Inn catchment area in Germany and Austria, ranging from the high alpine south to the Molasse basin in the northern part.
The induced electric field due to a current transient
NASA Astrophysics Data System (ADS)
Beck, Y.; Braunstein, A.; Frankental, S.
2007-05-01
Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel relativistic approach to the calculation of the electric fields due to lightning strikes is presented. This approach is based on a known current wave-pair model representing the lightning current wave. The model describes the lightning current wave either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law), in contrast to the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges at constant velocity. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.
Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.
Leotta, Matthew J; Mundy, Joseph L
2011-07-01
In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.
Dynamics and control of quadcopter using linear model predictive control approach
NASA Astrophysics Data System (ADS)
Islam, M.; Okasha, M.; Idres, M. M.
2017-12-01
This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom, and includes disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular ones to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
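A receding-horizon linear MPC sketch on a double integrator, standing in for one translational axis of a linearized quadcopter model. Constraints are omitted so the quadratic program reduces to least squares; the horizon, weights and dynamics are illustrative, not the paper's.

```python
# Receding-horizon linear MPC on a double integrator: at each step,
# solve for the input sequence minimizing tracking error plus input
# effort over the horizon, then apply only the first input.
import numpy as np

dt, N = 0.1, 20                                   # step, horizon
A = np.array([[1, dt], [0, 1]])                   # [pos, vel] dynamics
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x0, ref, q=10.0, r=0.1):
    # Prediction: x_{k+1} = A^{k+1} x0 + sum_j A^{k-j} B u_j
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2*k:2*k+2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    sel = np.arange(0, 2 * N, 2)                  # position rows
    H = np.vstack([np.sqrt(q) * G[sel], np.sqrt(r) * np.eye(N)])
    b = np.concatenate([np.sqrt(q) * (ref - F[sel] @ x0), np.zeros(N)])
    u = np.linalg.lstsq(H, b, rcond=None)[0]
    return u[0]                                   # first input only

x = np.array([0.0, 0.0])
for step in range(50):                            # track position 1.0
    u = mpc_step(x, ref=np.ones(N))
    x = A @ x + B.ravel() * u
print("final state:", np.round(x, 3))             # approaches [1, 0]
```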
NASA Astrophysics Data System (ADS)
Signell, R. P.; Camossi, E.
2015-11-01
Work over the last decade has resulted in standardized web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by: (1) making it simple for providers to enable web service access to existing output files; (2) using technology that is free, and that is easy to deploy and configure; and (3) providing tools to communicate with web services that work in existing research environments. We present a simple, local brokering approach that lets modelers continue producing custom data, but virtually aggregates and standardizes the data using NetCDF Markup Language. The THREDDS Data Server is used for data delivery, pycsw for data search, NCTOOLBOX (Matlab®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS. (Mention of trade names or commercial products does not constitute endorsement or recommendation for use by the US Government.)
Babaei, Behzad; Abramowitch, Steven D.; Elson, Elliot L.; Thomopoulos, Stavros; Genin, Guy M.
2015-01-01
The viscoelastic behaviour of a biological material is central to its functioning and is an indicator of its health. The Fung quasi-linear viscoelastic (QLV) model, a standard tool for characterizing biological materials, provides excellent fits to most stress–relaxation data by imposing a simple form upon a material's temporal relaxation spectrum. However, model identification is challenging because the Fung QLV model's ‘box’-shaped relaxation spectrum, predominant in biomechanics applications, can provide an excellent fit even when it is not a reasonable representation of a material's relaxation spectrum. Here, we present a robust and simple discrete approach for identifying a material's temporal relaxation spectrum from stress–relaxation data in an unbiased way. Our ‘discrete QLV’ (DQLV) approach identifies ranges of time constants over which the Fung QLV model's typical box spectrum provides an accurate representation of a particular material's temporal relaxation spectrum, and is effective at providing a fit to this model. The DQLV spectrum also reveals when other forms or discrete time constants are more suitable than a box spectrum. After validating the approach against idealized and noisy data, we applied the methods to analyse medial collateral ligament stress–relaxation data and identify the strengths and weaknesses of an optimal Fung QLV fit. PMID:26609064
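The discrete-spectrum idea can be sketched with non-negative least squares: fit weights on a log-spaced grid of time constants so that G(t) = sum_k w_k exp(-t/tau_k) matches stress-relaxation data. The synthetic two-mode data and the grid below are illustrative; this is not the paper's exact DQLV formulation.

```python
# Discrete relaxation-spectrum identification via non-negative least
# squares on a log-spaced grid of candidate time constants.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.01, 100.0, 400)
g_true = 1.0 * np.exp(-t / 0.5) + 0.5 * np.exp(-t / 20.0)
g_obs = g_true + 0.005 * np.random.default_rng(5).normal(size=t.size)

taus = np.logspace(-2, 3, 30)                # candidate time constants
kernel = np.exp(-t[:, None] / taus[None, :])
w, _ = nnls(kernel, g_obs)                   # non-negative weights

for tau, wk in zip(taus, w):                 # peaks near 0.5 and 20
    if wk > 0.05:
        print(f"tau = {tau:8.3f}  weight = {wk:.3f}")
```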
Argasinski, Krzysztof
2006-07-01
This paper contains the basic extensions of classical evolutionary games (multipopulation and density-dependent models). It is shown that the classical bimatrix approach is inconsistent with other approaches because it does not depend on the proportion between populations. The main conclusion is that the interspecific proportion parameter is important and must be considered in multipopulation models. The paper provides a synthesis of both extensions (a metasimplex concept), which solves the problem intrinsic to the bimatrix model. It allows us to model interactions among any number of subpopulations, including density dependence effects. We prove that all modern approaches to evolutionary games are closely related. All evolutionary models (except classical bimatrix approaches) can be reduced to a single-population general model by a simple change of variables. Differences between classical bimatrix evolutionary games and a new model dependent on interspecific proportion are shown by examples.
Renton, Michael
2011-01-01
Background and aims Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity. Methodology The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis is used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability. Principal results The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions. Conclusions This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.
Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N
2017-04-01
There is an urgent need for improved estimations of the burden of tuberculosis (TB). Our objective was to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences between urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used.
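A back-of-envelope sketch of the kind of relation such a model exploits is given below; the steady-state assumptions and all numbers are illustrative placeholders, not the paper's calibrated estimates.

```python
# Sketch of the arithmetic linking the three quantities, assuming
# (i) quasi-steady state, so incidence ~ prevalence / mean infectious duration,
# and (ii) that annual infections trace back to prevalent smear-positive cases,
# so infections per case per year ~ ARTI / prevalence. Numbers are placeholders.
arti = 0.015             # annual risk of tuberculous infection (1.5%)
prevalence = 250 / 1e5   # smear-positive prevalence (250 per 100 000)
duration_years = 2.0     # assumed mean infectious duration

infections_per_case_year = arti / prevalence
incidence_per_100k = prevalence / duration_years * 1e5
print(infections_per_case_year, incidence_per_100k)   # ~6 and 125 here
```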
NASA Astrophysics Data System (ADS)
Avanzi, Francesco; Yamaguchi, Satoru; Hirashima, Hiroyuki; De Michele, Carlo
2016-04-01
Liquid water in snow governs runoff dynamics and wet-snow avalanche release. Moreover, it affects snow viscosity and snow albedo. As a result, measuring and modeling liquid water dynamics in snow have important implications for many scientific applications. However, measurements are usually challenging, while modeling is difficult due to the overlap of mechanical, thermal and hydraulic processes. Here, we evaluate the use of a simple one-layer, one-dimensional model to predict hourly time series of bulk volumetric liquid water content in seasonal snow. The model considers both a simple temperature-index approach (melt only) and a coupled melt-freeze temperature-index approach that is able to reconstruct melt-freeze dynamics. The performance of this approach is evaluated at three sites in Japan. These sites (Nagaoka, Shinjo and Sapporo) provide multi-year time series of snow and meteorological data, vertical profiles of snow physical properties and snowmelt lysimeter data. These data sets are an interesting opportunity to test this application in different climatic conditions, as the sites span a wide latitudinal range and are subject to different snow conditions during the season. When melt-freeze dynamics are included in the model, results show that median absolute differences between observations and predictions of bulk volumetric liquid water content are consistently lower than 1 vol%. Moreover, the model is able to predict an observed dry condition of the snowpack in 80% of observed cases at a non-calibration site, where parameters from the calibration sites are transferred. Overall, the analysis shows that a coupled melt-freeze temperature-index approach may be a valid solution for predicting the average wetness conditions of a snow cover at the local scale.
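A minimal sketch of a coupled melt-freeze temperature-index update of this kind follows; the coefficients, threshold and snow depth are illustrative assumptions, not the study's calibrated values.

```python
# One-layer melt-freeze temperature-index sketch: melt adds liquid water when
# air temperature exceeds a threshold, refreezing removes it below the
# threshold; bulk volumetric liquid water content is stored water over depth.
def bulk_lwc_series(temps_c, depth_m=1.0, cf_melt=0.1, cf_freeze=0.05, t0=0.0):
    """Hourly bulk volumetric liquid water content (vol%) from air temperature."""
    water_mm = 0.0
    series = []
    for t in temps_c:
        if t > t0:
            water_mm += cf_melt * (t - t0)                    # mm w.e. per hour
        else:
            water_mm = max(0.0, water_mm - cf_freeze * (t0 - t))
        series.append(100.0 * water_mm / (1000.0 * depth_m))  # mm -> vol%
    return series

print(bulk_lwc_series([2.0, 3.0, 1.0, -2.0, -4.0]))  # wets, then partly refreezes
```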
Consistency of the free-volume approach to the homogeneous deformation of metallic glasses
NASA Astrophysics Data System (ADS)
Blétry, Marc; Thai, Minh Thanh; Champion, Yannick; Perrière, Loïc; Ochin, Patrick
2014-05-01
One of the most widely used approaches for modelling the high-temperature homogeneous deformation of metallic glasses is the free-volume theory, developed by Cohen and Turnbull and extended by Spaepen. A simple elastoviscoplastic formulation has been proposed that allows one to determine various parameters of such a model. This approach is applied here to the results obtained by de Hey et al. on a Pd-based metallic glass. In their study, de Hey et al. were able to determine some of the parameters used in the elastoviscoplastic formulation through DSC modeling coupled with mechanical tests, and the consistency of the two viewpoints was assessed.
Development of mathematical models of environmental physiology
NASA Technical Reports Server (NTRS)
Stolwijk, J. A. J.; Mitchell, J. W.; Nadel, E. R.
1971-01-01
Selected articles concerned with mathematical or simulation models of human thermoregulation are presented. The articles presented include: (1) development and use of simulation models in medicine, (2) model of cardio-vascular adjustments during exercise, (3) effective temperature scale based on simple model of human physiological regulatory response, (4) behavioral approach to thermoregulatory set point during exercise, and (5) importance of skin temperature in sweat regulation.
Model Checking Satellite Operational Procedures
NASA Astrophysics Data System (ADS)
Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri
2011-08-01
We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a system as complex as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach we present experimental results on a simple but meaningful scenario. Our results show that we can save up to 90% of verification time.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not previously accessible. Results: ALC (Automated Layer Construction) is a computer program that greatly simplifies the building of reduced modular models according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
Statistical mechanics of simple models of protein folding and design.
Pande, V S; Grosberg, A Y; Tanaka, T
1997-01-01
It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem, or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, the freezing transition of random sequences, the phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231
Fuller, Robert William; Wong, Tony E; Keller, Klaus
2017-01-01
The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.
Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.
Hutchinson, John M C; Gigerenzer, Gerd
2005-05-31
The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.
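Since the abstract spells out the Take The Best stopping rule, a minimal sketch is easy to give; the binary cue coding and the example task are illustrative assumptions.

```python
# Take The Best, assuming binary cue values and cues pre-ordered by validity:
# search cues in order, stop at the first one that discriminates, ignore the
# rest; guess at random if no cue discriminates.
import random

def take_the_best(cues_a, cues_b):
    """Return 'A' or 'B' for two alternatives given 1/0 cue vectors."""
    for ca, cb in zip(cues_a, cues_b):   # cues ordered by decreasing validity
        if ca != cb:                     # first discriminating cue decides
            return "A" if ca > cb else "B"
    return random.choice(["A", "B"])     # no cue discriminates: guess

# e.g. a city-size task: cues = (has top-league team, is a capital, ...)
print(take_the_best([1, 0, 1], [1, 1, 0]))  # the second cue decides: 'B'
```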
Comparing fire spread algorithms using equivalence testing and neutral landscape models
Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson
2009-01-01
We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...
I present a simple, macroecological model of fish abundance that was used to estimate the total number of non-migratory salmonids within the Willamette River Basin (western Oregon). The model begins with empirical point estimates of net primary production (NPP in g C/m2) in fore...
NASA Astrophysics Data System (ADS)
Eriksen, Trygve E.; Shoesmith, David W.; Jonsson, Mats
2012-01-01
Radiation induced dissolution of uranium dioxide (UO2) nuclear fuel and the consequent release of radionuclides to intruding groundwater are key processes in the safety analysis of future deep geological repositories for spent nuclear fuel. For several decades, these processes have been studied experimentally using both spent fuel and various types of simulated spent fuel. The latter have been employed since it is difficult to draw mechanistic conclusions from experiments on real spent nuclear fuel. Several predictive modelling approaches have been developed over the last two decades. These models are largely based on experimental observations. In this work we have performed a critical review of the modelling approaches developed on the basis of the large body of chemical and electrochemical experimental data. The main conclusions are: (1) the use of measured interfacial rate constants gives results in generally good agreement with experimental results, compared to simulations where homogeneous rate constants are used; (2) the use of spatial dose rate distributions is particularly important when simulating the behaviour over short time periods; and (3) the steady-state approach (the rate of oxidant consumption is equal to the rate of oxidant production) provides a simple but fairly accurate alternative, although errors in the reaction mechanism and in the kinetic parameters used may not be revealed by simple benchmarking. It is essential to use experimentally determined rate constants and verified reaction mechanisms, irrespective of whether the approach is chemical or electrochemical.
Utility of an automated thermal-based approach for monitoring evapotranspiration
USDA-ARS?s Scientific Manuscript database
A very simple remote sensing-based model for water use monitoring is presented. The model acronym DATTUTDUT, (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature) is a Dutch word which loosely translates as “It’s unbelievable that it works”. DATTUTDUT is fully automated and o...
Measuring Success: Evaluating Educational Programs
ERIC Educational Resources Information Center
Fisher, Yael
2010-01-01
This paper presents a new evaluation model, which enables educational program and project managers to evaluate their programs with a simple and easy-to-understand approach. The "index of success model" comprises five parameters that make it possible to focus on and evaluate both the implementation and the results of an educational program. The…
Loglinear Approximate Solutions to Real-Business-Cycle Models: Some Observations
ERIC Educational Resources Information Center
Lau, Sau-Him Paul; Ng, Philip Hoi-Tak
2007-01-01
Following the analytical approach suggested in Campbell, the authors consider a baseline real-business-cycle (RBC) model with endogenous labor supply. They observe that the coefficients in the loglinear approximation of the dynamic equations characterizing the equilibrium are related to the fundamental parameters in a relatively simple manner.…
Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.
2000-01-01
In medical software development, the use of databases plays a central role. However, most databases have heterogeneous encodings and data models. Dealing with these variations in the application code directly is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature and are presented here. We present a simple solution, based on a Java library and a central metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being simplicity of maintenance. PMID:11079915
A new modelling approach for zooplankton behaviour
NASA Astrophysics Data System (ADS)
Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.
We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models—random walk and correlated walk models—as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
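For reference, a minimal sketch of the two non-"intelligent" baselines mentioned above; step length and turning-angle spread are illustrative.

```python
# Correlated random walk: the heading persists, perturbed by small random
# turns; letting turn_sd grow large recovers a simple (uncorrelated) random
# walk. Parameters are illustrative, not fitted to copepod tracks.
import numpy as np

def correlated_walk(n_steps, step=1.0, turn_sd=0.3, seed=0):
    rng = np.random.default_rng(seed)
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.normal(0.0, turn_sd)   # turn_sd -> large: random walk
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

track = correlated_walk(1000)
print(np.linalg.norm(track[-1]))  # net displacement exceeds a random walk's
```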
Defining Simple nD Operations Based on Prismatic nD Objects
NASA Astrophysics Data System (ADS)
Arroyo Ohori, K.; Ledoux, H.; Stoter, J.
2016-10-01
An alternative to the traditional approaches to model separately 2D/3D space, time, scale and other parametrisable characteristics in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
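A minimal sketch of dimension-independent vertex-level transformations follows; it illustrates the general idea only (in particular, a rotation is defined per coordinate plane (i, j) rather than per "axis", which only makes sense in 3D) and is not the paper's implementation.

```python
# Vertex-level nD operations: translation and scaling act on n-vectors
# directly; rotation acts in the plane spanned by axes i and j.
import numpy as np

def translate(vertices, offset):
    return vertices + np.asarray(offset)

def scale(vertices, factors, centre=None):
    v = np.asarray(vertices, dtype=float)
    c = v.mean(axis=0) if centre is None else np.asarray(centre)
    return (v - c) * np.asarray(factors) + c

def rotate(vertices, i, j, theta):
    """Rotate by theta in the coordinate plane spanned by axes i and j."""
    n = vertices.shape[1]
    r = np.eye(n)
    r[i, i] = r[j, j] = np.cos(theta)
    r[i, j], r[j, i] = -np.sin(theta), np.sin(theta)
    return vertices @ r.T

cube4d = np.array(np.meshgrid(*[[0, 1]] * 4)).reshape(4, -1).T  # 16 vertices
moved = rotate(translate(cube4d, [0, 0, 0, 5]), 0, 3, np.pi / 4)
print(scale(moved, 2.0).shape)                                   # (16, 4)
```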
De Benedetti, Pier G; Fanelli, Francesca
2018-03-21
Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
On the (In)Validity of Tests of Simple Mediation: Threats and Solutions
Pek, Jolynn; Hoyle, Rick H.
2015-01-01
Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models that are inherent in simple mediation. The extent to which results are meaningful stems directly from choices regarding these three facets of mediation analysis. We conclude by discussing how mediation analysis can be better applied to examine causal processes, highlighting the limits of simple mediation, and making recommendations for better practice. PMID:26985234
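For readers unfamiliar with the statistical side, a minimal sketch of simple mediation (X → M → Y) with a percentile bootstrap for the indirect effect a·b is given below; the simulated data and effect sizes are illustrative.

```python
# Simple mediation via two OLS regressions: a is the X -> M slope, b is the
# M -> Y slope controlling for X; the indirect effect is a * b, with a
# percentile bootstrap confidence interval.
import numpy as np

def indirect_effect(x, m, y):
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)          # true a = 0.5
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)  # true b = 0.4

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)               # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
print(np.percentile(boot, [2.5, 97.5]))       # bootstrap CI for a * b
```

Note that an equivalent model with M and Y swapped would fit these data equally well, which is exactly the equivalent-models caveat the paper raises.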
Quantitative metrics for evaluating the phased roll-out of clinical information systems.
Wong, David; Wu, Nicolas; Watkinson, Peter
2017-09-01
We introduce a novel quantitative approach for evaluating the order of roll-out during the phased introduction of clinical information systems. Such roll-outs are associated with unavoidable risk due to patients transferring between clinical areas using both the old and new systems. We proposed a simple graphical model of patient flow through a hospital. Using a simple instance of the model, we showed how a roll-out order can be generated by minimising the flow of patients from the new system to the old system. The model was applied to admission and discharge data acquired from 37,080 patient journeys at the Churchill Hospital, Oxford between April 2013 and April 2014. The resulting order was evaluated empirically and found to be acceptable. The development of data-driven approaches to clinical information system roll-out provides insights that may not necessarily be ascertained through clinical judgment alone. Such methods could make a significant contribution to the smooth running of an organisation during the roll-out of a potentially disruptive technology. Unlike previous approaches, which are based on clinical opinion, the approach described here quantitatively assesses the appropriateness of competing roll-out strategies. The data-driven approach was shown to produce strategies that matched clinical intuition and provides a flexible framework that may be used to plan and monitor clinical information system roll-out. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
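A toy sketch of the ordering idea follows; the flow matrix and the brute-force search over permutations are illustrative assumptions, not the authors' graphical model.

```python
# Choose the roll-out order that minimises transfers from an already-migrated
# (new-system) area to a not-yet-migrated (old-system) area: a transfer a -> b
# is "new to old" when a is rolled out before b. Brute force suffices for a
# handful of areas; the flow counts here are made up.
from itertools import permutations

flows = {("ED", "Ward"): 120, ("Ward", "ICU"): 15, ("ICU", "Ward"): 30}
areas = ["ED", "Ward", "ICU"]

def new_to_old_cost(order):
    rank = {a: i for i, a in enumerate(order)}
    return sum(f for (a, b), f in flows.items()
               if a in rank and b in rank and rank[a] < rank[b])

best = min(permutations(areas), key=new_to_old_cost)
print(best, new_to_old_cost(best))   # e.g. roll out 'Ward' before 'ED'
```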
A Holistic Management Architecture for Large-Scale Adaptive Networks
2007-09-01
transmission and processing overhead required for management. The challenges of building models to describe dynamic systems are well-known to the field of... increases the challenge of finding a simple approach to assessing the state of the network. Moreover, the performance state of one network link may be... challenging. These obstacles indicate the need for a less comprehensive-analytical, more systemic-holistic approach to managing networks. This approach might
NASA Astrophysics Data System (ADS)
Signell, Richard P.; Camossi, Elena
2016-05-01
Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
Generative Models in Deep Learning: Constraints for Galaxy Evolution
NASA Astrophysics Data System (ADS)
Turp, Maximilian Dennis; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.
2018-01-01
New techniques are essential for making advances in the field of galaxy evolution. Recent developments in the field of artificial intelligence and machine learning have proven that these tools can be applied to problems far more complex than simple image recognition. We use these purely data-driven approaches to investigate the process of star formation quenching. We show that Variational Autoencoders provide a powerful method to forward model the process of galaxy quenching. Our results imply that simple changes in specific star formation rate and bulge-to-disk ratio cannot fully describe the properties of the quenched population.
Trajectory optimization and guidance law development for national aerospace plane applications
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1988-01-01
The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for the transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.
Compact divided-pupil line-scanning confocal microscope for investigation of human tissues
NASA Astrophysics Data System (ADS)
Glazowski, Christopher; Peterson, Gary; Rajadhyaksha, Milind
2013-03-01
Divided-pupil line-scanning confocal microscopy (DPLSCM) can provide a simple and low-cost approach for imaging of human tissues with pathology-like nuclear and cellular detail. Using results from a multidimensional numerical model of DPLSCM, we found optimal pupil configurations for improved axial sectioning, as well as control of speckle noise in the case of reflectance imaging. The modeling results guided the design and construction of a simple (10 component) microscope, packaged within the footprint of an iPhone, and capable of cellular resolution. We present the optical design with experimental video-images of in-vivo human tissues.
Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean; Cheung, Kenneth C
2017-03-01
We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures.
Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures
Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean
2017-01-01
We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures. PMID:28289574
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane
2016-09-20
The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which are the largest error source of traditional calibration methods. Moreover, this new transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
Fortwaengler, Kurt; Parkin, Christopher G.; Neeser, Kurt; Neumann, Monika; Mast, Oliver
2017-01-01
The modeling approach described here is designed to support the development of spreadsheet-based simple predictive models. It is based on 3 pillars: the association of complications with HbA1c changes, the incidence of the complications, and the average cost per event of each complication. For each pillar, the goal of the analysis was (1) to find results for a large diversity of populations, with a focus on countries/regions, diabetes type, age, diabetes duration, baseline HbA1c value, and gender; and (2) to assess the range of incidences and associations previously reported. Unlike simple predictive models, which are mostly based on only 1 source of information for each of the pillars, we conducted a comprehensive, systematic literature review. Each source found was thoroughly reviewed and only sources meeting quality expectations were considered. The approach makes it possible to avoid the unintended use of extreme data. The user can utilize (1) one of the found sources, (2) the found range as validation for the found figures, or (3) the average of all found publications for an expedited estimate. The modeling approach is intended for use in average insulin-treated diabetes populations in which the baseline HbA1c values are within an average range (6.5% to 11.5%); it is not intended for use in individuals or unique diabetes populations (e.g., gestational diabetes). Because the modeling approach only considers diabetes-related complications that are positively associated with HbA1c decreases, the costs of negatively associated complications (e.g., severe hypoglycemic events) must be calculated separately. PMID:27510441
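A sketch of the three-pillar arithmetic for a single complication follows; every figure is an illustrative placeholder, not a value from the literature review.

```python
# Expected annual savings for one complication =
#   population x incidence (pillar 2) x relative risk reduction per 1% HbA1c
#   (pillar 1) x HbA1c drop x cost per event (pillar 3). Placeholder numbers.
population = 10_000          # insulin-treated patients
hba1c_drop = 0.5             # percentage-point reduction in HbA1c
incidence = 0.02             # events per patient-year
rrr_per_point = 0.20         # relative risk reduction per 1% HbA1c drop
cost_per_event = 8_000.0     # currency units per event

events_avoided = population * incidence * rrr_per_point * hba1c_drop
print(events_avoided, events_avoided * cost_per_event)
```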
NASA Astrophysics Data System (ADS)
Song, Huixu; Shi, Zhaoyao; Chen, Hongfang; Sun, Yanqiang
2018-01-01
This paper presents a novel experimental approach and a simple model for verifying, on the basis of relative motion, that the spherical mirror of a laser tracking system can lessen the effect of rotation errors of the gimbal mount axes. Sufficient material and evidence are provided to support the claim that this simple model can replace the complex optical system in a laser tracking system. The experimental approach and model interchange the kinematic relationship between the spherical mirror and the gimbal mount axes in the laser tracking system. With the gimbal mount axes held fixed, their rotation error motions are replaced by spatial micro-displacements of the spherical mirror. These motions are simulated by driving the spherical mirror along the optical axis and the vertical direction using a precision positioning platform. The effect on laser ranging measurement accuracy of the displacement caused by the rotation errors of the gimbal mount axes is recorded from the readings of a laser interferometer. The experimental results show that the laser ranging measurement error caused by the rotation errors is less than 0.1 μm if the radial and axial error motions are under 10 μm. The method, based on relative motion, not only simplifies the experimental procedure but also establishes that the spherical mirror is able to reduce the effect of rotation errors of the gimbal mount axes in a laser tracking system.
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
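For orientation, a common first-cut estimate (explicitly not the model developed in the paper) treats the probe line plus transducer cavity as a Helmholtz resonator whose natural frequency bounds the usable bandwidth; all dimensions below are illustrative.

```python
# Helmholtz-resonator estimate of probe-system natural frequency:
# f_n = (c / 2*pi) * sqrt(A / (V * L)), with tube cross-section A, tube
# length L and transducer cavity volume V. Flat response requires staying
# well below f_n. Dimensions are illustrative.
import math

c = 343.0                      # speed of sound, m/s
d = 0.5e-3                     # tube bore, m
L = 0.5                        # tube length, m
V = 20e-9                      # transducer cavity volume, m^3

A = math.pi * (d / 2) ** 2     # tube cross-sectional area, m^2
f_n = (c / (2 * math.pi)) * math.sqrt(A / (V * L))
print(f_n)                     # ~240 Hz for these dimensions
```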
A gunner model for an AAA tracking task with interrupted observations
NASA Technical Reports Server (NTRS)
Yu, C. F.; Wei, K. C.; Vikmanis, M.
1982-01-01
The problem of modeling a trained human operator's tracking performance in an anti-aircraft system under various display blanking conditions is discussed. The input to the gunner is the observable tracking error subjected to repeated interruptions (blanking). A simple and effective gunner model was developed. The effect of blanking on the gunner's tracking performance is approached via modeling the observer and controller gains.
A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment
NASA Astrophysics Data System (ADS)
Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.
2017-12-01
Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to provide a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of a sediment mixture to estimate the PSDs of the entrained sediment and the post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison with the size-dependent mobilities predicted by the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion accounting for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
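A minimal sketch of the mixture idea follows; the distribution parameters and threshold size are illustrative, and the sharp size cut is a simplification of the paper's modified threshold.

```python
# With a lognormal pre-entrainment PSD and a threshold size d_c, the mobilised
# bulk fraction is the lognormal CDF below d_c; the entrained PSD is the finer
# part of the distribution and the post-entrainment bed PSD the coarser
# remainder (each renormalised). Parameters are illustrative.
import numpy as np
from scipy.stats import lognorm

mu, sigma = np.log(2.0), 0.8            # ln-scale of bed sediment sizes, mm
d_c = 4.0                               # threshold size of incipient motion, mm

psd = lognorm(s=sigma, scale=np.exp(mu))
frac_mobilised = psd.cdf(d_c)           # bulk fraction entrained

d = np.linspace(0.1, 20, 200)
entrained_pdf = np.where(d < d_c, psd.pdf(d), 0.0) / frac_mobilised
bed_pdf = np.where(d >= d_c, psd.pdf(d), 0.0) / (1.0 - frac_mobilised)
print(frac_mobilised, d[np.argmax(entrained_pdf)])
```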
A classical density functional theory of ionic liquids.
Forsman, Jan; Woodward, Clifford E; Trulsson, Martin
2011-04-28
We present a simple, classical density functional approach to the study of simple models of room temperature ionic liquids. Dispersion attractions as well as ion correlation effects and excluded volume packing are taken into account. The oligomeric structure, common to many ionic liquid molecules, is handled by a polymer density functional treatment. The theory is evaluated by comparisons with simulations, with an emphasis on the differential capacitance, an experimentally measurable quantity of significant practical interest.
Perona, Paolo; Dürrenmatt, David J; Characklis, Gregory W
2013-03-30
We propose a theoretical river modeling framework for generating variable flow patterns in diverted streams (i.e., with no reservoir). Using a simple economic model and the principle of equal marginal utility in an inverse fashion, we first quantify the benefit of the water that goes to the environment relative to that of the anthropic activity. Then, we obtain exact expressions for optimal water allocation rules between the two competing uses, as well as the related statistical distributions. These rules are applied using both synthetic and observed streamflow data to demonstrate that this approach may be useful for (1) generating more natural flow patterns in the river reach downstream of the diversion, thus reducing the ecodeficit; (2) obtaining a more enlightened economic interpretation of Minimum Flow Release (MFR) strategies; and (3) comparing the long-term costs and benefits of variable versus MFR policies and showing the greater ecological sustainability of this new approach. Copyright © 2013 Elsevier Ltd. All rights reserved.
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.
Hussain, S A; Perrier, M; Tartakovsky, B
2018-04-01
Efforts in developing microbial electrolysis cells (MECs) have resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which together enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during operation with periodic power supply connection/disconnection (on/off operation), followed by parameter estimation using either a numerical or an analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect the MEC total internal resistance and capacitance estimated by the model. The fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
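A sketch of how such on/off data can yield circuit parameters is given below; the assumed circuit topology (a series resistance with a parallel resistance-capacitance pair), the synthetic data and all values are illustrative, not the study's estimation procedure.

```python
# On power-supply disconnection, the voltage across a parallel R_p-C pair
# relaxes exponentially with time constant tau = R_p * C; fitting the decay
# gives R_p (from the relaxation amplitude over the prior current) and C.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, v_inf, dv, tau):
    return v_inf + dv * np.exp(-t / tau)

i_applied = 0.010                         # A, current before disconnection
t = np.linspace(0, 60, 300)               # s
v_true = relaxation(t, 0.30, 0.15, 12.0)  # V, synthetic open-circuit decay
v_meas = v_true + 0.002 * np.random.default_rng(2).standard_normal(t.size)

(v_inf, dv, tau), _ = curve_fit(relaxation, t, v_meas, p0=[0.3, 0.1, 10.0])
r_p = dv / i_applied                      # ohm, resistance of the R_p-C pair
print(r_p, tau / r_p)                     # R_p and C = tau / R_p (farad)
```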
Effects of host social hierarchy on disease persistence.
Davidson, Ross S; Marion, Glenn; Hutchings, Michael R
2008-08-07
The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates and movement rates among different sets of individuals, the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous-mixing differential equation model of a disease with SI dynamics, in a population subject to a simple birth and death process, is presented, and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions, correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities which could be used to determine the importance of hierarchical corrections. Overall, hierarchical effects decrease the level of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low-prevalence behaviour of a model is critical.
Electric Conduction in Semiconductors: A Pedagogical Model Based on the Monte Carlo Method
ERIC Educational Resources Information Center
Capizzo, M. C.; Sperandeo-Mineo, R. M.; Zarcone, M.
2008-01-01
We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier…
Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.
Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan
2015-01-01
Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways and in the complexity of their songs, which are essential for mate attraction. Recent approaches have aimed at describing the behavioral preference functions of females in both taxa within a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models'—each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters, several types of preference functions observed in different cricket species could be modeled. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
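A minimal sketch of the three-step cascade follows; the filter parameters, nonlinearities and weights are illustrative, not the values found by the genetic algorithm.

```python
# Three-step model: (1) LN units, each a (here Gabor-shaped) linear filter
# followed by a static nonlinearity, (2) temporal integration as a mean over
# time, and (3) a linear combination into a response score.
import numpy as np

def gabor(t, t0, sigma, freq, phase):
    return np.exp(-0.5 * ((t - t0) / sigma) ** 2) * np.cos(2 * np.pi * freq * t + phase)

def ln_unit(stimulus, filt, nonlin):
    return nonlin(np.convolve(stimulus, filt, mode="same"))  # L, then N

dt = 1e-3
t_f = np.arange(-0.05, 0.05, dt)                             # 100 ms filters
filters = [gabor(t_f, 0.0, 0.01, 50.0, 0.0), gabor(t_f, 0.0, 0.02, 20.0, np.pi / 2)]
nonlins = [lambda x: np.maximum(x, 0.0), lambda x: x ** 2]
weights = np.array([1.0, -0.4])

song = (np.sin(2 * np.pi * 30 * np.arange(0, 1, dt)) > 0).astype(float)  # pulses
features = [ln_unit(song, f, n).mean() for f, n in zip(filters, nonlins)]  # step 2
print(float(weights @ np.array(features)))                                 # step 3
```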
Petterson, S R; Stenström, T A
2015-09-01
To support the implementation of quantitative microbial risk assessment (QMRA) for managing infectious risks associated with drinking water systems, a simple modeling approach for quantifying Log10 reduction across a free chlorine disinfection contactor was developed. The study was undertaken in three stages: firstly, review of the laboratory studies published in the literature; secondly, development of a conceptual approach to apply the laboratory studies to full-scale conditions; and finally implementation of the calculations for a hypothetical case study system. The developed model explicitly accounted for variability in residence time and pathogen specific chlorine sensitivity. Survival functions were constructed for a range of pathogens relying on the upper bound of the reported data transformed to a common metric. The application of the model within a hypothetical case study demonstrated the importance of accounting for variable residence time in QMRA. While the overall Log10 reduction may appear high, small parcels of water with short residence time can compromise the overall performance of the barrier. While theoretically simple, the approach presented is of great value for undertaking an initial assessment of a full-scale disinfection contactor based on limited site-specific information.
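A sketch of why residence-time variability matters is given below; the gamma-shaped residence time distribution and the rate constant are illustrative, not the paper's survival functions.

```python
# With first-order (Chick-Watson) inactivation, overall survival across the
# contactor is the residence-time-weighted average of parcel survivals; by
# convexity, parcels with short residence times dominate, so overall log
# reduction falls short of the plug-flow value at the same mean time.
import numpy as np
from scipy.stats import gamma

k_c = 0.5                              # inactivation rate constant, 1/min
rtd = gamma(a=4.0, scale=2.5)          # residence time distribution, mean 10 min

t = np.linspace(0.01, 60, 5000)
survival = np.trapz(np.exp(-k_c * t) * rtd.pdf(t), t)   # average over parcels
plug_flow = np.exp(-k_c * rtd.mean())                   # same mean, no spread

print(-np.log10(survival), -np.log10(plug_flow))        # log10 reductions
```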
Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto
2011-01-01
Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and to identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is characterized as a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several non-infected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare through simulation Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data with zeros that are not zero-inflated, the Poisson model or a data transformation approach, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960
NASA Technical Reports Server (NTRS)
Kowalski, Marc Edward
2009-01-01
A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.
Impact resistance of fiber composites - Energy-absorbing mechanisms and environmental effects
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1985-01-01
Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include: mechanic models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.
Impact resistance of fiber composites: Energy absorbing mechanisms and environmental effects
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1983-01-01
Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include: mechanic models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.
A partial Hamiltonian approach for current value Hamiltonian systems
NASA Astrophysics Data System (ADS)
Naz, R.; Mahomed, F. M.; Chaudhry, Azam
2014-10-01
We develop a partial Hamiltonian framework to obtain reductions and closed-form solutions via first integrals of current value Hamiltonian systems of ordinary differential equations (ODEs). The approach is algorithmic and applies to current value Hamiltonians with many state and costate variables; however, we apply the method to models with one control, one state and one costate variable to illustrate its effectiveness. Current value Hamiltonian systems arise in economic growth theory and other economic models. We explain our approach with the help of a simple illustrative example and then apply it to two widely used economic growth models: the Ramsey model with a constant relative risk aversion (CRRA) utility function and Cobb-Douglas technology, and a one-sector AK model of endogenous growth. We show that our newly developed systematic approach can be used to deduce results given in the literature and also to find new solutions.
Methods for Maximizing the Learning Process: A Theoretical and Experimental Analysis.
ERIC Educational Resources Information Center
Atkinson, Richard C.
This research deals with optimizing the instructional process. The approach adopted was to limit consideration to simple learning tasks for which adequate mathematical models could be developed. Optimal or suitable suboptimal instructional strategies were developed for the models. The basic idea was to solve for strategies that either maximize the…
Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2010-01-01
A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…
Constrained range expansion and climate change assessments
Yohay Carmel; Curtis H. Flather
2006-01-01
Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...
NASA Astrophysics Data System (ADS)
Zhou, Lingfei; Chapuis, Yves-Andre; Blonde, Jean-Philippe; Bervillier, Herve; Fukuta, Yamato; Fujita, Hiroyuki
2004-07-01
In this paper, the authors propose a model and a control strategy for a two-dimensional conveyance system based on the principles of Autonomous Decentralized Microsystems (ADM). The microconveyance system is based on distributed cooperative MEMS actuators, which produce a force field on the surface of the device to grip and move a micro-object. The modeling approach proposed here is based on a simple model of a microconveyance system represented by a 5 x 5 matrix of cells. Each cell consists of a microactuator, a microsensor, and a microprocessor, providing actuation, autonomy, and decentralized intelligence to the cell. Thus, each cell is able to identify a micro-object crossing over it and to decide on its own the appropriate control strategy to convey the micro-object to its destination target. The control strategy can be established through five simple decision rules that each cell has to respect at each calculation cycle. Simulation and FPGA implementation results are given at the end of the paper to validate the model and the control approach of the microconveyance system.
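As a rough illustration of the decentralized idea, the sketch below routes an object across a 5 x 5 grid using a purely local greedy rule; the rule is a hypothetical stand-in for the paper's five decision rules, which are not reproduced here.

```python
# Hedged sketch: a 5x5 matrix of cells, each applying a simple local rule to
# push a detected micro-object one step toward its destination each cycle.
GRID = 5

def local_rule(cell, target):
    """Each cell knows only its own position and the target; it decides on
    its own which neighbouring cell to hand the object to (greedy step)."""
    r, c = cell
    tr, tc = target
    dr = (tr > r) - (tr < r)   # -1, 0 or +1
    dc = (tc > c) - (tc < c)
    # Step along the axis with the larger remaining distance first.
    if abs(tr - r) >= abs(tc - c) and dr != 0:
        return (r + dr, c)
    return (r, c + dc)

pos, target = (0, 0), (4, 2)
path = [pos]
while pos != target:           # one decision per actuation cycle
    pos = local_rule(pos, target)
    path.append(pos)
print(path)
```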
Luo, Haoxiang; Mittal, Rajat; Zheng, Xudong; Bielamowicz, Steven A.; Walsh, Raymond J.; Hahn, James K.
2008-01-01
A new numerical approach for modeling a class of flow–structure interaction problems typically encountered in biological systems is presented. In this approach, a previously developed, sharp-interface, immersed-boundary method for incompressible flows is used to model the fluid flow, and a new, sharp-interface Cartesian grid, immersed boundary method is devised to solve the equations of linear viscoelasticity that govern the solid. The two solvers are coupled to model flow–structure interaction. This coupled solver has the advantage of simple grid generation and efficient computation on simple, single-block structured grids. The accuracy of the solid-mechanics solver is examined by applying it to a canonical problem. The solution methodology is then applied to the problem of laryngeal aerodynamics and vocal fold vibration during human phonation. This includes a three-dimensional eigen-analysis for a multi-layered vocal fold prototype as well as two-dimensional, flow-induced vocal fold vibration in a modeled larynx. Several salient features of the aerodynamics as well as vocal-fold dynamics are presented. PMID:19936017
Tracking trade transactions in water resource systems: A node-arc optimization formulation
NASA Astrophysics Data System (ADS)
Erfani, Tohid; Huskova, Ivana; Harou, Julien J.
2013-05-01
We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows using a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading to ones that also track individual supplier-receiver relationships (trade transactions).
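For readers unfamiliar with the node-arc form, the following is a minimal sketch of a minimum-cost flow assembled directly from a node-to-node connectivity matrix; the toy network, costs, capacities and demands are assumptions for illustration only.

```python
# Hedged sketch: node-arc minimum-cost flow built from a connectivity matrix.
import numpy as np
from scipy.optimize import linprog

# connectivity[i][j] = 1 if water can flow from node i to node j
connectivity = np.array([[0, 1, 1, 0],
                         [0, 0, 0, 1],
                         [0, 0, 0, 1],
                         [0, 0, 0, 0]])
arcs = [(i, j) for i in range(4) for j in range(4) if connectivity[i, j]]
cost = np.array([1.0, 3.0, 1.0, 1.0])        # unit conveyance cost per arc
supply = np.array([10.0, 0.0, 0.0, -10.0])   # node 0 supplies, node 3 demands

# Node-arc incidence matrix: +1 where an arc leaves a node, -1 where it enters.
A = np.zeros((4, len(arcs)))
for k, (i, j) in enumerate(arcs):
    A[i, k], A[j, k] = 1.0, -1.0

res = linprog(cost, A_eq=A, b_eq=supply, bounds=[(0, 8)] * len(arcs),
              method="highs")                # capacity 8 on every arc
print(dict(zip(arcs, np.round(res.x, 2))))   # optimal flow on each arc
```

Tracking a trade transaction then amounts to carrying one such flow variable per supplier-receiver commodity over the same incidence structure.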
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruzic, Jamie J.; Evans, T. Matthew; Greaney, P. Alex
The report describes the development of a discrete element method (DEM) based modeling approach to quantitatively predict deformation and failure of typical nickel based superalloys. A series of experimental data, including microstructure and mechanical property characterization at 600°C, was collected for a relatively simple, model solid solution Ni-20Cr alloy (Nimonic 75) to determine inputs for the model and provide data for model validation. Nimonic 75 was considered ideal for this study because it is a certified tensile and creep reference material. A series of new DEM modeling approaches were developed to capture the complexity of metal deformation, including cubic elastic anisotropy and plastic deformation both with and without strain hardening. Our model approaches were implemented into a commercially available DEM code, PFC3D, that is commonly used by engineers. It is envisioned that once further developed, this new DEM modeling approach can be adapted to a wide range of engineering applications.
A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks
Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.
2013-01-01
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236
Learning from physics-based earthquake simulators: a minimal approach
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2017-04-01
Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and a simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists toward ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clusters, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
Aggregative Learning Method and Its Application for Communication Quality Evaluation
NASA Astrophysics Data System (ADS)
Akhmetov, Dauren F.; Kotaki, Minoru
2007-12-01
In this paper, the so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure was elaborated for time series model reconstruction and analysis in the linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to monitoring of wireless communication quality, namely for a Fixed Wireless Access (FWA) system. The procedure was shown to require only modest memory and computational resources, especially in the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.
Eggimann, Becky L.; Vostrikov, Vitaly V.; Veglia, Gianluigi; Siepmann, J. Ilja
2013-01-01
We present a fast and simple protocol to obtain moderate-resolution backbone structures of helical proteins. This approach utilizes a combination of sparse backbone NMR data (residual dipolar couplings and paramagnetic relaxation enhancements) or EPR data with a residue-based force field and Monte Carlo/simulated annealing protocol to explore the folding energy landscape of helical proteins. By using only backbone NMR data, which are relatively easy to collect and analyze, and strategically placed spin relaxation probes, we show that it is possible to obtain protein structures with correct helical topology and backbone RMS deviations well below 4 Å. This approach offers promising alternatives for the structural determination of proteins in which nuclear Overhauser effect data are difficult or impossible to assign and produces initial models that will speed up the high-resolution structure determination by NMR spectroscopy. PMID:24639619
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
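As a rough sketch of the kernel at the core of this approach, the code below evaluates a random-walk kernel between two adjacency matrices via their direct-product (Kronecker) graph; the decay parameter, uniform start/stop distributions, and toy matrices are illustrative assumptions, not the GRAPE implementation.

```python
# Hedged sketch of a random-walk graph kernel between two local atomic
# environments, each represented by a (weighted) adjacency matrix.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """k(A1, A2) = p^T (I - lam*Ax)^{-1} p, where Ax is the adjacency matrix
    of the direct-product graph. Counts matching walks of all lengths,
    geometrically damped by lam (lam must stay below 1/spectral radius)."""
    Ax = np.kron(A1, A2)                  # direct-product graph
    n = Ax.shape[0]
    p = np.full(n, 1.0 / n)               # uniform start/stop distribution
    x = np.linalg.solve(np.eye(n) - lam * Ax, p)
    return float(np.sum(x))

# Two toy 3-atom environments (e.g. entries could encode inverse distances).
A1 = np.array([[0, 1.0, 0.5], [1.0, 0, 0.8], [0.5, 0.8, 0]])
A2 = np.array([[0, 0.9, 0.4], [0.9, 0, 0.7], [0.4, 0.7, 0]])
print(random_walk_kernel(A1, A1), random_walk_kernel(A1, A2))
```

The kernel's invariance to node relabeling is what buys the permutation symmetry discussed above; translation and rotation invariance come from building the adjacency entries out of interatomic distances.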
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Attema, Jisk
2015-08-01
Scenarios of future changes in small-scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface, as measured by the 2 m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new, unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short-term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
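The scaling idea can be illustrated in a few lines; the 7% and 14% per-degree rates below bracket Clausius-Clapeyron and "super-CC" behaviour and are stand-ins, not the expert-judgement constants adopted in the scenarios.

```python
# Hedged sketch: future precipitation extremes scaled by a fixed percentage
# per degree of dew point rise. Rates and the present-day value are assumed.
def scale_extreme(p_now_mm, delta_dewpoint_c, rate=0.07):
    """Extreme intensity after a dew point rise, at `rate` per degree C."""
    return p_now_mm * (1.0 + rate) ** delta_dewpoint_c

p_now = 30.0   # mm/h present-day hourly extreme (illustrative)
for rate in (0.07, 0.14):
    print(rate, round(scale_extreme(p_now, delta_dewpoint_c=2.0, rate=rate), 1))
```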
Shahaf, Goded; Pratt, Hillel
2013-01-01
In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights that emerge from widely accepted assumptions regarding neuronal representation. We show that harnessing even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data - the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function - as well as with a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes, it is possible to derive simple, effective and theory-based EEG markers differentiating normal and ADHD subjects. We conclude by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
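A minimal sketch of such a deterministic reservoir follows: a single cycle with one weight, input weights of fixed magnitude with a deterministic sign pattern (the authors derive signs from the expansion of an irrational number; a simple arithmetic rule stands in here), and a ridge-regression readout. All hyperparameters are illustrative, not tuned values from the paper.

```python
# Hedged sketch: simple cycle reservoir with deterministic construction and a
# ridge readout, on a toy delay-recall task (predict u[t-5]).
import numpy as np

N, r, v, ridge = 100, 0.9, 0.5, 1e-6
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = r              # single cycle: unit shift scaled by r

signs = np.array([1 if (i * i) % 7 < 4 else -1 for i in range(N)])  # deterministic
w_in = v * signs                        # fixed-magnitude input weights

rng = np.random.default_rng(1)
u = rng.uniform(-0.5, 0.5, 2000)        # input stream
y = np.roll(u, 5)                       # target: the input delayed by 5 steps

X = np.zeros((len(u), N)); x = np.zeros(N)
for t, ut in enumerate(u):
    x = np.tanh(W @ x + w_in * ut)      # reservoir state update
    X[t] = x

A, b = X[100:], y[100:]                 # discard a washout period
w_out = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ b)
print("train MSE:", np.mean((A @ w_out - b) ** 2))
```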
Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji
2017-03-23
In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by the 3D EM approach. In the proposed approach, many atomic models with different conformations are first built by computer simulation. Then, simulated EM images are generated from each atomic model. Finally, they are compared with the experimental EM image. Two kinds of models are used to generate the simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originates from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the conformational change of the protein is deduced.
A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.
Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.
1997-03-01
There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
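In that spirit, here is a minimal sketch of Gillespie's direct method for a reversible isomerization A <-> B; the rate constants and copy numbers are illustrative, and this is exactly the kind of calculation the article shows can even be reproduced in a spreadsheet.

```python
# Hedged sketch: Gillespie's direct method for A <-> B with assumed rates.
import numpy as np

rng = np.random.default_rng(2)
k_f, k_r = 1.0, 0.5            # forward and reverse rate constants (1/s)
nA, nB, t = 100, 0, 0.0
trajectory = [(t, nA, nB)]

while t < 10.0:
    a1, a2 = k_f * nA, k_r * nB      # propensities of A->B and B->A
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)   # exponentially distributed waiting time
    if rng.random() < a1 / a0:       # choose which reaction fires
        nA, nB = nA - 1, nB + 1
    else:
        nA, nB = nA + 1, nB - 1
    trajectory.append((t, nA, nB))

# Copy numbers fluctuate around the deterministic equilibrium nA ~ 33.
print(trajectory[-1])
```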
Topographic filtering simulation model for sediment source apportionment
NASA Astrophysics Data System (ADS)
Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin
2018-05-01
We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced-complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of the sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model-conditioning approach to develop a large number of possible solutions. For each model run, the locations that contribute 90% of the sediment loading are flagged, and those that appear in this set in most of the 10,000 model runs are identified as the sources most likely to dominate the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach
Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.
2017-01-01
BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100,000 population (95%CI 56.8-156.3). Results show differences between urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
Observations and Models of Highly Intermittent Phytoplankton Distributions
Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu
2014-01-01
The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow phytoplankton to be measured at ever greater resolution, revealing high spatial variability. In conventional mathematical models, the mean value of the measured variable is used for comparison with model output, which may misrepresent the reality of planktonic ecosystems, especially at the microscale level. To account for the intermittency of variables, a new modelling approach to the planktonic ecosystem, called the closure approach, is applied in this work. Using this approach for a simple nutrient-phytoplankton model, we show how consideration of the fluctuating parts of model variables can affect system dynamics. We also find a critical value of the variance of the overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model give the same result. This analysis indicates the importance of the fluctuating parts of model variables and clarifies when to use the closure approach. Comparisons of plots of the mean versus the standard deviation of phytoplankton at different depths, obtained using this new approach, with real observations show good agreement. PMID:24787740
Proposal for an integrated evaluation model for the study of whole systems health care in cancer.
Jonas, Wayne B; Beckner, William; Coulter, Ian
2006-12-01
For more than 200 years, biomedicine has approached the treatment of disease by studying disease processes (pathogenesis), inferring causal connections and developing specific approaches for therapeutically interfering with those processes. This pathogenic approach has been highly successful in acute and traumatic disease but less successful in chronic disease, primarily because of the complex, multi-factorial nature of most chronic disease, which does not allow for simple causal inference or for simple therapeutic interventions. This article suggests that chronic disease is best approached by enhancing healing processes (salutogenesis) as a whole system. Because of the nature of complex systems in chronic disease, an evaluation model based on integrative medicine is felt to be more appropriate than a disease model. The authors propose and describe an integrated model for the evaluation of healing (IMEH) that collects multilevel "thick case" observational data in assessing complex practices for chronic disease. If successful, this approach could become a blueprint for studying healing capacity in whole medical systems, including complementary medicine, traditional medicine, and conventional primary care. In addition, streamlining data collection and applying rapid informatics management might allow for such data to be used in guiding clinical practice. The IMEH involves collection, integration, and potentially feedback of relevant variables in the following areas: (1) sociocultural, (2) psychological and behavioral, (3) clinical (diagnosis based), and (4) biological. Evaluation and integration of these components would involve specialized research teams that feed their data into a single data management and information analysis center. These data can then be subjected to descriptive and pathway analysis providing "bench and bedside" information.
Dark matter and MOND dynamical models of the massive spiral galaxy NGC 2841
NASA Astrophysics Data System (ADS)
Samurović, S.; Vudragović, A.; Jovanović, M.
2015-08-01
We study dynamical models of the massive spiral galaxy NGC 2841 using both Newtonian models with Navarro-Frenk-White (NFW) and isothermal dark haloes, and various MOND (MOdified Newtonian Dynamics) models. We use observations drawn from several publicly available databases: radio data, near-infrared photometry and spectroscopic observations. In our models, we find that both tested Newtonian dark matter approaches can successfully fit the observed rotation curve of NGC 2841. The three tested MOND models (standard, simple and, for the first time applied to a spiral galaxy other than the Milky Way, Bekenstein's toy model) fit the observed rotation curve with varying degrees of success: the best result was obtained with the standard MOND model. For both approaches, Newtonian and MOND, the values of the mass-to-light ratios of the bulge are consistent with the predictions from stellar population synthesis (SPS) based on the Salpeter initial mass function (IMF). Also, for the Newtonian and the simple and standard MOND models, the estimated stellar mass-to-light ratios of the disc agree with the predictions from SPS models based on the Kroupa IMF, whereas the toy MOND model yields too low a stellar mass-to-light ratio, incompatible with the predictions of the tested SPS models. In all our MOND models we vary the distance to NGC 2841: our best-fitting standard and toy models use values higher than the Cepheid-based distance to the galaxy, whereas the best-fitting simple MOND model is based on a lower value of the distance. The best-fitting NFW model is inconsistent with the predictions of the Λ cold dark matter cosmology, because the inferred concentration index is too high for the established virial mass.
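For orientation, the sketch below computes a MOND rotation curve from a baryonic one under the "simple" interpolating function mu(x) = x/(1+x), for which the MOND acceleration has a closed form; the toy baryonic curve is an assumption, not the NGC 2841 photometry used in the paper.

```python
# Hedged sketch: rotation velocity under the "simple" MOND interpolating
# function. Solving mu(g/a0)*g = g_N with mu(x) = x/(1+x) gives
# g = (g_N + sqrt(g_N^2 + 4*g_N*a0)) / 2.
import numpy as np

a0 = 1.2e-10                    # m/s^2, canonical MOND acceleration scale
kpc = 3.086e19                  # m

def mond_velocity(r_m, v_newton):
    g_n = v_newton**2 / r_m                            # Newtonian (baryonic) acceleration
    g = 0.5 * (g_n + np.sqrt(g_n**2 + 4 * g_n * a0))   # closed-form MOND acceleration
    return np.sqrt(g * r_m)

r = np.linspace(1, 50, 6) * kpc
v_n = 180e3 * np.sqrt(r / (r + 5 * kpc))               # toy baryonic curve, m/s
print(np.round(mond_velocity(r, v_n) / 1e3, 1))        # km/s; flattens at large r
```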
Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon
2017-04-05
Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. Much less explored, however, are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model alone; in several cases, however, they could be explained through the addition of a second model parameter, a simple scaling term that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
Collector modulation in high-voltage bipolar transistor in the saturation mode: Analytical approach
NASA Astrophysics Data System (ADS)
Dmitriev, A. P.; Gert, A. V.; Levinshtein, M. E.; Yuferev, V. S.
2018-04-01
A simple analytical model is developed, capable of replacing the numerical solution of a system of nonlinear partial differential equations by solving a simple algebraic equation when analyzing the collector resistance modulation of a bipolar transistor in the saturation mode. In this approach, the leakage of the base current into the emitter and the recombination of non-equilibrium carriers in the base are taken into account. The data obtained are in good agreement with the results of numerical calculations and make it possible to describe both the motion of the front of the minority carriers and the steady state distribution of minority carriers across the collector in the saturation mode.
Ritchie, J Brendan; Carlson, Thomas A
2016-01-01
A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and of observers under decision-boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
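A minimal sketch of the neural distance-to-bound recipe: train a linear classifier on activation patterns and map each pattern's distance from the decision boundary onto reaction time. The synthetic data and the linear RT mapping constants below are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch: decision-boundary distance as a reaction-time predictor.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n, d = 200, 50                                   # trials x "voxels/sensors"
y = rng.integers(0, 2, n)                        # two stimulus categories
X = rng.normal(size=(n, d)) + y[:, None] * 0.6   # class-separated patterns

clf = LinearSVC(C=1.0).fit(X, y)
dist = clf.decision_function(X) / np.linalg.norm(clf.coef_)  # signed distance

# Signal-detection-style prediction: patterns far from the bound are "easy",
# hence fast; RT falls linearly with |distance| (a, b assumed here).
a, b = 600.0, 80.0                               # ms
rt_pred = a - b * np.abs(dist)
print(rt_pred[:5].round(1))
```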
HEADROOM APPROACH TO DEVICE DEVELOPMENT: CURRENT AND FUTURE DIRECTIONS.
Girling, Alan; Lilford, Richard; Cole, Amanda; Young, Terry
2015-01-01
The headroom approach to medical device development relies on the estimation of a value-based price ceiling at different stages of the development cycle. Such price ceilings delineate the commercial opportunities for new products in many healthcare systems. We apply a simple model to obtain critical business information as the product proceeds along a development pathway, and indicate some future directions for the development of the approach. The setting is health economic modelling in the supply-side development cycle for new products. The headroom can be used: initially as a 'reality check' on the viability of the device in the healthcare market; to support product development decisions using a real-options approach; and to contribute to a pricing policy that respects uncertainties in the reimbursement outlook. The headroom provides a unifying thread for business decisions along the development cycle for a new product. Over the course of the cycle, attitudes to uncertainty will evolve, based on the timing and manner in which new information accrues. Within this framework the developmental value of new information can justify the costs of clinical trials and other evidence-gathering activities. Headroom can function as a simple shared tool for parties in commercial negotiations around individual products or groups of products. The development of similar approaches in other contexts holds promise for more rational planning of service provision.
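At its simplest, the headroom calculation is one line of arithmetic; the sketch below uses illustrative numbers and a generic willingness-to-pay threshold, not values from the article.

```python
# Hedged sketch: the value-based price ceiling is the health gain valued at
# the payer's willingness-to-pay threshold, plus any downstream cost offsets.
def headroom(delta_qaly, wtp_per_qaly, cost_offset=0.0):
    """Maximum reimbursable price per patient for the new device."""
    return wtp_per_qaly * delta_qaly + cost_offset

# A device expected to add 0.05 QALYs and save 300 in downstream care,
# under an assumed 20,000-per-QALY threshold:
print(headroom(delta_qaly=0.05, wtp_per_qaly=20_000, cost_offset=300.0))  # 1300.0
```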
Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien
2012-01-01
Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identification of the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear elastic constitutive law. A multivariate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to identify Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a biofaithful biomechanical response.
On the simple random-walk models of ion-channel gate dynamics reflecting long-term memory.
Wawrzkiewicz, Agata; Pawelek, Krzysztof; Borys, Przemyslaw; Dworakowska, Beata; Grzywna, Zbigniew J
2012-06-01
Several approaches to modelling ion-channel gating have been proposed. Although many models describe the dwell-time distributions correctly, they are incapable of predicting and explaining the long-term correlations between the lengths of adjacent openings and closings of a channel. In this paper we propose two simple random-walk models of the gating dynamics of voltage- and Ca(2+)-activated potassium channels which qualitatively reproduce the dwell-time distributions and describe the experimentally observed long-term memory quite well. A biological interpretation of both models is presented. In particular, the origin of the correlations is associated with fluctuations of channel mass density. The long-term memory effect, as measured by Hurst R/S analysis of experimental single-channel patch-clamp recordings, is close to the behaviour predicted by our models. The flexibility of the models enables their use as templates for other types of ion channel.
Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorissen, Filip; Wetter, Michael; Helsen, Lieve
This paper presents an approach for speeding up Modelica models. Insight is provided into how Modelica models are solved and what determines a tool’s computational speed. Aspects such as algebraic loops, code efficiency and integrator choice are discussed. This is illustrated using simple building simulation examples and Dymola. The generality of the work is in some cases verified using OpenModelica. Using this approach, a medium-sized office building, including the building envelope, heating, ventilation and air conditioning (HVAC) systems and control strategy, can be simulated five hundred times faster than real time.
Kopyt, Paweł; Celuch, Małgorzata
2007-01-01
A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical in microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against the original experimental data as well as the measurement results available in literature.
Data-driven outbreak forecasting with a simple nonlinear growth model
Lega, Joceline; Brown, Heidi E.
2016-01-01
Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. PMID:27770752
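The underlying mathematical property can be sketched as follows: for a logistic-type outbreak, incidence plotted against cumulative cases is close to a parabola through the origin, so a least-squares fit recovers the growth rate and final size. The synthetic case reports below are an assumption; this is an illustration of the idea, not the EpiGro code.

```python
# Hedged sketch: fit dC/dt = r*C*(1 - C/K) to cumulative case reports to
# estimate the outbreak's final size K. Data here are synthetic.
import numpy as np

t = np.arange(0, 30)
C = 1000.0 / (1.0 + 99.0 * np.exp(-0.3 * t))   # logistic outbreak, K=1000, r=0.3

dC = np.diff(C)                   # daily incidence
Cm = 0.5 * (C[1:] + C[:-1])       # midpoint cumulative cases

# Least squares for incidence = r*Cm - (r/K)*Cm^2 (parabola through origin).
A = np.vstack([Cm, -Cm**2]).T
(r_hat, rK_hat), *_ = np.linalg.lstsq(A, dC, rcond=None)
K_hat = r_hat / rK_hat
print(f"r ~ {r_hat:.3f}, final size K ~ {K_hat:.0f}")
```

Because the fit needs only the shape of the incidence-versus-cumulative-cases curve, no transmission parameters have to be known in advance, which is the point made above.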
Wong, Tony E.; Keller, Klaus
2017-01-01
The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095
Han, Quan Feng; Wang, Ze Wu; Tang, Chak Yin; Chen, Ling; Tsui, Chi Pong; Law, Wing Cheung
2017-07-01
Poly-D-L-lactide/nano-hydroxyapatite (PDLLA/nano-HA) can be used as a biological scaffold material in bone tissue engineering, as it can readily be made into a porous composite material with excellent performance. However, constitutive modeling of the mechanical response of porous PDLLA/nano-HA under various stress conditions has been very limited so far. In this work, four types of fundamental compressible hyper-elastic constitutive models were introduced for constitutive modeling and investigation of the mechanical behavior of porous PDLLA/nano-HA. Moreover, unitary expressions of the Cauchy stress tensor were derived for PDLLA/nano-HA under uniaxial compression (or stretch), biaxial compression (or stretch), pure shear and simple shear loads by using the theory of continuum mechanics. The theoretical results determined from the approach based on the Ogden compressible hyper-elastic constitutive model were in good agreement with the experimental data from the uniaxial compression tests. Furthermore, this approach can also be used to predict the mechanical behavior of the porous PDLLA/nano-HA material under biaxial compression (or stretch), pure shear and simple shear. Copyright © 2017 Elsevier Ltd. All rights reserved.
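For intuition, the sketch below evaluates the uniaxial Cauchy stress of a one-term Ogden material under the common incompressible simplification; note the study itself fits compressible forms, and the parameter values here are illustrative rather than fitted.

```python
# Hedged sketch: one-term Ogden model under uniaxial loading, incompressible
# simplification. With axial stretch lam and lateral stretches lam**(-1/2),
# the Cauchy stress is sigma = mu * (lam**alpha - lam**(-alpha/2)).
import numpy as np

def ogden_uniaxial_stress(stretch, mu, alpha):
    lam = np.asarray(stretch)
    return mu * (lam**alpha - lam**(-alpha / 2.0))

lam = np.linspace(0.7, 1.0, 7)        # compression: stretch < 1
print(np.round(ogden_uniaxial_stress(lam, mu=0.5e6, alpha=6.0), 0))  # Pa
```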
NASA Astrophysics Data System (ADS)
Osman, Yassin Z.; Bruen, Michael P.
2002-07-01
Seepage from a stream, which partially penetrates an unconfined alluvial aquifer, is studied for the case when the water table falls below the streambed level. Inadequacies are identified in current modelling approaches to this situation. A simple and improved method of incorporating such seepage into groundwater models is presented. This considers the effect on seepage flow of suction in the unsaturated part of the aquifer below a disconnected stream and allows for the variation of seepage with water table fluctuations. The suggested technique is incorporated into the saturated code MODFLOW and is tested by comparing its predictions with those of SWMS_2D, a widely used variably saturated model simulating water flow and solute transport in two-dimensional variably saturated media. Comparisons are made of both seepage flows and local mounding of the water table. The suggested technique compares very well with the results of the variably saturated model simulations. Most currently used approaches are shown to underestimate the seepage and the associated local water table mounding, sometimes substantially. The proposed method is simple, easy to implement and requires only a small amount of additional data about the aquifer hydraulic properties.
Chesapeake Bay Sediment Flux Model
1993-06-01
The model developed below is based on both of these approaches (Van der Molen, 1991; Yoshida, 1981). It incorporates diagenetic kinetics (Berner, 1980; van Cappellen and Berner, 1988) that relate the diagenetic production of phosphate to the resulting pore water concentration.
NASA Astrophysics Data System (ADS)
Nor, M. K. Mohd; Noordin, A.; Ruzali, M. F. S.; Hussen, M. H.; Mustapa@Othman, N.
2017-04-01
The Simple Structural Surfaces (SSS) method is offered as a means of organizing the process of rationalizing the load paths in a basic vehicle body structure. The application of this simplified approach is highly beneficial in the development of modern passenger car structure design. In Malaysia, the SSS topic has been widely adopted and is effectively compulsory in automotive vehicle structures courses in many higher education institutions. However, no real physical model of SSS has been available to give considerable insight into the function of each major subassembly in the whole vehicle structure. Motivated by this, a real physical SSS sedan model and corresponding bending tests of the model vehicle are proposed in this work. The proposed approach is relatively easy to understand compared with the Finite Element Method (FEM). The results prove that the proposed vehicle model test is useful for physically demonstrating the importance of providing continuous load paths through the necessary structural components within the vehicle structure. It is clearly observed that the global bending stiffness reduces significantly as more panels are removed from the complete SSS model. The analysis shows the front parcel shelf is an important subassembly for sustaining bending load.
NASA Astrophysics Data System (ADS)
Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani
2014-05-01
Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and run-out paths of debris flows depend on the volume, composition and initiation zone of the released material, which are required inputs for accurate debris-flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model, which computes the timing, location, and volume of landslides, with simple approaches for estimating debris-flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris-flow paths were computed for landslides predicted with the CHLT model over a range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris-flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the predictability and impact assessment of such abruptly released masses.
NASA Technical Reports Server (NTRS)
Shackelford, John H.; Saugen, John D.; Wurst, Michael J.; Adler, James
1991-01-01
A generic planar three-degree-of-freedom simulation was developed that supports hardware-in-the-loop simulations and guidance and control analysis, and can directly generate flight software. This simulation was developed in a small amount of time utilizing rapid prototyping techniques. The approach taken to develop this simulation tool, the benefits seen using this approach to development, and ongoing efforts to improve and extend this capability are described. The simulation is composed of three major elements: (1) a docker dynamics model, (2) a dockee dynamics model, and (3) a docker control system. The docker and dockee models are based on simple planar orbital dynamics equations using a spherical-earth gravity model. The docker control system is based on a phase-plane approach to error correction.
Osman, Magda; Wiegmann, Alex
2017-03-01
In this review we make a simple theoretical argument: for theory development, computational modeling, and general frameworks for understanding moral psychology, researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models in moral psychology, which tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument, we show that a simple value-based decision model can capture a range of core moral behaviors. Crucially, we propose that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.
ERIC Educational Resources Information Center
Spears, Janine L.; Parrish, James L., Jr.
2013-01-01
This teaching case introduces students to a relatively simple approach to identifying and documenting security requirements within conceptual models that are commonly taught in systems analysis and design courses. An introduction to information security is provided, followed by a classroom example of a fictitious company, "Fun &…
A model predictive speed tracking control approach for autonomous ground vehicles
NASA Astrophysics Data System (ADS)
Zhu, Min; Chen, Huiyan; Xiong, Guangming
2017-03-01
This paper presents a novel speed tracking control approach based on a model predictive control (MPC) framework for autonomous ground vehicles. A switching algorithm without calibration is proposed to determine the drive or brake control. Combined with a simple inverse longitudinal vehicle model and adaptive regulation of the MPC, this algorithm can make use of the engine brake torque under various driving conditions and automatically avoid high-frequency oscillations. A simplified quadratic program (QP) solving algorithm is used to reduce the computational time, and the approach has been implemented on a 16-bit microcontroller. The performance of the proposed approach is evaluated via simulations and vehicle tests, which were carried out in a range of speed-profile tracking tasks. With a well-designed system structure, high-precision speed control is achieved. The approach is robust to model uncertainty and external disturbances, and yields a faster response with less overshoot than a PI controller.
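A minimal sketch of the receding-horizon idea behind such a controller follows, with a toy discrete longitudinal model and an unconstrained least-squares solution in place of the paper's constrained QP with drive/brake switching; all constants are assumptions.

```python
# Hedged sketch: unconstrained MPC for speed tracking with a toy model
# v[k+1] = a_m*v[k] + b_m*u[k]; the first input of the horizon is applied.
import numpy as np

a_m, b_m = 0.98, 0.05        # assumed discrete longitudinal dynamics
N, w_u = 10, 0.01            # horizon length and input-effort weight

def mpc_step(v0, v_ref):
    # Lifted prediction over the horizon: v = F*v0 + G*u.
    F = np.array([a_m ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a_m ** (i - j) * b_m
    # Minimize ||F*v0 + G*u - v_ref||^2 + w_u*||u||^2 (an unconstrained QP).
    H = G.T @ G + w_u * np.eye(N)
    u = np.linalg.solve(H, G.T @ (v_ref - F * v0))
    return u[0]              # receding horizon: apply only the first input

v, v_target = 10.0, 15.0
for k in range(40):          # closed-loop simulation
    u = mpc_step(v, np.full(N, v_target))
    v = a_m * v + b_m * u
print(round(v, 2))           # converges toward 15
```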
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate and robust for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
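The assembly idea can be sketched in one dimension: local first-order models at a few sampling states, blended with normalized Gaussian radial-basis weights. The target function and widths are illustrative assumptions, not the airfoil application above.

```python
# Hedged sketch: assemble piecewise linear (first-order Taylor) models with
# normalized radial-basis-function weights.
import numpy as np

f = np.sin                                  # stand-in "full-order" response
centers = np.linspace(0.0, 2 * np.pi, 7)    # sampling states
values = np.sin(centers)                    # f at the sampling states
slopes = np.cos(centers)                    # local first-order terms df/dx

def blended_model(x, width=0.5):
    """Blend local models f(c) + f'(c)*(x - c) with normalized RBF weights."""
    x = np.atleast_1d(x)[:, None]
    w = np.exp(-((x - centers) ** 2) / (2 * width**2))
    w /= w.sum(axis=1, keepdims=True)       # weights sum to one at every x
    local = values + slopes * (x - centers)
    return np.sum(w * local, axis=1)

xs = np.linspace(0, 2 * np.pi, 9)
print(np.max(np.abs(blended_model(xs) - f(xs))))   # assembly error
```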
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
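A one-dimensional sketch of the elliptic differential filter at the heart of this approach: the filtered field solves (I - a^2 d^2/dx^2) u_bar = u, and the residual u - u_bar serves as the subgrid-velocity estimate. The periodic test signal and the fixed filter width a are assumptions; in the paper the width is set dynamically from subgrid-energy consistency.

```python
# Hedged sketch: 1-D elliptic differential filter on a periodic grid. The
# filter damps wavenumber k by 1/(1 + (a*k)^2), so small scales survive
# mainly in the residual u - u_bar.
import numpy as np

n, L, a = 128, 2 * np.pi, 0.2
x = np.linspace(0, L, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(12 * x)        # resolved + small-scale content

# Periodic second-difference operator (dense for clarity).
h = L / n
D2 = (np.roll(np.eye(n), 1, axis=0) - 2 * np.eye(n)
      + np.roll(np.eye(n), -1, axis=0)) / h**2

u_bar = np.linalg.solve(np.eye(n) - a**2 * D2, u)   # elliptic filter
u_sgs = u - u_bar                                   # subgrid-scale estimate
print(np.max(np.abs(u_bar)), np.max(np.abs(u_sgs)))
```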
A simple electric circuit model for proton exchange membrane fuel cells
NASA Astrophysics Data System (ADS)
Lazarou, Stavros; Pyrgioti, Eleftheria; Alexandridis, Antonio T.
A simple and novel dynamic circuit model for a proton exchange membrane (PEM) fuel cell, suitable for the analysis and design of power systems, is presented. The model takes into account phenomena such as activation polarization, ohmic polarization, and the mass transport effect present in a PEM fuel cell. The proposed circuit model includes three resistors to adequately capture these phenomena; however, since the connection or disconnection of an additional load is of crucial importance for the dynamic performance of a PEM fuel cell, the proposed model also uses two saturable inductors accompanied by an ideal transformer to simulate the double-layer charging effect during load step changes. To evaluate the effectiveness of the proposed model, its dynamic performance under load step changes is simulated. Experimental results from a commercial PEM fuel cell module, which uses hydrogen from a pressurized cylinder at the anode and atmospheric oxygen at the cathode, clearly verify the simulation results.
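For context, any such circuit model must reproduce the static polarization curve with its activation, ohmic, and mass-transport losses; the sketch below uses typical textbook coefficients, not parameters of the commercial module tested by the authors.

```python
# Hedged sketch: static PEM fuel cell polarization curve,
# V(i) = E0 - A*ln(i/i0) - R*i - m*exp(n*i), with assumed coefficients.
import numpy as np

E0 = 1.2       # V, open-circuit voltage
A = 0.06       # V, Tafel slope (activation polarization)
i0 = 1e-4      # A/cm^2, exchange current density
R = 0.2        # ohm*cm^2, area-specific ohmic resistance
m, n = 3e-5, 8.0   # V and cm^2/A, mass-transport (concentration) term

def cell_voltage(i):
    return E0 - A * np.log(i / i0) - R * i - m * np.exp(n * i)

for i in (0.05, 0.2, 0.5, 0.9):    # A/cm^2
    print(i, round(cell_voltage(i), 3))
```

The dynamic elements described in the abstract (the resistors, saturable inductors and ideal transformer) shape how the operating point moves along this curve during load steps.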
Mutual Comparative Filtering for Change Detection in Videos with Unstable Illumination Conditions
NASA Astrophysics Data System (ADS)
Sidyakin, Sergey V.; Vishnyakov, Boris V.; Vizilter, Yuri V.; Roslov, Nikolay I.
2016-06-01
In this paper we propose a new approach for change detection and moving object detection in videos with unstable, abrupt illumination changes. The approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantage for change detection purposes. The presented approach allows us to deal with changing illumination conditions in a simple and efficient way and avoids the drawbacks of models that assume particular color transformation laws. The proposed procedure can be used to improve a number of background modelling methods that are not specifically designed to work under illumination changes.
Simplified and advanced modelling of traction control systems of heavy-haul locomotives
NASA Astrophysics Data System (ADS)
Spiryagin, Maksym; Wolfs, Peter; Szanto, Frank; Cole, Colin
2015-05-01
Improving tractive effort is a very complex task in locomotive design. It requires the development of not only mechanical systems but also power systems, traction machines and traction algorithms. At the initial design stage, traction algorithms can be verified by means of a simulation approach. A simple single-wheelset simulation approach is not sufficient because the full locomotive dynamics are not taken into consideration. Given that many traction control strategies exist, the best solution is to use more advanced approaches for such studies. This paper describes the modelling of a locomotive with a bogie traction control strategy based on a co-simulation approach in order to deliver more accurate results. The simplified and advanced modelling approaches of a locomotive electric power system are compared in this paper in order to answer a fundamental question: what level of modelling complexity is necessary for the investigation of the dynamic behaviour of a heavy-haul locomotive running under traction? The simulation results obtained provide some recommendations on simulation processes and the further implementation of advanced and simplified modelling approaches.
NASA Astrophysics Data System (ADS)
RUIZ, L.; Fovet, O.; Faucheux, M.; Molenat, J.; Sekhar, M.; Aquilina, L.; Gascuel-odoux, C.
2013-12-01
The development of simple and easily accessible metrics is required for characterizing and comparing catchment response to external forcings (climate or anthropogenic) and for managing water resources. The hydrological and geochemical signatures in the stream represent the integration of the various processes controlling this response. The complexity of these signatures over several time scales, from sub-daily to several decades [Kirchner et al., 2001], makes their deconvolution very difficult. A large range of modeling approaches intend to represent this complexity by accounting for the spatial and/or temporal variability of the processes involved. However, simple metrics are not easily retrieved from these approaches, mostly because of over-parametrization issues. We hypothesize that to obtain relevant metrics, we need to use models that are able to simulate the observed variability of river signatures at different time scales while being as parsimonious as possible. The lumped model ETNA (modified from [Ruiz et al., 2002]) is able to simulate adequately the seasonal and inter-annual patterns of stream NO3 concentration. Shallow groundwater is represented by two linear stores with double porosity, and riparian processes are represented by a constant nitrogen removal function. Our objective was to identify simple metrics of catchment response by calibrating this lumped model on two paired agricultural catchments where both N inputs and outputs were monitored for a period of 20 years. These catchments, belonging to ORE AgrHys, although underlain by the same granitic bedrock, display contrasting chemical signatures. The model was able to simulate the two contrasting observed patterns in stream and groundwater, in both hydrology and chemistry, at the seasonal and pluri-annual scales. It was also compatible with the expected trends of nitrate concentration since 1960. The output variables of the model were used to compute the nitrate residence time in both catchments. We used the Generalized Likelihood Uncertainty Estimation (GLUE) approach [Beven and Binley, 1992] to assess the parameter uncertainties and the subsequent error in model outputs and residence times. Reasonably low parameter uncertainties were obtained by calibrating the two paired catchments simultaneously with the time series of stream flow and nitrate concentration at the two outlets. Finally, only one parameter controlled the contrast in nitrogen residence times between the catchments. Therefore, this approach provided a promising metric for classifying the variability of catchment response to agricultural nitrogen inputs. Beven, K., and A. Binley (1992), The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6(3), 279-298. Kirchner, J. W., X. Feng, and C. Neal (2001), Catchment-scale advection and dispersion as a mechanism for fractal scaling in stream tracer concentrations, Journal of Hydrology, 254(1-4), 82-101. Ruiz, L., S. Abiven, C. Martin, P. Durand, V. Beaujouan, and J. Molenat (2002), Effect on nitrate concentration in stream water of agricultural practices in small catchments in Brittany: II. Temporal variations and mixing processes, Hydrology and Earth System Sciences, 6(3), 507-513.
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
Predictions of Bedforms in Tidal Inlets and River Mouths
2016-07-31
... that community modeling environment. APPROACH: Bedforms are ubiquitous in unconsolidated sediments. They act as roughness elements, altering the ... flow and creating feedback between the bed and the flow and, in doing so, they are intimately tied to erosion, transport and deposition of sediments ... With this approach, grain-scale sediment transport is parameterized with simple rules to drive bedform-scale dynamics. Gallagher (2011) developed a ...
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2011-02-23
INTRODUCTION; 2.2 GENERAL MODEL SETUP; 2.2.1 Co-Simulation Principles; 2.2.2 Double pendulum: a simple example; 2.2.3 Description of numerical ... pendulum sample problem; 2.3 DISCUSSION OF APPROACH WITH RESPECT TO PROPOSED SUBTASKS; 2.4 RESULTS DISCUSSION AND FUTURE WORK; TASK 3 ... [Kim and Praehofer 2000]. 2.2.2 Double pendulum: a simple example. In order to be able to evaluate co-simulation principles, specifically an ...
Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition
NASA Astrophysics Data System (ADS)
McGilvray, M.; Dann, A. G.; Jacobs, P. A.
2013-07-01
Only a limited number of free-stream flow properties can be measured at the nozzle exit in hypersonic impulse facilities. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically in a reflected shock tunnel, a simple analysis that requires few computational resources is used to calculate quasi-steady gas properties. This simple analysis combines initial fill conditions and experimental measurements in analytical calculations of each major flow process, using forward coupling with minor corrections to include processes that are not directly modeled. However, this simplistic approach leads to an unknown level of discrepancy from the true flow properties. To explore the accuracy of the simple modelling technique, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties from a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison with experimental data obtained from the facility. For the condition and facility investigated, the test conditions at the nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results from the complete facility calculations to within the accuracy of the experimental measurements.
Modeling epidemics on adaptively evolving networks: A data-mining perspective.
Kattis, Assimakis A; Holiday, Alexander; Stoica, Ana-Andreea; Kevrekidis, Ioannis G
2016-01-01
The exploration of epidemic dynamics on dynamically evolving ("adaptive") networks poses nontrivial challenges to the modeler, such as the determination of a small number of informative statistics of the detailed network state (that is, a few "good observables") that usefully summarize the overall (macroscopic, systems-level) behavior. Obtaining accurate reduced models of small size in terms of these few statistical observables, that is, trying to coarse-grain the full network epidemic model to a small but useful macroscopic one, is even more daunting. Here we describe a data-based approach to solving the first challenge: the detection of a few informative collective observables of the detailed epidemic dynamics. This is accomplished through Diffusion Maps (DMAPS), a recently developed data-mining technique. We illustrate the approach through simulations of a simple mathematical model of epidemics on a network: a model known to exhibit complex temporal dynamics. We discuss potential extensions of the approach, as well as possible shortcomings.
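A bare-bones sketch of the Diffusion Maps computation the approach rests on, under the usual Gaussian-kernel construction; the kernel scale `eps` and the variable names are assumptions, not the authors' code:

```python
import numpy as np

def diffusion_maps(X, eps, n_coords=2):
    """X: one network-state summary per row. Returns the leading
    nontrivial eigenvector coordinates (the 'collective observables')."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                   # Gaussian kernel matrix
    P = K / K.sum(axis=1, keepdims=True)    # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)          # sort eigenvalues, largest first
    # Skip the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, order[1:n_coords + 1]].real
```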
Leading temperature dependence of the conductance in Kondo-correlated quantum dots.
Aligia, A A
2018-04-18
Using renormalized perturbation theory in the Coulomb repulsion, we derive an analytical expression for the leading term in the temperature dependence of the conductance through a quantum dot described by the impurity Anderson model, in terms of the renormalized parameters of the model. Taking these parameters from the literature, we compare the results with published ones calculated using the numerical renormalization group, obtaining very good agreement. The approach is superior to alternative perturbative treatments. We compare in particular to the results of a simple interpolative perturbation approach.
Water balance models in one-month-ahead streamflow forecasting
Alley, William M.
1985-01-01
Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
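A small sketch of the Pareto-dominance test described above, assuming lower goodness-of-fit values mean better fit to each target; a generic illustration rather than the authors' code:

```python
import numpy as np

def pareto_frontier(gof):
    """gof[i, j]: fit of input set i to calibration target j (lower is
    better). Returns a boolean mask of non-dominated input sets."""
    n = gof.shape[0]
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        # Input set i is dominated if some other set fits every target
        # at least as well and at least one target strictly better.
        better = np.all(gof <= gof[i], axis=1) & np.any(gof < gof[i], axis=1)
        on_front[i] = not better.any()
    return on_front
```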
Did the ever dead outnumber the living and when? A birth-and-death approach
NASA Astrophysics Data System (ADS)
Avan, Jean; Grosjean, Nicolas; Huillet, Thierry
2015-02-01
This paper is an attempt to formalize analytically the question raised in 'World Population Explained: Do Dead People Outnumber Living, Or Vice Versa?' Huffington Post, Howard (2012). We start by developing simple deterministic Malthusian growth models of the problem (with birth and death rates either constant or time-dependent) before moving on to linear birth-and-death Markov chain models and age-structured models.
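For the constant-rate case, the flavour of the calculation can be shown in a few lines; this is a generic Malthusian sketch consistent with the abstract, not the paper's full treatment. With per-capita birth rate b and death rate d (b > d), the living population and the cumulative dead are

```latex
N(t) = N_0\, e^{(b-d)t}, \qquad
D(t) = \int_0^t d\, N(s)\, \mathrm{d}s
     = \frac{d}{b-d}\, N_0 \left( e^{(b-d)t} - 1 \right)
```

so D(t)/N(t) tends to d/(b-d) as t grows, and in this simple setting the ever dead eventually outnumber the living exactly when d/(b-d) > 1, that is, when b < 2d.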
Acoustical and Other Physical Properties of Marine Sediments
1991-01-01
... Granular Structure of Rocks; 4. Anisotropic Poroelasticity and Biot's Parameters. PART 1: A simple analytical model has been developed to describe the ... mentioned properties. PART 4: Prediction of wave propagation in a submarine environment requires modeling the acoustic response of ocean bottom ... Biot's theory is a promising approach for modelling acoustic wave propagation in ocean sediments, which generally consist of elastic or viscoelastic ...
Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.
Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J
2015-06-01
There are ongoing discussions about the appropriate level of complexity and the sources of uncertainty in rainfall-runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover-dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity also depends on data availability. Here, we use PERSiST, a simple, semi-distributed dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature, using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for the parsimonious model structures. The results show that structural uncertainty is more important than parameter uncertainty. The ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. The simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and the water balance components needed for nutrient flux modelling in large, data-poor basins.
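For reference, the Bayesian Information Criterion used to compare the model structures penalizes likelihood by parameter count; for a model with k parameters, maximized likelihood L-hat and n observations (a standard definition, not specific to PERSiST),

```latex
\mathrm{BIC} = k \ln n - 2 \ln \hat{L}
```

with lower BIC preferred, so extra structural complexity must buy a sufficiently better fit to be favoured.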
Assessment of cardiovascular risk based on a data-driven knowledge discovery approach.
Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Cabiddu, R; Morais, J
2015-01-01
The cardioRisk project addresses the development of personalized risk assessment tools for patients who have been admitted to the hospital with acute myocardial infarction. Although there are models available that assess the short-term risk of death/new events for such patients, these models were established in circumstances that do not take into account present clinical interventions and, in some cases, the risk factors used by such models are not easily available in clinical practice. The integration of the existing risk tools (applied in the clinician's daily practice) with data-driven knowledge discovery mechanisms based on data routinely collected during hospitalizations would be a breakthrough in overcoming some of these difficulties. In this context, the development of simple and interpretable models (based on recent datasets) will unquestionably facilitate and introduce confidence in this integration process. In this work, a simple and interpretable model based on a real dataset is proposed. It consists of a decision tree model structure that uses a reduced set of six binary risk factors. The validation is performed using a recent dataset provided by the Portuguese Society of Cardiology (11,113 patients), which originally comprised 77 risk factors. A sensitivity, specificity and accuracy of, respectively, 80.42%, 77.25% and 78.80% were achieved, showing the effectiveness of the approach.
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently, a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component; for many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed, ranging from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. It shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
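The simplest linear damage-accumulation rule of the kind evaluated here (a generic Miner-type statement, not necessarily the exact form fitted in the paper) sums fractional lifetimes: under a stress history sigma(t), with time-to-failure t_f(sigma) measured in short-term tests, failure is predicted when the accumulated damage reaches unity,

```latex
D(t) = \int_0^{t} \frac{\mathrm{d}t'}{t_f\big(\sigma(t')\big)}, \qquad \text{failure predicted at } D = 1
```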
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, the wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
Elastic and viscoelastic calculations of stresses in sedimentary basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
This study presents a method for estimating the stress state within reservoirs at depth using a time-history approach for both elastic and viscoelastic rock behavior. Two features of this model are particularly significant for stress calculations. The first is the time-history approach, where we assume that the present in situ stress is a result of the entire history of the rock mass, rather than due only to the present conditions. The model can incorporate: (1) changes in pore pressure due to gas generation; (2) temperature gradients and local thermal episodes; (3) consolidation and diagenesis through time-varying material properties; and (4) varying tectonic episodes. The second feature is the use of a new viscoelastic model. Rather than assume a form of the relaxation function, a complete viscoelastic solution is obtained from the elastic solution through the viscoelastic correspondence principle. Simple rate models are then applied to obtain the final rock behavior. Example calculations for some simple cases are presented that show the contribution of individual stress or strain components. Finally, a complete example of the stress history of rocks in the Piceance basin is attempted. This calculation compares favorably with present-day stress data in this location. This model serves as a predictor for natural fracture genesis, and expected rock fracturing from the model is compared with actual fractures observed in this region. These results show that most current estimates of in situ stress at depth do not incorporate all of the important mechanisms, and a more complete formulation, such as this study's, is required for acceptable stress calculations. The method presented here is general and is applicable to any basin having a relatively simple geologic history. 25 refs., 18 figs.
Oakes, J M; Feldman, H A
2001-02-01
Nonequivalent controlled pretest-posttest designs are central to evaluation science, yet no practical and unified approach for estimating power in the two most widely used analytic approaches to these designs exists. This article fills the gap by presenting and comparing useful, unified power formulas for ANCOVA and change-score analyses, indicating the implications of each on sample-size requirements. The authors close with practical recommendations for evaluators. Mathematical details and a simple spreadsheet approach are included in appendices.
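The efficiency contrast driving those sample-size implications can be stated compactly. With pretest-posttest correlation rho and outcome variance sigma squared, the error variances of the two analyses are (standard results, not the authors' exact formulas):

```latex
\sigma^2_{\text{change}} = 2\sigma^2(1-\rho), \qquad
\sigma^2_{\text{ANCOVA}} = \sigma^2(1-\rho^2) = \sigma^2(1-\rho)(1+\rho)
```

Since (1 + rho)/2 <= 1 whenever rho <= 1, ANCOVA never has the larger error variance, and hence never requires the larger sample.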
2016-12-01
... chosen rather than complex ones, and responds to the criticism of the DTA approach. Chapter IV provides three separate case studies in defense R&D ... defense R&D projects. To this end, the first section describes the case study method and the advantages of using simple models over more complex ones ... the analysis lacked empirical data and relied on subjective data, the analysis successfully combined the DTA approach with the case study method and ...
Mechanisms of Neuronal Computation in Mammalian Visual Cortex
Priebe, Nicholas J.; Ferster, David
2012-01-01
Orientation selectivity in the primary visual cortex (V1) is a receptive field property that is at once simple enough to make it amenable to experimental and theoretical approaches and yet complex enough to represent a significant transformation in the representation of the visual image. As a result, V1 has become an area of choice for studying cortical computation and its underlying mechanisms. Here we consider the receptive field properties of the simple cells in cat V1—the cells that receive direct input from thalamic relay cells—and explore how these properties, many of which are highly nonlinear, arise. We have found that many receptive field properties of V1 simple cells fall directly out of Hubel and Wiesel's feedforward model when the model incorporates realistic neuronal and synaptic mechanisms, including threshold, synaptic depression, response variability, and the membrane time constant.
Effect of Stability on Mixing in Open Canopies. Chapter 4
NASA Technical Reports Server (NTRS)
Lee, Young-Hee; Mahrt, L.
2005-01-01
In open canopies, the within-canopy flux from the ground surface and understory can account for a significant fraction of the total flux above the canopy. This study incorporates the important influence of within-canopy stability on turbulent mixing and subcanopy fluxes into a first-order closure scheme. Toward this goal, we analyze within-canopy eddy-correlation data from the old aspen site in the Boreal Ecosystem - Atmosphere Study (BOREAS) and a mature ponderosa pine site in Central Oregon, USA. A formulation of within-canopy transport is framed in terms of a stability- dependent mixing length, which approaches Monin-Obukhov similarity theory above the canopy roughness sublayer. The new simple formulation is an improvement upon the usual neglect of the influence of within-canopy stability in simple models. However, frequent well-defined cold air drainage within the pine subcanopy inversion reduces the utility of simple models for nocturnal transport. Other shortcomings of the formulation are discussed.
Network rewiring dynamics with convergence towards a star network.
Whigham, P A; Dick, G; Parry, M
2016-10-01
Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440-442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed-size networks that transitions from regular, through small-world and scale-free, to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach.
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a 'worst-case' transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performance across different pixel geometries.
U.S. ENVIRONMENTAL PROTECTION AGENCY'S LANDFILL GAS EMISSION MODEL (LANDGEM)
The paper discusses EPA's available software for estimating landfill gas emissions. This software is based on a first-order decomposition rate equation using empirical data from U.S. landfills. The software provides a relatively simple approach to estimating landfill gas emissions.
ERIC Educational Resources Information Center
Colicchia, Giuseppe
2007-01-01
The investigation of focusing in fish eyes, both theoretical and experimental, using a simple fish eye model provides an interesting biological context for teaching the introductory principles of optics. Moreover, students will learn concepts of biology through a cause-and-effect approach.
NASA Technical Reports Server (NTRS)
Van Dyke, Michael B.
2014-01-01
During random vibration testing of electronic boxes there is often a desire to know the dynamic response of certain internal printed wiring boards (PWBs) for the purpose of monitoring the response of sensitive hardware or for post-test forensic analysis in support of anomaly investigation. Due to restrictions on internally mounted accelerometers for most flight hardware there is usually no means to empirically observe the internal dynamics of the unit, so one must resort to crude and highly uncertain approximations. One common practice is to apply Miles' equation, which does not account for the coupled response of the board in the chassis, resulting in significant over- or under-prediction. This paper explores the application of simple multiple-degree-of-freedom lumped parameter modeling to predict the coupled random vibration response of PWBs in their fundamental modes of vibration. A simple tool using this approach could be used during or following a random vibration test to interpret vibration test data from a single external chassis measurement and deduce internal board dynamics by means of a rapid correlation analysis. Such a tool might also be useful in early design stages as a supplemental analysis to a more detailed finite element analysis, to quickly prototype and analyze the dynamics of various design iterations. After developing the theoretical basis, a lumped parameter modeling approach is applied to an electronic unit for which both external and internal vibration response measurements are available for direct comparison. Reasonable correlation of the results demonstrates the potential viability of such an approach. Further development of the preliminary approach presented in this paper will involve correlation with detailed finite element models and additional relevant test data.
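For context, Miles' equation estimates the RMS acceleration response of a single-degree-of-freedom system at its natural frequency f_n, with amplification factor Q and input acceleration power spectral density W(f_n):

```latex
G_{\text{rms}} = \sqrt{\tfrac{\pi}{2}\, f_n\, Q\, W(f_n)}
```

It is precisely the board-chassis coupling absent from this formula that the lumped parameter approach restores.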
A generalized model via random walks for information filtering
NASA Astrophysics Data System (ADS)
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-08-01
There could exist a simple, general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account degree information, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even substantial extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision.
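A compact sketch of the kind of two-step bipartite random walk (mass diffusion) that such generalized models build on; the adjacency layout and function name are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """A[u, o] = 1 if user u collected object o (no isolated nodes
    assumed). Returns recommendation scores over objects for `user`."""
    ku = A.sum(axis=1)    # user degrees
    ko = A.sum(axis=0)    # object degrees
    # Step 1: each object spreads its resource evenly to its users;
    # Step 2: each user redistributes evenly over collected objects.
    W = (A / ku[:, None]).T @ (A / ko[None, :])
    return W @ A[user]
```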
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
Cellular and dendritic growth in a binary melt - A marginal stability approach
NASA Technical Reports Server (NTRS)
Laxmanan, V.
1986-01-01
A simple model for the constrained growth of an array of cells or dendrites in a binary alloy in the presence of an imposed positive temperature gradient in the liquid is proposed, with the dendritic or cell tip radius calculated using the marginal stability criterion of Langer and Muller-Krumbhaar (1977). This approach, an approach adopting the ad hoc assumption of minimum undercooling at the cell or dendrite tip, and an approach based on the stability criterion of Trivedi (1980) all predict tip radii to within 30 percent of each other, and yield a simple relationship between the tip radius and the growth conditions. Good agreement is found between predictions and data obtained in a succinonitrile-acetone system, and under the present experimental conditions, the dendritic tip stability parameter value is found to be twice that obtained previously, possibly due to a transition in morphology from a cellular structure with just a few side branches, to a more fully developed dendritic structure.
SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
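The yardstick in question is the breeder's equation, which relates the one-generation response to selection R to the selection differential S through the narrow-sense heritability (standard quantitative-genetics notation):

```latex
R = h^2 S, \qquad S = i\,\sigma_P \ \ \text{(under truncation selection)}
```

where i is the selection intensity and sigma_P the phenotypic standard deviation.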
A new approach to global control of redundant manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
A new and simple approach to configuration control of redundant manipulators is presented. In this approach, the redundancy is utilized to control the manipulator configuration directly in task space, where the task will be performed. A number of kinematic functions are defined to reflect the desirable configuration that will be achieved for a given end-effector position. The user-defined kinematic functions and the end-effector Cartesian coordinates are combined to form a set of task-related configuration variables as generalized coordinates for the manipulator. An adaptive scheme is then utilized to globally control the configuration variables so as to achieve tracking of some desired reference trajectories. This accomplishes the basic task of desired end-effector motion, while utilizing the redundancy to achieve any additional task through the desired time variation of the kinematic functions. The control law is simple and computationally very fast, and does not require the complex manipulator dynamic model.
Polymer Fluid Dynamics: Continuum and Molecular Approaches.
Bird, R B; Giacomin, A J
2016-06-07
To solve problems in polymer fluid dynamics, one needs the equations of continuity, motion, and energy. The last two equations contain the stress tensor and the heat-flux vector for the material. There are two ways to formulate the stress tensor: (a) One can write a continuum expression for the stress tensor in terms of kinematic tensors, or (b) one can select a molecular model that represents the polymer molecule and then develop an expression for the stress tensor from kinetic theory. The advantage of the kinetic theory approach is that one gets information about the relation between the molecular structure of the polymers and the rheological properties. We restrict the discussion primarily to the simplest stress tensor expressions or constitutive equations containing from two to four adjustable parameters, although we do indicate how these formulations may be extended to give more complicated expressions. We also explore how these simplest expressions are recovered as special cases of a more general framework, the Oldroyd 8-constant model. Studying the simplest models allows us to discover which types of empiricisms or molecular models seem to be worth investigating further. We also explore equivalences between continuum and molecular approaches. We restrict the discussion to several types of simple flows, such as shearing flows and extensional flows, which are of greatest importance in industrial operations. Furthermore, if these simple flows cannot be well described by continuum or molecular models, then it is not necessary to lavish time and energy to apply them to more complex flow problems.
IoGET: Internet of Geophysical and Environmental Things
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar
The objective of this project is to provide novel and fast reduced-order models for onboard computation at sensor nodes for real-time analysis. The approach will require that LANL perform high-fidelity numerical simulations, construct simple reduced-order models (ROMs) using machine learning and signal processing algorithms, and use real-time data analysis for ROMs and compressive sensing at sensor nodes.
Design and Training of Limited-Interconnect Architectures
1991-07-16
... and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a ... compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a ... operational performance. II. Research Objectives. The research objectives were: 1. Development of on-chip local training rules specifically designed for ...
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure lead to complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
Preliminary model for high-power-waveguide arcing and arc protection
NASA Technical Reports Server (NTRS)
Yen, H. C.
1978-01-01
The arc protection subsystems that are implemented in the DSN high power transmitters are discussed. The status of present knowledge about waveguide arcs is reviewed in terms of a simple engineering model. A fairly general arc detection scheme is also discussed. Areas where further studies are needed are pointed out along with proposed approaches to the solutions of these problems.
Using energy budgets to combine ecology and toxicology in a mammalian sentinel species
NASA Astrophysics Data System (ADS)
Desforges, Jean-Pierre W.; Sonne, Christian; Dietz, Rune
2017-04-01
Process-driven modelling approaches can resolve many of the shortcomings of traditional descriptive and non-mechanistic toxicology. We developed a simple dynamic energy budget (DEB) model for the mink (Mustela vison), a sentinel species in mammalian toxicology, which coupled animal physiology, ecology and toxicology, in order to mechanistically investigate the accumulation and adverse effects of lifelong dietary exposure to persistent environmental toxicants, most notably polychlorinated biphenyls (PCBs). Our novel mammalian DEB model accurately predicted, based on energy allocations to the interconnected metabolic processes of growth, development, maintenance and reproduction, lifelong patterns in mink growth, reproductive performance and dietary accumulation of PCBs as reported in the literature. Our model results were consistent with empirical data from captive and free-ranging studies in mink and other wildlife and suggest that PCB exposure can have significant population-level impacts resulting from targeted effects on fetal toxicity, kit mortality and growth and development. Our approach provides a simple and cross-species framework to explore the mechanistic interactions of physiological processes and ecotoxicology, thus allowing for a deeper understanding and interpretation of stressor-induced adverse effects at all levels of biological organization.
van Mantgem, P.J.; Stephenson, N.L.
2005-01-01
1 We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2 We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3 Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4 Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
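The projection machinery itself is just repeated multiplication by a stage matrix; a toy sketch with invented rates (placeholders, not the fitted Sierra Nevada values) looks like this:

```python
import numpy as np

# Toy 3-class size-structured model: diagonal = staying in a class,
# subdiagonal = growth to the next class, top-right = recruitment.
A = np.array([
    [0.95, 0.00, 0.10],
    [0.03, 0.96, 0.00],
    [0.00, 0.02, 0.97],
])
n = np.array([500.0, 300.0, 120.0])  # trees per size class
for _ in range(2):                   # two 5-year time steps
    n = A @ n                        # time-invariant projection
print(n.round(1))
```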
Genealogical and evolutionary inference with the human Y chromosome.
Stumpf, M P; Goldstein, D B
2001-03-02
Population genetics has emerged as a powerful tool for unraveling human history. In addition to the study of mitochondrial and autosomal DNA, attention has recently focused on Y-chromosome variation. Ambiguities and inaccuracies in data analysis, however, pose an important obstacle to further development of the field. Here we review the methods available for genealogical inference using Y-chromosome data. Approaches can be divided into those that do and those that do not use an explicit population model in genealogical inference. We describe the strengths and weaknesses of these model-based and model-free approaches, as well as difficulties associated with the mutation process that affect both methods. In the case of genealogical inference using microsatellite loci, we use coalescent simulations to show that relatively simple generalizations of the mutation process can greatly increase the accuracy of genealogical inference. Because model-free and model-based approaches have different biases and limitations, we conclude that there is considerable benefit in the continued use of both types of approaches.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
NASA Technical Reports Server (NTRS)
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
A General Interface Method for Aeroelastic Analysis of Aircraft
NASA Technical Reports Server (NTRS)
Tzong, T.; Chen, H. H.; Chang, K. C.; Wu, T.; Cebeci, T.
1996-01-01
The aeroelastic analysis of an aircraft requires an accurate and efficient procedure to couple aerodynamics and structures. The procedure needs an interface method to bridge the gap between the aerodynamic and structural models in order to transform loads and displacements. Such an interface method is described in this report. This interface method transforms loads computed by any aerodynamic code to a structural finite element (FE) model and converts the displacements from the FE model to the aerodynamic model. The approach is based on FE technology in which virtual work is employed to transform the aerodynamic pressures into FE nodal forces. The displacements at the FE nodes are then converted back to aerodynamic grid points on the aircraft surface through the reciprocal theorem in structural engineering. The method allows both high and crude fidelities of both models and does not require an intermediate modeling. In addition, the method performs the conversion of loads and displacements directly between individual aerodynamic grid point and its corresponding structural finite element and, hence, is very efficient for large aircraft models. This report also describes the application of this aero-structure interface method to a simple wing and an MD-90 wing. The results show that the aeroelastic effect is very important. For the simple wing, both linear and nonlinear approaches are used. In the linear approach, the deformation of the structural model is considered small, and the loads from the deformed aerodynamic model are applied to the original geometry of the structure. In the nonlinear approach, the geometry of the structure and its stiffness matrix are updated in every iteration and the increments of loads from the previous iteration are applied to the new structural geometry in order to compute the displacement increments. Additional studies to apply the aero-structure interaction procedure to more complicated geometry will be conducted in the second phase of the present contract.
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology, because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and achieving unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are used to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy practical non-identifiability where applicable. The derivation of the method is straightforward, and thus the algorithm can be easily implemented in a software package.
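A numerical sketch of the column-dependency check at the heart of this kind of analysis: build the output sensitivity matrix S (rows: sampled outputs, columns: parameters) and read correlated parameter groups off near-null right singular vectors. The threshold and helper name are assumptions, not the authors' algorithm:

```python
import numpy as np

def correlated_groups(S, tol=1e-8):
    """Return index groups of parameters whose sensitivity columns are
    linearly dependent (hence not separately identifiable)."""
    U, s, Vt = np.linalg.svd(S, full_matrices=True)
    s_full = np.zeros(Vt.shape[0])
    s_full[:s.size] = s                      # pad for rank-deficient S
    null_vecs = Vt[s_full <= tol * s.max()]  # basis of the null space
    # Nonzero entries of a null vector flag interrelated parameters.
    return [np.flatnonzero(np.abs(v) > 1e-6) for v in null_vecs]

# Example: column 2 is twice column 0, so parameters 0 and 2 correlate.
S = np.random.rand(50, 3)
S[:, 2] = 2.0 * S[:, 0]
print(correlated_groups(S))                  # e.g. [array([0, 2])]
```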
Simple models for studying complex spatiotemporal patterns of animal behavior
NASA Astrophysics Data System (ADS)
Tyutyunov, Yuri V.; Titova, Lyudmila I.
2017-06-01
Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial-difference equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with explicit description of decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges due to self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites or mechanical waves of the media, e.g., sound). These cues play a role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and the properties of the movement stimulus we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a little more detailed 'hybrid' approach that combines simulation of the individual movements with the continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.
Vanuytrecht, Eline; Thorburn, Peter J
2017-05-01
Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended with simple and semi-complex representations of the processes involved. Yet there is no standard approach to, and often poor documentation of, these developments. This study used a bottom-up approach (starting with the APSIM framework as a case study) to evaluate modelled responses in a consortium of commonly used crop models and to illuminate whether variation in responses reflects true uncertainty in our understanding or arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically and physically based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide-ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally averaged energy balance model. Uncertainty in climate forcing and historical temperature makes TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborates ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss of accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
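As a toy version of the empirical closure step, the sketch below runs a joint state/parameter unscented Kalman filter on a zero-dimensional energy balance model, C dT/dt = F − λT, and recovers the feedback parameter λ from noisy temperature observations. It uses the filterpy library; the model, priors, and noise levels are invented for illustration, and this is not the dissertation's code.

```python
# Hedged sketch of UKF parameter estimation on a zero-dimensional energy
# balance model C dT/dt = F - lam*T. All numbers are made up.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

C, F, dt = 8.0, 3.7, 1.0                 # heat capacity, forcing, time step

def fx(x, dt):                           # x = [temperature T, parameter lam]
    T, lam = x
    return np.array([T + dt * (F - lam * T) / C, lam])

def hx(x):                               # we observe temperature only
    return x[:1]

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, hx=hx, fx=fx,
                            points=points)
ukf.x = np.array([0.0, 0.5])             # prior guesses for T and lam
ukf.P = np.diag([0.5, 1.0])
ukf.Q = np.diag([1e-3, 1e-5])            # small process noise keeps lam adaptive
ukf.R = np.array([[0.05]])

rng = np.random.default_rng(1)
true_lam, T_true = 1.2, 0.0
for _ in range(200):
    T_true += dt * (F - true_lam * T_true) / C
    z = T_true + rng.normal(0, 0.2)      # noisy "observation"
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated lam:", ukf.x[1])        # should approach true_lam
```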
Using a crowdsourced approach for monitoring water level in a remote Kenyan catchment
NASA Astrophysics Data System (ADS)
Weeser, Björn; Jacobs, Suzanne; Rufino, Mariana; Breuer, Lutz
2017-04-01
Hydrological models and effective water management strategies only succeed if they are based on reliable data. Decreasing costs of technical equipment lower the barrier to creating comprehensive monitoring networks and allow measurements at high spatial and temporal resolution. However, these networks depend on specialised equipment, supervision, and maintenance, which entails high running costs. This becomes particularly challenging for remote areas, and low-income countries often do not have the capacity to run such networks. Delegating simple measurements to citizens living close to relevant monitoring points may reduce costs and increase public awareness. Here we present our experiences with a crowdsourced approach for monitoring water levels in remote catchments in Kenya. We established a low-cost system consisting of thirteen simple water level gauges and a Raspberry Pi-based SMS server for data handling. Volunteers determine the water level and transmit their records using a simple text message. These messages are automatically processed, and real-time feedback on data quality is given. During the first year, more than 1200 valid, high-quality records were collected. In summary, the simple techniques for data collection, transmission and processing created an open platform with the potential to reach volunteers without the need for special equipment. Even though the temporal resolution of measurements cannot be controlled and peak flows might be missed, these data can still be considered a valuable input for developing management strategies and for hydrological modelling.
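The server-side logic for such a system can be very small. The sketch below is a hypothetical illustration: the message format ('STATION LEVEL_CM'), the station list, and the plausibility ranges are assumptions, since the abstract does not specify the project's actual protocol.

```python
# Hedged sketch of server-side SMS handling: parse a volunteer's text
# message "station_id level_cm" and return immediate quality feedback.
# Station names and ranges are made up for illustration.
import re

STATIONS = {"GAUGE01": (0, 400), "GAUGE02": (0, 250)}   # plausible ranges, cm

def handle_sms(text: str) -> str:
    m = re.fullmatch(r"\s*(\w+)\s+(\d+(?:\.\d+)?)\s*", text)
    if not m:
        return "ERROR: send 'STATION LEVEL_CM', e.g. 'GAUGE01 135'"
    station, level = m.group(1).upper(), float(m.group(2))
    if station not in STATIONS:
        return f"ERROR: unknown station {station}"
    lo, hi = STATIONS[station]
    if not lo <= level <= hi:
        return f"WARNING: {level} cm outside plausible range {lo}-{hi} cm"
    # In the real system the record would be timestamped and stored here.
    return f"OK: {station} = {level} cm recorded, thank you!"

print(handle_sms("gauge01 135"))
```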
Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*
Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...
2014-02-24
The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. In conclusion, it may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
Decentralized control of sound radiation using iterative loop recovery.
Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R
2010-10-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
Decentralized Control of Sound Radiation Using Iterative Loop Recovery
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.
2009-01-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
Segmentation by fusion of histogram-based k-means clusters in different color spaces.
Mignotte, Max
2008-05-01
This paper presents a new, simple, and efficient segmentation approach, based on a fusion procedure that aims at combining several segmentation maps associated with simpler partition models in order to obtain a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all of the initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied to the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.
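A minimal sketch of the fusion idea follows, using scikit-learn and scikit-image; the number of clusters, the window size, and the choice of color spaces are arbitrary stand-ins, not the paper's settings.

```python
# Minimal sketch of the fusion idea: cluster an RGB image with K-means in
# several color spaces, then re-cluster local label histograms to fuse
# the segmentations. Parameters are arbitrary.
import numpy as np
from sklearn.cluster import KMeans
from skimage import data, color, util

img = util.img_as_float(data.astronaut())[::4, ::4]      # small RGB image
spaces = [img, color.rgb2hsv(img), color.rgb2lab(img)]
K = 6
h, w = img.shape[:2]

# One label map per color space from the same simple clustering technique.
labels = [KMeans(n_clusters=K, n_init=4, random_state=0)
          .fit_predict(s.reshape(-1, 3)).reshape(h, w) for s in spaces]

# For each pixel, build the local histogram of class labels over a window,
# concatenated across the initial partitions; fuse with a final K-means.
def local_histograms(lab, win=3):
    pad = np.pad(lab, win, mode="edge")
    feats = np.zeros((h, w, K))
    for dy in range(2 * win + 1):
        for dx in range(2 * win + 1):
            patch = pad[dy:dy + h, dx:dx + w]
            for k in range(K):
                feats[..., k] += patch == k
    return feats.reshape(-1, K) / (2 * win + 1) ** 2

features = np.hstack([local_histograms(l) for l in labels])
fused = KMeans(n_clusters=K, n_init=4, random_state=0) \
    .fit_predict(features).reshape(h, w)                 # fused label map
```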
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
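The core computation of the technique, the spike-triggered average (STA), fits in a few lines. The demonstration below uses a synthetic linear-nonlinear-Poisson neuron with a made-up filter; under Gaussian white-noise stimulation the STA recovers the filter up to scale.

```python
# Synthetic demonstration of the white-noise idea: recover a neuron's
# linear filter as the spike-triggered average (STA) of a Gaussian
# white-noise stimulus. Filter and nonlinearity are made up.
import numpy as np

rng = np.random.default_rng(0)
T, d = 200_000, 20
stim = rng.normal(size=T)                         # white-noise stimulus
t = np.arange(d)
true_filter = np.exp(-t / 4.0) * np.sin(t / 2.0)  # hypothetical kernel

# Linear-nonlinear-Poisson (LNP) spike generation.
drive = np.convolve(stim, true_filter)[:T]
rate = np.exp(drive - 2.0)                        # pointwise nonlinearity
spikes = rng.poisson(np.clip(rate, 0, 10))

# STA: average the stimulus segments preceding each spike.
sta = np.zeros(d)
for i in np.nonzero(spikes)[0]:
    if i >= d:
        sta += spikes[i] * stim[i - d + 1:i + 1]
sta /= spikes[d:].sum()

# The STA estimates the (time-reversed) filter up to a scale factor.
corr = np.corrcoef(sta, true_filter[::-1])[0, 1]
print(f"correlation between STA and time-reversed filter: {corr:.3f}")
```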
Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy
NASA Astrophysics Data System (ADS)
Provazza, Justin; Coker, David F.
2018-05-01
The symmetrical quasi-classical approach for propagation of a many degree of freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
A new class of exponentially stabilizing control laws for joint-level control of robotic manipulators is introduced. In the case of set-point control, the approach offers the simplicity of a proportional/derivative control architecture. In the case of tracking control, the approach provides several important alternatives to the computed-torque method as regards computational requirements and convergence. The new control laws can be modified in a simple fashion to obtain asymptotically stable adaptive control when the robot model and/or payload mass properties are unknown.
A simple analytical model for dynamics of time-varying target leverage ratios
NASA Astrophysics Data System (ADS)
Lo, C. F.; Hui, C. H.
2012-03-01
In this paper we formulate a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm, under assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Itô stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
Analysis of composite plates by using mechanics of structure genome and comparison with ANSYS
NASA Astrophysics Data System (ADS)
Zhao, Banghua
Motivated by the recently introduced concept of the Structure Genome (SG), defined as the smallest mathematical building block of a structure, a new approach named Mechanics of Structure Genome (MSG) to model and analyze composite plates is introduced. MSG is implemented in a general-purpose code named SwiftComp™, which provides the constitutive models needed in structural analysis by homogenization and pointwise local fields by dehomogenization. To improve the user-friendliness of SwiftComp™, a simple graphical user interface (GUI) based on the ANSYS Mechanical APDL platform, called ANSYS-SwiftComp GUI, was developed; it provides a convenient way to create common or arbitrary customized SG models in ANSYS and invoke SwiftComp™ to perform homogenization and dehomogenization. The global structural analysis can also be handled in ANSYS after homogenization, which predicts the global behavior and provides the inputs needed for dehomogenization. To demonstrate the accuracy and efficiency of the MSG approach, several numerical cases are studied and compared using both MSG and ANSYS. In the ANSYS approach, 3D solid element models (ANSYS 3D approach) are used as reference models, and the 2D shell element models created by ANSYS Composite PrepPost (ACP approach) are compared with the MSG approach. The results of the MSG approach agree well with the ANSYS 3D approach while being as efficient as the ACP approach. Therefore, the MSG approach provides an efficient and accurate new way to model composite plates.
Measuring Household Vulnerability: A Fuzzy Approach
NASA Astrophysics Data System (ADS)
Sethi, G.; Pierce, S. A.
2016-12-01
This research develops an index of vulnerability for Ugandan households using a variety of economic, social and environmental variables, with two objectives. First, there is only a small body of research that measures household vulnerability. Given the stresses faced by households susceptible to water, environment, food, livelihood, energy, and health security concerns, it is critical that they be identified in order to make effective policy. We draw on the socio-ecological systems (SES) framework described by Ostrom (2009) and adapt the model developed by Giupponi, Giove, and Giannini (2013) to develop a composite measure. Second, most indices in the literature are linear in nature, relying on simple weighted averages. In this research, we contrast the results obtained by a simple weighted average with those obtained by using the Choquet integral. The Choquet integral is defined with respect to a fuzzy measure and is based on a generalization of the Lebesgue integral. Due to its non-additive nature, the Choquet integral offers a more general approach. Our results reveal that all households included in this study are highly vulnerable, and that vulnerability scores obtained by the fuzzy approach are significantly different from those obtained by using the simple weighted average (p = 9.46e-160).
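The contrast between the two aggregation rules is easy to demonstrate. The toy example below (all numbers invented) computes a discrete Choquet integral against a non-additive measure and compares it with the additive weighted average built from the same singleton weights.

```python
# Toy illustration of the abstract's comparison (numbers made up): a
# discrete Choquet integral with a non-additive measure can score a
# household differently from a simple weighted average of the same
# normalized indicators (1 = fully secure, 0 = worst off).
indicators = {"water": 0.2, "food": 0.9, "energy": 0.4}

# Fuzzy measure mu on subsets of criteria; non-additivity encodes
# redundancy or synergy between criteria. mu(all) must equal 1.
mu = {frozenset(): 0.0,
      frozenset({"water"}): 0.4, frozenset({"food"}): 0.3,
      frozenset({"energy"}): 0.3,
      frozenset({"water", "food"}): 0.6,     # < 0.4 + 0.3: redundant pair
      frozenset({"water", "energy"}): 0.8,   # > 0.4 + 0.3: synergistic pair
      frozenset({"food", "energy"}): 0.5,
      frozenset({"water", "food", "energy"}): 1.0}

def choquet(x, mu):
    """Discrete Choquet integral: sort values ascending and integrate the
    increments against the measure of the 'still at least this high' set."""
    items = sorted(x.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (_, val) in enumerate(items):
        upper = frozenset(c for c, _ in items[i:])
        total += (val - prev) * mu[upper]
        prev = val
    return total

weighted = sum(mu[frozenset({c})] * v for c, v in indicators.items())
print(f"weighted average: {weighted:.2f}  "
      f"Choquet integral: {choquet(indicators, mu):.2f}")
```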
Modeling an explosion: the devil is in the details
Peter W. Hart; Alan W. Rudie
2011-01-01
The Chemical Safety and Hazards Investigation Board has recently encouraged chemical engineering faculty to address student knowledge about reactive hazards in their curricula. This paper presents a simple approach that may be used to illustrate the importance of these types of safety considerations.
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
ERIC Educational Resources Information Center
Lu, Yonggang; Henning, Kevin S. S.
2013-01-01
Spurred by recent writings regarding statistical pragmatism, we propose a simple, practical approach to introducing students to a new style of statistical thinking that models nature through the lens of data-generating processes, not populations. (Contains 5 figures.)
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
2017-03-21
We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
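As an illustration of the kernel ingredient, the sketch below implements a geometric random-walk kernel between two adjacency matrices via the direct (Kronecker) product graph, a standard textbook formulation that is not necessarily the exact kernel used in GRAPE; the decay parameter and the toy graphs are arbitrary.

```python
# Sketch of a geometric random-walk graph kernel between two local
# environments represented as adjacency matrices.
import numpy as np

def random_walk_kernel(A, B, lam=0.05):
    """k(A,B) = 1^T (I - lam * A (x) B)^(-1) 1, summing common walks of
    all lengths on the direct-product graph."""
    W = np.kron(A, B)                    # direct-product graph
    n = W.shape[0]
    # Geometric series sum_k lam^k W^k = (I - lam W)^(-1); lam must be
    # below 1/spectral_radius(W) for convergence.
    resolvent = np.linalg.solve(np.eye(n) - lam * W, np.ones(n))
    return resolvent.sum()

# Two toy 3-atom neighborhoods encoded as symmetric adjacency matrices.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1]], float)
print(random_walk_kernel(A, A), random_walk_kernel(A, B))
```

Because the kernel depends on the adjacency structure alone, relabeling the atoms of either neighborhood leaves its value unchanged, which is the permutation invariance the abstract emphasizes.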
Russell, Bayden D.; Harley, Christopher D. G.; Wernberg, Thomas; Mieszkowska, Nova; Widdicombe, Stephen; Hall-Spencer, Jason M.; Connell, Sean D.
2012-01-01
Most studies that forecast the ecological consequences of climate change target a single species and a single life stage. Depending on climatic impacts on other life stages and on interacting species, however, the results from simple experiments may not translate into accurate predictions of future ecological change. Research needs to move beyond simple experimental studies and environmental envelope projections for single species towards identifying where ecosystem change is likely to occur and the drivers for this change. For this to happen, we advocate research directions that (i) identify the critical species within the target ecosystem, and the life stage(s) most susceptible to changing conditions and (ii) the key interactions between these species and components of their broader ecosystem. A combined approach using macroecology, experimentally derived data and modelling that incorporates energy budgets in life cycle models may identify critical abiotic conditions that disproportionately alter important ecological processes under forecasted climates. PMID:21900317
Malacarne, Mario; Nardin, Tiziana; Bertoldi, Daniela; Nicolini, Giorgio; Larcher, Roberto
2016-09-01
Commercial tannins from several botanical sources and with different chemical and technological characteristics are used in the food and winemaking industries. Different ways to check their botanical authenticity have been studied in the last few years, through investigation of different analytical parameters. This work proposes a new, effective approach based on the quantification of 6 carbohydrates, 7 polyalcohols, and 55 phenols. Eighty-seven tannins from 12 different botanical sources were analysed following a very simple sample preparation procedure. Using Forward Stepwise Discriminant Analysis, 3 statistical models were created based on sugar content, phenol concentration, and the combination of the two classes of compounds for the 8 most abundant categories (i.e. oak, grape seed, grape skin, gall, chestnut, quebracho, tea and acacia). The last approach provided good results in attributing tannins to the correct botanical origin. Validation, repeated 3 times on subsets of 10% of the samples, confirmed the reliability of this model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...
2016-06-23
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
Learning molecular energies using localized graph kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
Numerical evaluation of a single ellipsoid motion in Newtonian and power-law fluids
NASA Astrophysics Data System (ADS)
Férec, Julien; Ausias, Gilles; Natale, Giovanniantonio
2018-05-01
A computational model is developed for simulating the motion of a single ellipsoid suspended in a Newtonian or a power-law fluid. Based on a finite element method (FEM), the approach consists of seeking solutions for the linear and angular particle velocities using a minimization algorithm, such that the net hydrodynamic force and torque acting on the ellipsoid are zero. For a Newtonian fluid subjected to a simple shear flow, Jeffery's predictions are recovered for any aspect ratio. The motion of a single ellipsoidal fiber is found to be only slightly disturbed by the shear-thinning character of the suspending fluid when compared with Jeffery's solutions. Surprisingly, the perturbation can be completely neglected for a particle with a large aspect ratio. Furthermore, the particle centroid is found to translate with the same linear velocity as the undisturbed simple shear flow evaluated at the particle centroid. This is confirmed by recent works based on experimental investigations and modeling approaches (1-2).
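The Newtonian benchmark in this abstract is easy to reproduce. The sketch below (aspect ratio and shear rate chosen arbitrarily) integrates Jeffery's orientation equation for a spheroid in simple shear and checks the tumbling period against Jeffery's formula T = (2π/γ̇)(re + 1/re).

```python
# Quick check of the Newtonian limit: integrate Jeffery's equation for
# the orientation p of a spheroid (aspect ratio re) in simple shear.
import numpy as np
from scipy.integrate import solve_ivp

gamma, re = 1.0, 5.0                       # shear rate, aspect ratio
lam = (re**2 - 1) / (re**2 + 1)            # shape factor
L = np.array([[0, gamma, 0], [0, 0, 0], [0, 0, 0]])   # grad u, simple shear
E, W = (L + L.T) / 2, (L - L.T) / 2        # rate-of-strain, vorticity

def jeffery(t, p):
    # dp/dt = W.p + lam*(E.p - (p.E.p) p); preserves |p| = 1.
    return W @ p + lam * (E @ p - (p @ E @ p) * p)

p0 = np.array([1.0, 0.01, 0.0])
p0 /= np.linalg.norm(p0)
T = 2 * np.pi / gamma * (re + 1 / re)      # Jeffery tumbling period
sol = solve_ivp(jeffery, (0, 2 * T), p0, max_step=0.01, dense_output=True)

# The orientation should return to its initial value after one period.
print(np.allclose(sol.sol(T), p0, atol=1e-2))
```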
Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments
NASA Astrophysics Data System (ADS)
Berk, Mario; Špačková, Olga; Straub, Daniel
2017-12-01
The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
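As a pocket illustration of the FORM ingredient named above, the sketch below runs the standard HLRF fixed-point iteration to locate the design point of a limit-state function in standard normal space; the toy limit state (channel capacity minus rainfall-driven load) and its distributions are invented, not the paper's model.

```python
# Minimal FORM sketch: the HLRF iteration finds the design point of a
# limit-state function g(u) in standard normal space. The toy g below
# is purely illustrative.
import numpy as np
from scipy.stats import norm
from scipy.optimize import approx_fprime

def g(u):
    # u[0] ~ rainfall intensity factor, u[1] ~ runoff coefficient factor
    intensity = 30.0 * np.exp(0.3 * u[0])      # lognormal rainfall, mm/h
    runoff = 0.5 + 0.1 * u[1]                  # normal runoff coefficient
    return 25.0 - runoff * intensity           # failure when g < 0

u = np.zeros(2)
for _ in range(50):                            # HLRF fixed-point iteration
    grad = approx_fprime(u, g, 1e-6)
    u_new = (grad @ u - g(u)) * grad / (grad @ grad)
    if np.linalg.norm(u_new - u) < 1e-8:
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"reliability index beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.4f}")
```

The reliability index β is the distance from the origin to the design point, and Φ(−β) is the first-order estimate of the failure probability, i.e. of exceeding the design flood.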
Vehicle track segmentation using higher order random fields
Quach, Tu-Thach
2017-01-09
Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.
Vehicle track segmentation using higher order random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quach, Tu-Thach
Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.
NASA Astrophysics Data System (ADS)
McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.
2017-03-01
We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.
Behavior systems and reinforcement: an integrative approach.
Timberlake, W
1993-01-01
Most traditional conceptions of reinforcement are based on a simple causal model in which responding is strengthened by the presentation of a reinforcer. I argue that reinforcement is better viewed as the outcome of constraint of a functioning causal system comprised of multiple interrelated causal sequences, complex linkages between causes and effects, and a set of initial conditions. Using a simplified system conception of the reinforcement situation, I review the similarities and drawbacks of traditional reinforcement models and analyze the recent contributions of cognitive, regulatory, and ecological approaches. Finally, I show how the concept of behavior systems can begin to incorporate both traditional and recent conceptions of reinforcement in an integrative approach. PMID:8354963
NASA Astrophysics Data System (ADS)
Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.
2009-09-01
In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic model (stream tube model), three lumped-parameter models (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes are also describable assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variably saturated flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
NASA Astrophysics Data System (ADS)
Sadeghi, Arman
2018-03-01
Modeling of fluid flow in polyelectrolyte layer (PEL)-grafted microchannels is challenging due to their two-layer nature. Hence, the pertinent studies are limited only to circular and slit geometries for which matching the solutions for inside and outside the PEL is simple. In this paper, a simple variational-based approach is presented for the modeling of fully developed electroosmotic flow in PEL-grafted microchannels by which the whole fluidic area is considered as a single porous medium of variable properties. The model is capable of being applied to microchannels of a complex cross-sectional area. As an application of the method, it is applied to a rectangular microchannel of uniform PEL properties. It is shown that modeling a rectangular channel as a slit may lead to considerable overestimation of the mean velocity especially when both the PEL and electric double layer (EDL) are thick. It is also demonstrated that the mean velocity is an increasing function of the fixed charge density and PEL thickness and a decreasing function of the EDL thickness and PEL friction coefficient. The influence of the PEL thickness on the mean velocity, however, vanishes when both the PEL thickness and friction coefficient are sufficiently high.
Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.
NASA Astrophysics Data System (ADS)
Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke
2013-04-01
In this abstract, a study of the influence of wind on modelling the PV module temperature is presented. This study is carried out in the framework of the PV-Alps INTERREG project, in which the potential of different photovoltaic technologies is analysed for alpine regions. The PV module temperature depends on different parameters, such as ambient temperature, irradiance, wind speed and PV technology [1]. In most models, a very simple approach is used, in which the PV module temperature is calculated from NOCT (nominal operating cell temperature), ambient temperature and irradiance alone [2]. In this study the influence of wind speed on the PV module temperature was investigated. First, different approaches suggested by various authors were tested [1], [2], [3], [4], [5]. For our analysis, temperature, irradiance and wind data from a PV test facility of the EURAC Institute of Renewable Energies at Bolzano airport (South Tyrol, Italy) were used. The PV module temperature was calculated with different models and compared to the measured PV module temperature at the individual panels. The best results were achieved with the approach suggested by Skoplaki et al. [1]. Preliminary results indicate that for all PV technologies tested (monocrystalline, amorphous, microcrystalline and polycrystalline silicon, and cadmium telluride), modelled and measured PV module temperatures show higher agreement (RMSE about 3-4 K) than standard approaches in which wind is not considered. For further investigation, the in-situ measured wind velocities were replaced with wind data from numerical weather forecast models (ECMWF reanalysis fields). Our results show that the PV module temperature calculated with wind data from ECMWF is still in very good agreement with the measured one (R² > 0.9 for all technologies). Compared to the previous analysis, we find comparable mean values and an increased standard deviation. These results open a promising approach for PV module temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
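The baseline NOCT model mentioned in this abstract reduces to a single line, T_module = T_amb + (NOCT − 20 °C)·G/800 W m⁻², which is the standard definition-based formula. The sketch below contrasts it with a wind-damped variant; the damping function is a made-up placeholder in the spirit of, but not copied from, the cited correlations.

```python
# Standard NOCT approach plus an illustrative wind-aware variant. The
# wind damping factor below is an assumed placeholder, not the published
# Skoplaki coefficients.
def t_module_noct(t_amb, g, noct=45.0):
    """Classic NOCT model: module temperature (degC) from ambient
    temperature (degC) and irradiance G (W/m2); NOCT is defined at
    800 W/m2 and 20 degC ambient."""
    return t_amb + (noct - 20.0) * g / 800.0

def t_module_wind(t_amb, g, v_wind, noct=45.0):
    """Same structure, but the temperature rise is scaled down as wind
    speed (m/s) increases, using an assumed damping factor."""
    damping = 1.0 / (1.0 + 0.25 * v_wind)
    return t_amb + (noct - 20.0) * g / 800.0 * damping

t_amb, g = 10.0, 900.0                        # alpine conditions, say
for v in (0.0, 2.0, 6.0):
    print(v, t_module_noct(t_amb, g), round(t_module_wind(t_amb, g, v), 1))
```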
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
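The recipe translates into a short calculation for the logistic case. In the sketch below (example numbers only), the two group probabilities are chosen so that their logits differ by 2βσ while their mean preserves the overall event probability, and the power is then read off the equivalent two-proportion test; scipy and statsmodels are assumed available.

```python
# Sketch of the equivalent two-sample recipe for logistic regression:
# slope beta, covariate SD sigma, overall event probability pbar.
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit, logit
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

beta, sigma, pbar, n_total = 0.5, 1.0, 0.3, 400   # example values

delta = 2 * beta * sigma                 # logit difference between groups
# Choose p1 so that the mean of (p1, p2) equals pbar while
# logit(p2) - logit(p1) = delta, keeping expected events unchanged.
p1 = brentq(lambda p: (p + expit(logit(p) + delta)) / 2 - pbar, 1e-6, pbar)
p2 = expit(logit(p1) + delta)

effect = proportion_effectsize(p1, p2)   # Cohen's h for two proportions
power = NormalIndPower().power(effect_size=effect, nobs1=n_total / 2,
                               alpha=0.05, alternative="two-sided")
print(f"p1={p1:.3f}, p2={p2:.3f}, approx power={power:.3f}")
```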
Badel, A; Qiu, J; Nakano, T
2008-05-01
Piezoelectric actuators (PEAs) are commonly used as micropositioning devices due to their high resolution, high stiffness, and fast frequency response. Because piezoceramic materials are ferroelectric, they fundamentally exhibit hysteresis in their response to an applied electric field. The positioning precision can be significantly reduced by nonlinear hysteresis effects when PEAs are used in relatively long-range applications. This paper describes a new, precise, and simple asymmetric hysteresis operator dedicated to PEAs. The complex hysteretic transfer characteristic is treated in a purely phenomenological way, without taking into account the underlying physics. The operator is based on two curves: the first corresponds to the main ascending branch and is modeled by the function f1; the second corresponds to the main reversal branch and is modeled by the function g2. The functions f1 and g2 are two very simple hyperbola functions with only three parameters. Particular ascending and reversal branches are deduced from appropriate translations of f1 and g2. The efficiency and precision of the proposed approach are demonstrated, in practice, by a real-time inverse feed-forward controller for piezoelectric actuators. Advantages and drawbacks of the proposed approach compared with classical hysteresis operators are discussed.
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-01-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-09-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage.
A Data-driven Approach for Forecasting Next-day River Discharge
NASA Astrophysics Data System (ADS)
Sharif, H. O.; Billah, K. S.
2017-12-01
This study focuses on evaluating the performance of the Soil and Water Assessment Tool (SWAT) eco-hydrological model, a simple Auto-Regressive with eXogenous input (ARX) model, and a Gene expression programming (GEP)-based model in one-day-ahead forecasting of discharge of a subtropical basin (the upper Kentucky River Basin). The three models were calibrated with daily flow at the US Geological Survey (USGS) stream gauging station not affected by flow regulation for the period of 2002-2005. The calibrated models were then validated at the same gauging station as well as another USGS gauge 88 km downstream for the period of 2008-2010. The results suggest that simple models outperform a sophisticated hydrological model with GEP having the advantage of being able to generate functional relationships that allow scientific investigation of the complex nonlinear interrelationships among input variables. Unlike SWAT, GEP, and to some extent, ARX are less sensitive to the length of the calibration time series and do not require a spin-up period.
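A one-day-ahead ARX model of the kind compared here is only a least-squares fit. The sketch below is a self-contained toy (synthetic rainfall and discharge, hypothetical model orders), not the study's configuration.

```python
# Minimal ARX sketch: one-day-ahead discharge Q[t+1] regressed on
# today's discharge and rainfall, Q[t+1] = a*Q[t] + b*P[t] + c,
# fitted by ordinary least squares. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
P = rng.gamma(0.5, 8.0, T)                     # daily rainfall, mm
Q = np.zeros(T)
for t in range(T - 1):                          # toy watershed "truth"
    Q[t + 1] = 0.8 * Q[t] + 0.15 * P[t] + rng.normal(0, 0.3)

X = np.column_stack([Q[:-1], P[:-1], np.ones(T - 1)])
coef, *_ = np.linalg.lstsq(X, Q[1:], rcond=None)

pred = X @ coef                                 # one-day-ahead forecasts
nse = 1 - np.sum((Q[1:] - pred) ** 2) / np.sum((Q[1:] - Q[1:].mean()) ** 2)
print("a, b, c =", np.round(coef, 3), " Nash-Sutcliffe =", round(nse, 3))
```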
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
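A miniature example of the underlying Markov machinery (rates invented): a two-component parallel system with repair, whose generator matrix is exponentiated to obtain the transient state probabilities and hence the reliability. A hierarchical decomposition would compute such subsystem reliabilities and pass them upward to the higher-level model.

```python
# Tiny Markov reliability illustration: two-component parallel system
# with repair; state 0 = both up, 1 = one up, 2 = system failed
# (absorbing). Reliability R(t) = 1 - P(in state 2). Rates are made up.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1                  # per-hour failure and repair rates
Qm = np.array([[-2 * lam,  2 * lam,       0.0],
               [      mu, -(mu + lam),    lam],
               [     0.0,       0.0,      0.0]])   # generator matrix

p0 = np.array([1.0, 0.0, 0.0])        # start with both components up
for t in (100.0, 1000.0, 10000.0):
    p = p0 @ expm(Qm * t)             # transient state probabilities
    print(f"t = {t:7.0f} h  reliability = {1 - p[2]:.6f}")
```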
CP violation in multibody B decays from QCD factorization
NASA Astrophysics Data System (ADS)
Klein, Rebecca; Mannel, Thomas; Virto, Javier; Vos, K. Keri
2017-10-01
We test a data-driven approach based on QCD factorization for charmless three-body B decays by confronting it with measurements of CP violation in B^- → π^-π^+π^-. While some of the needed non-perturbative objects can be directly extracted from data, others can, so far, only be modelled. Although this approach is currently model-dependent, we comment on the perspectives for reducing this model dependence. While our model naturally accommodates the gross features of the Dalitz distribution, it cannot quantitatively explain the details seen in the current experimental data on local CP asymmetries. We comment on possible refinements of our simple model and conclude by briefly discussing a possible extension of the model to large invariant masses, where large local CP asymmetries have been measured.
Complete Hamiltonian analysis of cosmological perturbations at all orders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2016-06-01
In this work, we present a consistent Hamiltonian analysis of cosmological perturbations at all orders. To make the procedure transparent, we consider a simple model, resolve the 'gauge-fixing' issues, extend the analysis to scalar field models, and show that our approach can be applied at any order of perturbation for any first-order derivative fields. In the case of Galilean scalar fields, our procedure can extract constrained relations at all orders in perturbations, leading to the fact that there are no extra degrees of freedom due to the presence of higher time derivatives of the field in the Lagrangian. We compare and contrast our approach to the Lagrangian approach (Chen et al. [2006]) for extracting higher order correlations and show that our approach is efficient and robust and can be applied to any model of gravity and matter fields without invoking the slow-roll approximation.
Lee, Mi Kyung; Coker, David F
2016-08-18
An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
Li, Xinan; Xu, Hongyuan; Cheung, Jeffrey T
2016-12-01
This work describes a new approach for gait analysis and balance measurement. It uses an inertial measurement unit (IMU) that can either be embedded inside a dynamically unstable platform for balance measurement or mounted on the lower back of a human participant for gait analysis. The acceleration data along three Cartesian coordinates is analyzed by the gait-force model to extract bio-mechanics information in both the dynamic state as in the gait analyzer and the steady state as in the balance scale. For the gait analyzer, the simple, noninvasive and versatile approach makes it appealing to a broad range of applications in clinical diagnosis, rehabilitation monitoring, athletic training, sport-apparel design, and many other areas. For the balance scale, it provides a portable platform to measure the postural deviation and the balance index under visual or vestibular sensory input conditions. Despite its simple construction and operation, excellent agreement has been demonstrated between its performance and the high-cost commercial balance unit over a wide dynamic range. The portable balance scale is an ideal tool for routine monitoring of balance index, fall-risk assessment, and other balance-related health issues for both clinical and household use.
Verification and Validation of Residual Stresses in Bi-Material Composite Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy
Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time and cost prohibitive. As an alternative to physical measurement, it is possible for computational tools to be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simplistic method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the Sandia National Laboratories' developed SIERRA/SolidMechanics code. Concurrent with the model development, two simple, bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation. Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations' final results show adequate agreement with the experimental measurements, indicating the validity of a simple modeling approach, as well as a necessity for the inclusion of material parameter uncertainty in the final residual stress predictions.
An improved switching converter model using discrete and average techniques
NASA Technical Reports Server (NTRS)
Shortt, D. J.; Lee, F. C.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
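As a concrete illustration of the averaging technique the abstract contrasts with discrete sampling, the sketch below builds a duty-cycle-weighted average of the two switch-state models of an ideal buck converter. All component values are assumed for illustration; this is not the paper's improved combined model.

```python
# Sketch of the state-space averaging idea for an ideal buck converter
# (illustrative values; not the paper's improved discrete/average model).
import numpy as np

L, C, R = 100e-6, 470e-6, 5.0      # inductor, capacitor, load (assumed values)
d = 0.5                            # duty cycle

# State x = [iL, vC]; switch-on and switch-off dynamics
A_on  = np.array([[0.0, -1.0 / L], [1.0 / C, -1.0 / (R * C)]])
A_off = A_on.copy()                # same topology for the ideal buck
B_on  = np.array([1.0 / L, 0.0])   # input is the source voltage
B_off = np.array([0.0, 0.0])

# Averaged model: weight the two configurations by duty cycle
A_avg = d * A_on + (1 - d) * A_off
B_avg = d * B_on + (1 - d) * B_off

# Steady state: 0 = A_avg x + B_avg Vin  =>  x = -A_avg^-1 B_avg Vin
Vin = 12.0
x_ss = -np.linalg.solve(A_avg, B_avg * Vin)
print(f"averaged steady state: iL = {x_ss[0]:.3f} A, vC = {x_ss[1]:.3f} V")
```

The averaged matrices capture low-frequency behavior well (here vC = d·Vin = 6 V), but, as the abstract notes, the approximation degrades as the modulation frequency approaches half the switching frequency.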
ERIC Educational Resources Information Center
Black, Ryan A.; Butler, Stephen F.
2012-01-01
Although Rasch models have been shown to be a sound methodological approach to develop and validate measures of psychological constructs for more than 50 years, they remain underutilized in psychology and other social sciences. Until recently, one reason for this underutilization was the lack of syntactically simple procedures to fit Rasch and…
Sophie in the Snow: A Simple Approach to Datalogging and Modelling in Physics
ERIC Educational Resources Information Center
Oldknow, Adrian; Huyton, Pip; Galloway, Ian
2010-01-01
Most students now have access to devices such as digital cameras and mobile phones that are capable of taking short video clips outdoors. Such clips can be used with powerful ICT tools, such as Tracker, Excel and TI-Nspire, to extract time and coordinate data about a moving object, to produce scattergrams and to fit models. In this article we…
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi analytical holonomic homogenization approach for the non-linear analysis of masonry walls in-plane loaded is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how homogenized stress-strain behavior can be evaluated semi-analytically.
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research settings.
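For readers outside SPSS, the unequal variance normal signal detection model that the PLUM procedure fits can also be estimated directly by maximum likelihood. A minimal sketch on synthetic rating-scale counts follows; all data and starting values are invented, and this is an illustration of the model, not of the SPSS procedure itself.

```python
# Maximum-likelihood fit of the unequal-variance normal SDT model to
# synthetic rating-scale counts (4 response categories, 3 criteria).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

noise_counts  = np.array([60, 25, 10, 5])    # responses on noise trials
signal_counts = np.array([10, 15, 25, 50])   # responses on signal trials

def neg_log_lik(params):
    mu, log_sigma, c1, d2, d3 = params
    sigma = np.exp(log_sigma)                          # keep sigma positive
    c = np.array([c1, c1 + np.exp(d2), c1 + np.exp(d2) + np.exp(d3)])
    # Category probabilities from cumulative normals at the ordered criteria
    p_noise  = np.diff(np.concatenate(([0.0], norm.cdf(c), [1.0])))
    p_signal = np.diff(np.concatenate(([0.0], norm.cdf(c, mu, sigma), [1.0])))
    return -(noise_counts @ np.log(p_noise) + signal_counts @ np.log(p_signal))

fit = minimize(neg_log_lik, x0=[1.0, 0.0, -0.5, 0.0, 0.0], method="Nelder-Mead")
mu, sigma = fit.x[0], np.exp(fit.x[1])
print(f"mean separation = {mu:.2f}, signal SD = {sigma:.2f}")
```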
Performance of Geno-Fuzzy Model on rainfall-runoff predictions in claypan watersheds
USDA-ARS?s Scientific Manuscript database
Fuzzy logic provides a relatively simple approach to simulate complex hydrological systems while accounting for the uncertainty of environmental variables. The objective of this study was to develop a fuzzy inference system (FIS) with genetic algorithm (GA) optimization for membership functions (MF...
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short-takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
The Spin-Foam Approach to Quantum Gravity.
Perez, Alejandro
2013-01-01
This article reviews the present status of the spin-foam approach to the quantization of gravity. Special attention is paid to the pedagogical presentation of the recently introduced new models for four-dimensional quantum gravity. The models are motivated by a suitable implementation of the path integral quantization of the Plebanski formulation of gravity on a simplicial regularization. The article also includes a self-contained treatment of 2+1 gravity. The simple nature of the latter provides the basis and a perspective for the analysis of both conceptual and technical issues that remain open in four dimensions.
Data-driven outbreak forecasting with a simple nonlinear growth model.
Lega, Joceline; Brown, Heidi E
2016-12-01
Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders.
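The "surprisingly simple mathematical property" alluded to is that, for logistic-like outbreaks, incidence plotted against cumulative cases is approximately parabolic, so a quadratic fit to that phase portrait yields a final-size estimate without transmission parameters. A minimal sketch in that spirit, on synthetic data rather than the authors' EpiGro code:

```python
# Quadratic fit of incidence vs. cumulative cases for a logistic-type
# outbreak (synthetic data; illustrative of the idea, not the EpiGro code).
import numpy as np

rng = np.random.default_rng(1)
K, r = 10000.0, 0.15                       # true final size and growth rate
t = np.arange(120)
C = K / (1 + np.exp(-r * (t - 60)))        # cumulative cases (logistic)
C += rng.normal(0, 50, size=C.shape)       # reporting noise

dC = np.diff(C)                            # daily incidence
Cm = C[:-1]
a, b, c0 = np.polyfit(Cm, dC, 2)           # fit dC = a*C^2 + b*C + c0

# For logistic growth dC/dt = r*C*(1 - C/K); the quadratic's larger root is ~K
roots = np.roots([a, b, c0])
print("estimated final size:", round(max(roots.real)))
```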
NASA Astrophysics Data System (ADS)
Howe, Alex R.; Burrows, Adam; Deming, Drake
2017-01-01
We present an example analysis exploring how to optimize observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres, based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations, and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.
Ikeda, Satoshi; Ohwatashi, Akihiko; Harada, Katsuhiro; Kamikawa, Yurie; Yoshida, Akira
2013-01-01
The use of novel rehabilitative approaches for effecting functional recovery following stroke is controversial. Effects of different but effective rehabilitative interventions in the hemiplegic patient are not clear. We studied the effects of different rehabilitative approaches on functional recovery in the rat photochemical cerebral infarction model. Twenty-four male Wistar rats aged 8 weeks were used. The cranial bone was exposed under deep anesthesia. Rose bengal (20 mg/kg) was injected intravenously, and the sensorimotor area of the cerebral cortex was irradiated transcranially for 20 min with a light beam of 533-nm wavelength. Animals were divided into 3 groups. In the simple-exercise group, treadmill exercise was performed for 20 min every day. In the expected-for-acquisition movement-training group, beam-walking exercise was done for 20 min daily. The control group was left to recover without additional intervention. Hindlimb function was evaluated with the beam-walking test. Following cerebral infarction, dysfunction of the contralateral extremities was observed. Functional recovery was observed earlier in the expected-for-acquisition training group than in the other groups. Although rats in the treadmill group recovered more quickly than controls, the beam-walking group had the shortest overall recovery time. Exercise facilitated functional recovery in the rat hemiplegic model, and expected-for-acquisition exercise was more effective than simple exercise. These findings are considered to have important implications for the future development of clinical rehabilitation programs.
Born-Oppenheimer approximation for a singular system
NASA Astrophysics Data System (ADS)
Akbas, Haci; Turgut, O. Teoman
2018-01-01
We discuss a simple singular system in one dimension, two heavy particles interacting with a light particle via an attractive contact interaction and not interacting among themselves. It is natural to apply the Born-Oppenheimer approximation to this problem. We present a detailed discussion of this approach; the advantage of this simple model is that one can estimate the error terms self-consistently. Moreover, a Fock space approach to this problem is presented where an expansion can be proposed to get higher order corrections. A slight modification of the same problem in which the light particle is relativistic is discussed in a later section by neglecting pair creation processes. Here, the second quantized description is more challenging, but with some care, one can recover the first order expression exactly.
Method and system for automated on-chip material and structural certification of MEMS devices
Sinclair, Michael B.; DeBoer, Maarten P.; Smith, Norman F.; Jensen, Brian D.; Miller, Samuel L.
2003-05-20
A new approach toward MEMS quality control and materials characterization is provided by a combined test structure measurement and mechanical response modeling approach. Simple test structures are cofabricated with the MEMS devices being produced. These test structures are designed to isolate certain types of physical response, so that measurement of their behavior under applied stress can be easily interpreted as quality control and material properties information.
Sampling and position effects in the Electronically Steered Thinned Array Radiometer (ESTAR)
NASA Technical Reports Server (NTRS)
Katzberg, Stephen J.
1993-01-01
A simple engineering level model of the Electronically Steered Thinned Array Radiometer (ESTAR) is developed that allows an identification of the major effects of the sampling process involved with this technique. It is shown that the ESTAR approach is sensitive to aliasing and has a highly non-uniform sensitivity profile. It is further shown that the ESTAR approach is strongly sensitive to position displacements of the low-density sampling antenna elements.
Predictive Rotation Profile Control for the DIII-D Tokamak
NASA Astrophysics Data System (ADS)
Wehner, W. P.; Schuster, E.; Boyer, M. D.; Walker, M. L.; Humphreys, D. A.
2017-10-01
Control-oriented modeling and model-based control of the rotation profile are employed to build a suitable control capability for aiding rotation-related physics studies at DIII-D. To obtain a control-oriented model, a simplified version of the momentum balance equation is combined with empirical representations of the momentum sources. The control approach is rooted in a Model Predictive Control (MPC) framework to regulate the rotation profile while satisfying constraints associated with the desired plasma stored energy and/or βN limit. Simple modifications allow for alternative control objectives, such as maximizing the plasma rotation while maintaining a specified input torque. Because the MPC approach can explicitly incorporate various types of constraints, this approach is well suited to a variety of control objectives, and therefore serves as a valuable tool for experimental physics studies. Closed-loop TRANSP simulations are presented to demonstrate the effectiveness of the control approach. Supported by the US DOE under DE-SC0010661 and DE-FC02-04ER54698.
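The MPC framework referred to here can be illustrated on a toy linear system. The sketch below uses cvxpy with made-up matrices rather than the DIII-D rotation model; it shows only the receding-horizon structure of cost, dynamics, and actuator constraints.

```python
# Minimal receding-horizon MPC sketch for a generic linear system
# x_{k+1} = A x_k + B u_k (toy matrices; not the DIII-D controller).
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])
x_ref = np.array([0.0, 0.0])
T = 20                                      # prediction horizon

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(T):
    cost += cp.sum_squares(x[:, k + 1] - x_ref) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]   # actuator constraint
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])  # apply, then re-solve next step
```

In a receding-horizon loop only the first control move is applied before the problem is re-solved with the newly measured state, which is what makes explicit state and input constraints practical.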
Simulating Eastern- and Central-Pacific Type ENSO Using a Simple Coupled Model
NASA Astrophysics Data System (ADS)
Fang, Xianghui; Zheng, Fei
2018-06-01
Severe biases exist in state-of-the-art general circulation models (GCMs) in capturing realistic central-Pacific (CP) El Niño structures. At the same time, many observational analyses have emphasized that thermocline (TH) feedback and zonal advective (ZA) feedback play dominant roles in the development of eastern-Pacific (EP) and CP El Niño-Southern Oscillation (ENSO), respectively. In this work, a simple linear air-sea coupled model, which can accurately depict the strength distribution of the TH and ZA feedbacks in the equatorial Pacific, is used to investigate these two types of El Niño. The results indicate that the model can reproduce the main characteristics of CP ENSO if the TH feedback is switched off and the ZA feedback is retained as the only positive feedback, confirming the dominant role played by ZA feedback in the development of CP ENSO. Further experiments indicate that, through a simple nonlinear control approach, many ENSO characteristics, including the existence of both CP and EP El Niño and the asymmetries between El Niño and La Niña, can be successfully captured using the simple linear air-sea coupled model. These analyses indicate that an accurate depiction of the climatological sea surface temperature distribution and the related ZA feedback, which are the subject of severe biases in GCMs, is very important in simulating a realistic CP El Niño.
Statistical Approaches for Spatiotemporal Prediction of Low Flows
NASA Astrophysics Data System (ADS)
Fangmann, A.; Haberlandt, U.
2017-12-01
An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion into model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all four is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three performs a spatiotemporal prediction of an index value; method four estimates L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows a high performance in the near-future period, but since it relies on a stationary distribution, its application for prediction of far-future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results promote the inclusion of simple statistical methods in climate change impact assessment.
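A minimal sketch of the multiple-linear-regression backbone shared by the four methods, using synthetic placeholders rather than the German gauge data (all variable names and values are invented):

```python
# Predict an annual low-flow index from meteorological drought indices and
# catchment descriptors (synthetic stand-ins for the study's data).
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # station-years
drought_idx = rng.normal(size=(n, 2))    # e.g., annual drought indices
catchment   = rng.normal(size=(n, 2))    # e.g., area, soil descriptor
X = np.column_stack([np.ones(n), drought_idx, catchment])
beta_true = np.array([1.0, 0.8, -0.3, 0.5, 0.2])
y = X @ beta_true + rng.normal(0, 0.2, n)   # annual low-flow index

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```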
Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.
Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G
2014-07-01
It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
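The paper's extension of radiosity can be caricatured in a few lines: the usual gathering iteration B = E + ρF·B picks up an additional subsurface matrix S. The sketch below uses random placeholder matrices scaled for convergence, not a real scene or a measured translucency model.

```python
# Toy radiosity with an added subsurface-scattering matrix S,
# solved by simple fixed-point (gathering) iteration.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                           # number of patches
F = rng.random((n, n))
F /= F.sum(axis=1, keepdims=True)                # form factors (rows sum to 1)
rho = 0.5                                        # diffuse reflectance
S = rng.random((n, n))
S = 0.2 * S / S.sum(axis=1, keepdims=True)       # weak subsurface coupling (assumed)
E = np.zeros(n); E[0] = 1.0                      # one emitting patch

B = E.copy()
for _ in range(100):                             # converges since rho + 0.2 < 1
    B = E + (rho * F + S) @ B
print("total radiosity:", round(B.sum(), 4))
```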
A simple, approximate model of parachute inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macha, J.M.
1992-11-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
Modelling tidewater glacier calving: from detailed process models to simple calving laws
NASA Astrophysics Data System (ADS)
Benn, Doug; Åström, Jan; Zwinger, Thomas; Todd, Joe; Nick, Faezeh
2017-04-01
The simple calving laws currently used in ice sheet models do not adequately reflect the complexity and diversity of calving processes. To be effective, calving laws must be grounded in a sound understanding of how calving actually works. We have developed a new approach to formulating calving laws, using a) the Helsinki Discrete Element Model (HiDEM) to explicitly model fracture and calving processes, and b) the full-Stokes continuum model Elmer/Ice to identify critical stress states associated with HiDEM calving events. A range of observed calving processes emerges spontaneously from HiDEM in response to variations in ice-front buoyancy and the size of subaqueous undercuts, and we show that HiDEM calving events are associated with characteristic stress patterns simulated in Elmer/Ice. Our results open the way to developing calving laws that properly reflect the diversity of calving processes, and provide a framework for a unified theory of the calving process continuum.
Teufel, Christoph; Fletcher, Paul C
2016-10-01
Computational models have become an integral part of basic neuroscience and have facilitated some of the major advances in the field. More recently, such models have also been applied to the understanding of disruptions in brain function. In this review, using examples and a simple analogy, we discuss the potential for computational models to inform our understanding of brain function and dysfunction. We argue that they may provide, in unprecedented detail, an understanding of the neurobiological and mental basis of brain disorders and that such insights will be key to progress in diagnosis and treatment. However, there are also potential problems attending this approach. We highlight these and identify simple principles that should always govern the use of computational models in clinical neuroscience, noting especially the importance of a clear specification of a model's purpose and of the mapping between mathematical concepts and reality.
Bioheat model evaluations of laser effects on tissues: role of water evaporation and diffusion
NASA Astrophysics Data System (ADS)
Nagulapally, Deepthi; Joshi, Ravi P.; Thomas, Robert J.
2011-03-01
A two-dimensional, time-dependent bioheat model is applied to evaluate changes in temperature and water content in tissues subjected to laser irradiation. Our approach takes account of liquid-to-vapor phase changes and a simple diffusive flow of water within the biotissue. An energy balance equation considers blood perfusion, metabolic heat generation, laser absorption, and water evaporation. The model also accounts for the water dependence of tissue properties (both thermal and optical), and variations in blood perfusion rates based on local tissue injury. Our calculations show that water diffusion would reduce the local temperature increases and hot spots in comparison to simple models that ignore the role of water in the overall thermal and mass transport. Also, the reduced suppression of perfusion rates due to tissue heating and damage with water diffusion affect the necrotic depth. Two-dimensional results for the dynamic temperature, water content, and damage distributions will be presented for skin simulations. It is argued that reduction in temperature gradients due to water diffusion would mitigate local refractive index variations, and hence influence the phenomenon of thermal lensing. Finally, simple quantitative evaluations of pressure increases within the tissue due to laser absorption are presented.
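A stripped-down version of the underlying heat transport, the Pennes bioheat equation without the paper's evaporation and water-diffusion terms, can be stepped explicitly in one dimension. All tissue constants below are generic assumed values, not the study's parameters.

```python
# 1D explicit finite-difference sketch of the Pennes bioheat equation:
# rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + Q_laser
import numpy as np

k, rho, c = 0.5, 1050.0, 3600.0                    # W/m/K, kg/m^3, J/kg/K
wb, rhob, cb, Ta = 0.5e-3, 1060.0, 3770.0, 37.0    # perfusion terms
nx, dx, dt = 101, 1e-4, 0.005                      # grid and time step
T = np.full(nx, 37.0)
Q = np.zeros(nx); Q[:20] = 2e6                     # absorbed laser power near surface (W/m^3)

alpha = k / (rho * c)
assert alpha * dt / dx**2 < 0.5                    # explicit stability condition
for _ in range(2000):                              # 10 s of heating
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                         # crude insulated boundaries
    T += dt * (alpha * lap + (wb * rhob * cb * (Ta - T) + Q) / (rho * c))
print(f"peak temperature after 10 s: {T.max():.1f} C")
```

Adding the paper's water-evaporation and diffusion terms would further damp the temperature peaks, which is precisely the effect the abstract reports.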
1990-11-01
(Q + aa')^(-1) = Q^(-1) - Q^(-1)aa'Q^(-1) / (1 + a'Q^(-1)a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and… [table-of-contents residue: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative…] …the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
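The reconstructed identity is the rank-one (Sherman-Morrison) case of Woodbury's formula and is easy to verify numerically:

```python
# Numerical check of (Q + aa')^{-1} = Q^{-1} - Q^{-1}aa'Q^{-1} / (1 + a'Q^{-1}a)
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q = rng.normal(size=(n, n))
Q = Q @ Q.T + n * np.eye(n)            # make Q symmetric and well-conditioned
a = rng.normal(size=n)

Qinv = np.linalg.inv(Q)
lhs = np.linalg.inv(Q + np.outer(a, a))
rhs = Qinv - np.outer(Qinv @ a, a @ Qinv) / (1 + a @ Qinv @ a)
print("max abs difference:", np.abs(lhs - rhs).max())   # ~1e-15
```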
Teaching nutritional biochemistry: an experimental approach using yeast.
Alonso, Manuel; Stella, Carlos A
2012-12-01
In this report, we present a practical approach to teaching several topics in nutrition to science students at the high school and college freshmen levels. This approach uses baker's yeast (Saccharomyces cerevisiae) as a biological system model. The diameters of yeast colonies, which vary according to the nutrients present in the medium, can be observed, compared, and used to teach metabolic requirements. The experiments described in this report show simple macroscopic evidence of submicroscopic nutritional events. This can serve as a useful base for an analogy of heterotrophic human cell nutrition.
Acoustic backscatter models of fish: Gradual or punctuated evolution
NASA Astrophysics Data System (ADS)
Horne, John K.
2004-05-01
Sound-scattering characteristics of aquatic organisms are routinely investigated using theoretical and numerical models. Development of the inverse approach by van Holliday and colleagues in the 1970s catalyzed the development and validation of backscatter models for fish and zooplankton. As the understanding of biological scattering properties increased, so did the number and computational sophistication of backscatter models. The complexity of data used to represent modeled organisms has also evolved in parallel to model development. Simple geometric shapes representing body components or the whole organism have been replaced by anatomically accurate representations derived from imaging sensors such as computer-aided tomography (CAT) scans. In contrast, Medwin and Clay (1998) recommend that fish and zooplankton should be described by simple theories and models, without acoustically superfluous extensions. Since van Holliday's early work, how have data and computational complexity influenced the accuracy and precision of model predictions? How has the understanding of aquatic organism scattering properties increased? Significant steps in the history of model development will be identified and changes in model results will be characterized and compared. [Work supported by ONR and the Alaska Fisheries Science Center.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Jason; Winkler, Jon
2018-01-31
Moisture buffering of building materials has a significant impact on the building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand the sensitivity of this model to uncertain inputs. In this paper, we use the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, and a set of simple inputs, can give reasonable prediction of the indoor humidity.
Cylindrically symmetric Green's function approach for modeling the crystal growth morphology of ice.
Libbrecht, K G
1999-08-01
We describe a front-tracking Green's function approach to modeling cylindrically symmetric crystal growth. This method is simple to implement, and with little computer power can adequately model a wide range of physical situations. We apply the method to modeling the hexagonal prism growth of ice crystals, which is governed primarily by diffusion along with anisotropic surface kinetic processes. From ice crystal growth observations in air, we derive measurements of the kinetic growth coefficients for the basal and prism faces as a function of temperature, for supersaturations near the water saturation level. These measurements are interpreted in the context of a model for the nucleation and growth of ice, in which the growth dynamics are dominated by the structure of a disordered layer on the ice surfaces.
Neuroendocrine control of seasonal plasticity in the auditory and vocal systems of fish
Forlano, Paul M.; Sisneros, Joseph A.; Rohmann, Kevin N.; Bass, Andrew H.
2014-01-01
Seasonal changes in reproductive-related vocal behavior are widespread among fishes. This review highlights recent studies of the vocal plainfin midshipman fish, Porichthys notatus, a neuroethological model system used for the past two decades to explore neural and endocrine mechanisms of vocal-acoustic social behaviors shared with tetrapods. Integrative approaches combining behavior, neurophysiology, neuropharmacology, neuroanatomy, and gene expression methodologies have taken advantage of simple, stereotyped and easily quantifiable behaviors controlled by discrete neural networks in this model system to enable discoveries such as the first demonstration of adaptive seasonal plasticity in the auditory periphery of a vertebrate as well as rapid steroid and neuropeptide effects on vocal physiology and behavior. This simple model system has now revealed cellular and molecular mechanisms underlying seasonal and steroid-driven auditory and vocal plasticity in the vertebrate brain.
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
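A minimal sketch of the recommended mixed-effects approach, using statsmodels on synthetic longitudinal data with a random intercept per subject (all variable names and values are invented for illustration):

```python
# Mixed-effects regression for longitudinal data: fixed time effect,
# random intercept per subject (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_visits = 30, 5
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
intercepts = rng.normal(0, 1.0, n_subj)          # subject-level variation
y = 10 + intercepts[subj] - 0.5 * time + rng.normal(0, 0.5, subj.size)
df = pd.DataFrame({"y": y, "time": time, "subject": subj})

model = smf.mixedlm("y ~ time", df, groups=df["subject"]).fit()
print(model.params)    # fixed effects: intercept ~10, slope ~-0.5
```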
Predictability in community dynamics.
Blonder, Benjamin; Moulton, Derek E; Blois, Jessica; Enquist, Brian J; Graae, Bente J; Macias-Fauria, Marc; McGill, Brian; Nogué, Sandra; Ordonez, Alejandro; Sandel, Brody; Svenning, Jens-Christian
2017-03-01
The coupling between community composition and climate change spans a gradient from no lags to strong lags. The no-lag hypothesis is the foundation of many ecophysiological models, correlative species distribution modelling and climate reconstruction approaches. Simple lag hypotheses have become prominent in disequilibrium ecology, proposing that communities track climate change following a fixed function or with a time delay. However, more complex dynamics are possible and may lead to memory effects and alternate unstable states. We develop graphical and analytic methods for assessing these scenarios and show that these dynamics can appear in even simple models. The overall implications are that (1) complex community dynamics may be common and (2) detailed knowledge of past climate change and community states will often be necessary yet sometimes insufficient to make predictions of a community's future state.
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting.
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-06-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key points: solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; the magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
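The scheme, smooth to model scale and then re-inject small-scale noise with observed statistics, can be mimicked in a few lines. The series, filter length, and noise parameterization below are purely illustrative.

```python
# Toy downscaling: 8 h boxcar smoothing stands in for solar wind model
# output; resampled residuals restore small-scale structure.
import numpy as np

rng = np.random.default_rng(0)
n = 10 * 24 * 60                                        # 10 days at 1-min cadence
obs = 5 + np.cumsum(rng.normal(0, 0.02, n))             # synthetic "observations"

w = 8 * 60                                              # 8 h boxcar filter
smooth = np.convolve(obs, np.ones(w) / w, mode="same")  # "model-scale" signal

residuals = obs - smooth                                # observed small-scale noise
ensemble = [smooth + rng.choice(residuals, size=n)
            for _ in range(10)]                         # 10 downscaled members
print("member spread about smooth signal:", round(np.std(ensemble[0] - smooth), 3))
```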
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, J. Y.; Riley, W. J.
We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented so as to ensure mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter includes two steps: (1) formulation of the biogeochemical processes with a matrix of stoichiometric coefficients and (2) application of Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches that are implemented in many existing models (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)) of carbon and nutrient interactions, which are error prone when adding new processes, even for experienced modelers. Through an example implementation with a CENTURY-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions, (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed, and (3) successfully avoided the potential conceptual ambiguities that are implied by those ad hoc down-regulation approaches. We expect our approach will make future biogeochemical models easier to improve and more robust.
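The two-step recipe can be sketched directly: form the stoichiometric matrix, then apply Liebig's law of the minimum so an explicit step cannot drive any pool negative. This toy version uses a single scalar limiter and invented numbers, a simplification of the paper's dynamic stoichiometric treatment.

```python
# Liebig-style flux limiter on a stoichiometric reaction network
# (illustrative pools, stoichiometry, and rates).
import numpy as np

pools = np.array([10.0, 0.05, 0.01])      # e.g., [C, N, P]
# Rows = pools, columns = reactions; negative entries consume a pool
S = np.array([[-1.0,  -0.5],
              [-0.1,   0.0],
              [-0.01, -0.002]])
rates = np.array([8.0, 30.0])             # unlimited reaction rates
dt = 1.0

consumption = (-np.clip(S, None, 0.0) @ rates) * dt    # demand per pool
with np.errstate(divide="ignore"):
    ratios = np.where(consumption > 0, pools / consumption, np.inf)
limiter = min(1.0, ratios.min())          # Liebig's law of the minimum
pools += S @ (rates * limiter) * dt
print("limiter:", round(limiter, 4), "pools:", pools)  # all pools stay >= 0
```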
Simulating Donnan equilibria based on the Nernst-Planck equation
NASA Astrophysics Data System (ADS)
Gimmi, Thomas; Alt-Epping, Peter
2018-07-01
Understanding ion transport through clays and clay membranes is important for many geochemical and environmental applications. Ion transport is affected by electrostatic forces exerted by charged clay surfaces. Anions are partly excluded from pore water near these surfaces, whereas cations are enriched. Such effects can be modeled by the Donnan approach. Here we introduce a new, comparatively simple way to represent Donnan equilibria in transport simulations. We include charged surfaces as immobile ions in the balance equation and calculate coupled transport of all components, including the immobile charges, with the Nernst-Planck equation. This results in an additional diffusion potential that influences ion transport, leading to Donnan ion distributions while maintaining local charge balance. The validity of our new approach was demonstrated by comparing Nernst-Planck simulations using the reactive transport code Flotran with analytical solutions available for simple Donnan systems. Attention has to be paid to the numerical evaluation of the electrochemical migration term in the Nernst-Planck equation to obtain correct results for asymmetric electrolytes. Sensitivity simulations demonstrate the influence of various Donnan model parameters on simulated anion accessible porosities. It is furthermore shown that the salt diffusion coefficient in a Donnan pore depends on local concentrations, in contrast to the aqueous salt diffusion coefficient. Our approach can be easily implemented into other transport codes. It is versatile and facilitates, for instance, assessing the implications of different activity models for the Donnan porosity.
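For a 1:1 electrolyte, the Donnan equilibrium that such Nernst-Planck simulations should reproduce has a closed form, which makes a handy cross-check. The concentrations below are illustrative values, not the paper's cases.

```python
# Closed-form Donnan equilibrium for a 1:1 electrolyte with fixed negative
# charge X (per unit pore water): c+ c- = c0^2 and c+ - c- = X, so
# c+ = sqrt(c0^2 + (X/2)^2) + X/2,  c- = sqrt(c0^2 + (X/2)^2) - X/2
import numpy as np

c0 = 0.01        # external salt concentration (mol/L), assumed
X  = 0.1         # fixed negative charge concentration (mol/L), assumed

root = np.sqrt(c0**2 + (X / 2)**2)
c_plus, c_minus = root + X / 2, root - X / 2
print(f"cation enrichment : {c_plus / c0:.1f}x")
print(f"anion exclusion   : {c_minus / c0:.3f}x")
print("charge balance ok :", np.isclose(c_plus - c_minus, X))
```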
Neuromorphic Computing: A Post-Moore's Law Complementary Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuman, Catherine D; Birdwell, John Douglas; Dean, Mark
2016-01-01
We describe our approach to post-Moore's law computing with three neuromorphic computing models that share a RISC philosophy, featuring simple components combined with a flexible and programmable structure. We envision these to be leveraged as co-processors, or as data filters to provide in situ data analysis in supercomputing environments.
Calibration of the soil conditioning index (SCI) to soil organic carbon in the southeastern USA
USDA-ARS?s Scientific Manuscript database
Prediction of soil organic C sequestration with adoption of various conservation agricultural management approaches is needed to meet the emerging market for environmental services provided by agricultural land stewardship. The soil conditioning index (SCI) is a relatively simple model used by the ...
Using Lotus 1-2-3 for "Non-Stop" Graphic Simulation.
ERIC Educational Resources Information Center
Godin, Victor B.; Rao, Ashok
1988-01-01
Discusses the use of Lotus 1-2-3 to create non-stop graphic displays of simulation models. Describes a simple application of this technique using the distribution resulting from repeated throws of dice. Lists other software used with this technique. Stresses the advantages of this approach in education. (CW)
Language Management in the Czech Republic
ERIC Educational Resources Information Center
Neustupny, J. V.; Nekvapil, Jiri
2003-01-01
This monograph, based on the Language Management model, provides information on both the "simple" (discourse-based) and "organised" modes of attention to language problems in the Czech Republic. This includes but is not limited to the language policy of the State. This approach does not satisfy itself with discussing problems…
A Resource-Allocation Theory of Classroom Management.
ERIC Educational Resources Information Center
McDonald, Frederick J.
A fresh approach to classroom management, which responds both to the present body of knowledge in this area and extends to beginning teachers a practical, flexible, and simple method of maintaining classroom control, is presented. Shortcomings of previous management theories (in particular, the Direct Instruction Model) are discussed, and the need…
Flowfield computation of entry vehicles
NASA Technical Reports Server (NTRS)
Prabhu, Dinesh K.
1990-01-01
The equations governing the multidimensional flow of a reacting mixture of thermally perfect gases were derived. The modeling procedures for the various terms of the conservation laws are discussed. A numerical algorithm, based on the finite-volume approach, to solve these conservation equations was developed. The advantages and disadvantages of the present numerical scheme are discussed from the point of view of accuracy, computer time, and memory requirements. A simple one-dimensional model problem was solved to prove the feasibility and accuracy of the algorithm. A computer code implementing the above algorithm was developed and is presently being applied to simple geometries and conditions. Once the code is completely debugged and validated, it will be used to compute the complete unsteady flow field around the Aeroassist Flight Experiment (AFE) body.
A simple model for the evolution of a non-Abelian cosmic string network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cella, G.; Pieroni, M., E-mail: giancarlo.cella@pi.infn.it, E-mail: mauro.pieroni@apc.univ-paris7.fr
2016-06-01
In this paper we present the results of numerical simulations intended to study the behavior of non-Abelian cosmic string networks. In particular we are interested in discussing the variations in the asymptotic behavior of the system as we vary the number of generators for the topological defects. A simple model which allows for cosmic strings is presented and its lattice discretization is discussed. The evolution of the generated cosmic string networks is then studied for different values of the number of generators for the topological defects. A scaling solution appears to be approached in most cases, and we present an argument to justify the lack of scaling for the residual cases.
Conifer ovulate cones accumulate pollen principally by simple impaction.
Cresswell, James E; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A; Young, Phillipe G; Tabor, Gavin R
2007-11-13
In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones.
The time series approach to short term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, M.T.; Behr, S.M.
The application of time series analysis methods to load forecasting is reviewed. It is shown that Box and Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is the inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box and Jenkins models are compared with a forecasting procedure currently used by a utility company.
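A minimal Box-Jenkins fit on a synthetic load-like series, using statsmodels with illustrative model orders; a real study would identify the orders from the autocorrelation function, as the abstract describes, and would likely use a seasonal model for the daily cycle.

```python
# ARIMA fit and next-day forecast on a synthetic hourly load series
# (illustrative orders, not an identified model).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(24 * 14)                                # two weeks, hourly
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

fit = ARIMA(load, order=(2, 0, 1)).fit()
forecast = fit.forecast(steps=24)                     # next-day forecast
print("first 6 forecast hours:", np.round(forecast[:6], 1))
```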
Vertical and pitching resonance of train cars moving over a series of simple beams
NASA Astrophysics Data System (ADS)
Yang, Y. B.; Yau, J. D.
2015-02-01
The resonant response, including both vertical and pitching motions, of an undamped sprung mass unit moving over a series of simple beams is studied by a semi-analytical approach. For a sprung mass that is very small compared with the beam, we first simplify the sprung mass as a constant moving force and obtain the response of the beam in closed form. With this, we then solve for the response of the sprung mass passing over a series of simple beams, and validate the solution by an independent finite element analysis. To evaluate the pitching resonance, we consider the cases of a two-axle model and a coach model traveling over rough rails supported by a series of simple beams. The resonance of a train car is characterized by the fact that its response continues to build up, as it travels over more and more beams. For train cars with long axle intervals, the vertical acceleration induced by pitching resonance dominates the peak response of the train traveling over a series of simple beams. The present semi-analytical study allows us to grasp the key parameters involved in the primary/sub-resonant responses. Other phenomena of resonance are also discussed in the exemplar study.
Lepora, Nathan F; Blomeley, Craig P; Hoyland, Darren; Bracci, Enrico; Overton, Paul G; Gurney, Kevin
2011-11-01
The study of active and passive neuronal dynamics usually relies on a sophisticated array of electrophysiological, staining and pharmacological techniques. We describe here a simple complementary method that recovers many findings of these more complex methods but relies only on a basic patch-clamp recording approach. Somatic short and long current pulses were applied in vitro to striatal medium spiny (MS) and fast spiking (FS) neurons from juvenile rats. The passive dynamics were quantified by fitting two-compartment models to the short current pulse data. Lumped conductances for the active dynamics were then found by compensating this fitted passive dynamics within the current-voltage relationship from the long current pulse data. These estimated passive and active properties were consistent with previous more complex estimations of the neuron properties, supporting the approach. Relationships within the MS and FS neuron types were also evident, including a graduation of MS neuron properties consistent with recent findings about D1 and D2 dopamine receptor expression. Application of the method to simulated neuron data supported the hypothesis that it gives reasonable estimates of membrane properties and gross morphology. Therefore detailed information about the biophysics can be gained from this simple approach, which is useful for both classification of neuron type and biophysical modelling. Furthermore, because these methods rely upon no manipulations to the cell other than patch clamping, they are ideally suited to in vivo electrophysiology.
Abyaneh, M H; Wildman, R D; Ashcroft, I A; Ruiz, P D
2013-11-01
An analysis of the material properties of porcine corneas has been performed. A simple stress relaxation test was performed to determine the viscoelastic properties and a rheological model was built based on the Generalized Maxwell (GM) approach. A validation experiment using nano-indentation showed that an isotropic GM model was insufficient for describing the corneal material behaviour when exposed to a complex stress state. A new technique was proposed for determining the properties, using a combination of nano-indentation experiment, an isotropic and orthotropic GM model and inverse finite element method. The good agreement using this method suggests that this is a promising technique for measuring material properties in vivo and further work should focus on the reliability of the approach in practice.
A Bayesian Model of the Memory Colour Effect.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2018-01-01
According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
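Under Gaussian assumptions, the model's core computation is precision-weighted cue combination of the sensory estimate with a prior centred on the object's typical colour. The numbers below are illustrative, not the paper's fitted values.

```python
# Precision-weighted combination of a typical-colour prior with grey
# sensory evidence, yielding a memory-colour-like shift (toy values).
import numpy as np

mu_prior, var_prior = 8.0, 4.0     # typical object colour along one axis
mu_sense, var_sense = 0.0, 1.0     # colourimetrically grey stimulus

w = (1 / var_prior) / (1 / var_prior + 1 / var_sense)   # prior weight
mu_post = w * mu_prior + (1 - w) * mu_sense
print(f"perceived shift toward typical colour: {mu_post:.2f}")  # > 0
```

The posterior mean is pulled toward the typical colour, so a colourimetrically grey object appears slightly coloured, which is exactly the memory colour effect the abstract describes.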
A Bayesian Attractor Model for Perceptual Decision Making
Bitzer, Sebastian; Bruineberg, Jelle; Kiebel, Stefan J.
2015-01-01
Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel approach of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks.
Falke, Jeffrey A.; Dunham, Jason B.; Hockman-Wert, David; Pahl, Randy
2016-01-01
We provide a simple framework for diagnosing the impairment of stream water temperature for coldwater fishes across broad spatial extents based on a weight-of-evidence approach that integrates biological criteria, species distribution models, and geostatistical models of stream temperature. As a test case, we applied our approach to identify stream reaches most likely to be thermally impaired for Lahontan Cutthroat Trout Oncorhynchus clarkii henshawi in the upper Reese River, located in the northern Great Basin, Nevada. We first evaluated the capability of stream thermal regime descriptors to explain variation across 170 sites, and we found that the 7-d moving average of daily maximum stream temperatures (7DADM) provided minimal among-descriptor redundancy and, based on an upper threshold of 20°C, was also a good indicator of acute and chronic thermal stress. Next, we quantified the range of Lahontan Cutthroat Trout within our study area using a geographic distribution model. Finally, we used a geostatistical model to assess spatial variation in 7DADM and predict potential thermal impairment at the stream reach scale. We found that whereas 38% of reaches in our study area exceeded a 7DADM of 20°C and 35% were significantly warmer than predicted, only 17% both exceeded the biological criterion and were significantly warmer than predicted. This filtering allowed us to identify locations where physical and biological impairment were most likely within the network and that would represent the highest management priorities. Although our approach lacks the precision of more comprehensive approaches, it provides a broader context for diagnosing impairment and is a useful means of identifying priorities for more detailed evaluations across broad and heterogeneous stream networks.
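The 7DADM descriptor is straightforward to compute from a sub-daily record: take daily maxima, then a 7-day moving average. A sketch with pandas on a synthetic series (the temperatures and dates are invented):

```python
# 7DADM: 7-day moving average of daily maximum stream temperature.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2015-07-01", periods=30 * 24, freq="h")
temp = 15 + 5 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 1, idx.size)
s = pd.Series(temp, index=idx)

daily_max = s.resample("D").max()
sevendadm = daily_max.rolling(7).mean()
print("peak 7DADM:", round(sevendadm.max(), 1), "degrees C")
print("exceeds 20 C criterion:", bool((sevendadm > 20).any()))
```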
ERIC Educational Resources Information Center
Wangdi, Dumcho; Kanthang, Paisan; Precharattana, Monamorn
2017-01-01
This paper attempts to investigate the understanding of the law of mechanical energy conservation using a guided inquiry approach. A simple hands-on model was constructed and used to demonstrate the law of mechanical energy conservation. A total of 30 grade ten students from one of the middle secondary schools in western Bhutan participated in…
Progress towards quantum simulating the classical O(2) Model
2014-12-01
…approach by building up on simple models sharing some of the basic features of lattice QCD. In the context of condensed matter, a proof of principle that… independently. Explicit Hilbert space representations of the physical states and of their matrix elements are mostly absent from today's lattice QCD… to lattice QCD, seems possible and interesting.
Improved Analysis of Earth System Models and Observations using Simple Climate Models
NASA Astrophysics Data System (ADS)
Nadiga, B. T.; Urban, N. M.
2016-12-01
Earth system models (ESM) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs precludes their direct use in conjunction with the wide variety of tools that can further our understanding of climate. Here we are referring to tools that range from dynamical systems tools, which give insight into underlying flow structure and topology, to applied mathematical and statistical techniques, which are central to quantifying stability, sensitivity, uncertainty and predictability, to machine learning tools, which are now being rapidly developed or improved. Our approach to facilitating the use of such tools is to analyze output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans in the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. So, we consider a range of models based on integral balances--balances that have to be realized in all first-principles based models of the climate system, including the most detailed state-of-the-art climate simulations. The models range from simple models of energy balance to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, the Antarctic Circumpolar Current (ACC) and eddy mixing. Results from Bayesian analysis of such models using both ESM experiments and actual observations are presented. One such result points to the importance of direct sequestration of heat below 700 m, a process that is not allowed for in the simple models traditionally used to deduce climate sensitivity.
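As an illustration of the kind of simple integral-balance model referred to above, the sketch below implements a standard two-layer energy-balance model of ocean heat uptake; it is not the authors' model, and all parameter values are placeholders:

```python
import numpy as np

def two_layer_ebm(forcing, lam=1.3, gamma=0.7, c_u=8.0, c_d=100.0, dt=1.0):
    """forcing: radiative forcing series (W m-2); lam: climate feedback (W m-2 K-1);
    gamma: inter-layer heat exchange coefficient; c_u, c_d: layer heat capacities."""
    T = np.zeros(len(forcing))      # upper-ocean (surface) temperature anomaly
    Td = np.zeros(len(forcing))     # deep-ocean temperature anomaly
    for t in range(1, len(forcing)):
        dT = (forcing[t-1] - lam*T[t-1] - gamma*(T[t-1] - Td[t-1])) / c_u
        dTd = gamma*(T[t-1] - Td[t-1]) / c_d
        T[t] = T[t-1] + dt*dT
        Td[t] = Td[t-1] + dt*dTd
    return T, Td

# A Bayesian treatment would place priors on (lam, gamma, c_u, c_d) and compare
# T against ESM output or observations through a likelihood, e.g. via MCMC.
forcing = np.linspace(0.0, 4.0, 200)    # idealized ramp to ~4 W m-2 over 200 yr
T, Td = two_layer_ebm(forcing)
```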
Shanableh, A
2005-01-01
The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 degrees C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation (k = k0*e^(-Ea/RT)), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 degrees C. Using predetermined values for k0 in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease at which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
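The kinetic form used here is easy to reproduce; the sketch below evaluates the Arrhenius rate and the corresponding first-order decay, with illustrative (not fitted) parameter values:

```python
import numpy as np

R = 8.314                      # gas constant, J mol-1 K-1

def arrhenius_k(T_kelvin, k0=1.0e7, Ea=80.0e3):
    """First-order rate constant k(T) = k0 * exp(-Ea / (R*T)); k0, Ea are placeholders."""
    return k0 * np.exp(-Ea / (R * T_kelvin))

def first_order_decay(C0, T_celsius, t_minutes):
    """Remaining COD (or PCOD) after t minutes at a given treatment temperature."""
    k = arrhenius_k(T_celsius + 273.15)
    return C0 * np.exp(-k * t_minutes)

# e.g. remaining COD after 30 min of hydrothermal treatment at 300 degrees C
print(first_order_decay(C0=1000.0, T_celsius=300.0, t_minutes=30.0))
```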
Kumberger, Peter; Durso-Cain, Karina; Uprichard, Susan L; Dahari, Harel; Graw, Frederik
2018-04-17
Mathematical models based on ordinary differential equations (ODE) that describe the population dynamics of viruses and infected cells have been an essential tool to characterize and quantify viral infection dynamics. Although an important aspect of viral infection is the dynamics of viral spread, which includes transmission by cell-free virions and direct cell-to-cell transmission, models used so far have either ignored cell-to-cell transmission completely or accounted for this process by simple mass-action kinetics between infected and uninfected cells. In this study, we show that the simple mass-action approach falls short when describing viral spread in a spatially-defined environment. Using simulated data, we present a model extension that allows correct quantification of cell-to-cell transmission dynamics within a monolayer of cells. By considering the decreasing proportion of cells that can contribute to cell-to-cell spread with progressing infection, our extension accounts for the transmission dynamics on a single-cell level while still remaining applicable to standard population-based experimental measurements. While the ability to infer the proportion of cells infected by either of the transmission modes depends on the viral diffusion rate, the improved estimates obtained using our novel approach emphasize the need to correctly account for spatial aspects when analyzing viral spread.
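For context, a minimal sketch of the baseline ODE structure with both transmission routes under mass-action kinetics (the approach the study shows to fall short in spatial settings); parameter values are placeholders, not the paper's estimates:

```python
from scipy.integrate import solve_ivp

def virus_model(t, y, beta1=1e-7, beta2=1e-6, delta=0.5, p=10.0, c=3.0):
    T, I, V = y                               # target cells, infected cells, virions
    dT = -beta1*T*V - beta2*T*I               # infection via virions and via cell contact
    dI = beta1*T*V + beta2*T*I - delta*I      # infected-cell death at rate delta
    dV = p*I - c*V                            # virion production and clearance
    return [dT, dI, dV]

sol = solve_ivp(virus_model, t_span=(0.0, 20.0), y0=[1e6, 0.0, 10.0], dense_output=True)
```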
Differential equation models for sharp threshold dynamics.
Schramm, Harrison C; Dimitrov, Nedialko B
2014-01-01
We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
Human mobility in a continuum approach.
Simini, Filippo; Maritan, Amos; Néda, Zoltán
2013-01-01
Human mobility is investigated using a continuum approach that allows one to calculate the probability of observing a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework in which previously proposed mobility models, such as the gravity model, the intervening opportunities model, and the recently introduced radiation model, naturally result as special cases. A new form of the radiation model is derived, and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape. PMID:23555885
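For reference, the radiation model's commuting flux between two locations takes a simple closed form (Simini et al.), transcribed directly here:

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected flux from i to j: T_i is the total number of trips leaving i;
    m_i, n_j are the origin/destination populations; s_ij is the population
    within a circle of radius r_ij centered on i, excluding m_i and n_j."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
```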
Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas
NASA Astrophysics Data System (ADS)
Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.
2003-04-01
Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy of mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.
Closed-loop, pilot/vehicle analysis of the approach and landing task
NASA Technical Reports Server (NTRS)
Anderson, M. R.; Schmidt, D. K.
1986-01-01
In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R. E. Smith are also presented.
NASA Astrophysics Data System (ADS)
Halimah, B. Z.; Azlina, A.; Sembok, T. M.; Sufian, I.; Sharul Azman, M. N.; Azuraliza, A. B.; Zulaiha, A. O.; Nazlia, O.; Salwani, A.; Sanep, A.; Hailani, M. T.; Zaher, M. Z.; Azizah, J.; Nor Faezah, M. Y.; Choo, W. O.; Abdullah, Chew; Sopian, B.
The Holistic Islamic Banking System (HiCORE), a banking system suitable for a virtual banking environment, was created through a university-industry collaboration initiative between Universiti Kebangsaan Malaysia (UKM) and Fuziq Software Sdn Bhd. HiCORE was modeled on a multi-tiered Simple Services-Oriented Architecture (S-SOA), using a parameter-based semantic approach. HiCORE's existence is timely as the financial world is looking for a new approach to creating banking and financial products that are interest-free or based on Islamic Syariah principles and jurisprudence. An interest-free banking system has currently caught the interest of bankers and financiers all over the world. HiCORE's parameter-based module houses the Customer Information File (CIF), Deposit and Financing components. The parameter-based module represents the third tier of the multi-tiered Simple SOA approach. This paper highlights the multi-tiered, parameter-driven approach to the creation of new Islamic products based on the 'dalil' (Quran), 'syarat' (rules) and 'rukun' (procedures) as required by Syariah principles and jurisprudence, reflected by the semantic ontology embedded in the parameter module of the system.
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Dudley, Kenneth
2003-01-01
A simple method is presented to estimate the complex dielectric constants of individual layers of a multilayer composite material. Using the MatLab Optimization Tools, simple MatLab scripts are written to search for the electric properties of individual layers so as to match the measured and calculated S-parameters. Single-layer composite materials formed from materials such as Bakelite, Nomex Felt, Fiber Glass, Woven Composite B and G, Nano Material #0, Cork, and Garlock, in different thicknesses, are tested using the present approach. Assuming the sample thicknesses to be unknown, the present approach is shown to work well in estimating both the dielectric constants and the thicknesses. A number of two-layer composite materials formed by various combinations of the above individual materials are tested using the present approach. However, the present approach could not provide estimates close to the true values when the thicknesses of the individual layers were assumed to be unknown. This is attributed to the difficulty of modelling the presence of air gaps between the layers during the measurement of the S-parameters. A few examples of three-layer composites are also presented.
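A sketch of the fitting idea in Python rather than MatLab, assuming the standard normal-incidence formulas for the S-parameters of a nonmagnetic slab in free space; the synthetic "measured" data and starting values are illustrative, and this is not the authors' script:

```python
import numpy as np
from scipy.optimize import least_squares

c0 = 299792458.0

def slab_s_params(freq_hz, eps_r, d):
    """S11/S21 of a homogeneous nonmagnetic slab of thickness d (m)."""
    k0 = 2*np.pi*freq_hz/c0
    n = np.sqrt(eps_r)
    gamma = (1 - n) / (1 + n)            # air-dielectric interface reflection
    P = np.exp(-1j * k0 * n * d)         # one-way propagation factor
    denom = 1 - gamma**2 * P**2
    return gamma*(1 - P**2)/denom, P*(1 - gamma**2)/denom

def residual(x, freq_hz, s11_meas, s21_meas):
    eps_r = x[0] - 1j*x[1]               # eps' - j*eps''
    s11, s21 = slab_s_params(freq_hz, eps_r, d=x[2])
    return np.concatenate([(s11 - s11_meas).real, (s11 - s11_meas).imag,
                           (s21 - s21_meas).real, (s21 - s21_meas).imag])

freq = np.linspace(8e9, 12e9, 201)                     # X-band sweep (placeholder)
s11_meas, s21_meas = slab_s_params(freq, 3.2 - 0.08j, 2e-3)   # synthetic "measurement"
fit = least_squares(residual, x0=[3.0, 0.1, 1.5e-3], args=(freq, s11_meas, s21_meas))
print("estimated eps', eps'', thickness:", fit.x)
```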
Information Theory and the Earth's Density Distribution
NASA Technical Reports Server (NTRS)
Rubincam, D. P.
1979-01-01
An argument is presented for using the information theory approach as an inference technique in solid earth geophysics. A spherically symmetric density distribution is derived as an example of the method. A simple model of the earth plus knowledge of its mass and moment of inertia led to a density distribution which was surprisingly close to the optimum distribution. Future directions for the information theory approach in solid earth geophysics, as well as its strengths and weaknesses, are discussed.
Corominas, Lluís; Flores-Alsina, Xavier; Snip, Laura; Vanrolleghem, Peter A
2012-11-01
New tools are being developed to estimate greenhouse gas (GHG) emissions from wastewater treatment plants (WWTPs). There is a trend to move from empirical factors to simple comprehensive and more complex process-based models. Thus, the main objective of this study is to demonstrate the importance of using process-based dynamic models to better evaluate GHG emissions. This is tackled by defining a virtual case study based on the whole-plant Benchmark Simulation Model Platform No. 2 (BSM2) and estimating GHG emissions using two approaches: (1) a combination of simple comprehensive models based on empirical assumptions and (2) a more sophisticated approach, which describes the mechanistic production of nitrous oxide (N2O) in the biological reactor (ASMN) and the generation of carbon dioxide (CO2) and methane (CH4) from the Anaerobic Digestion Model 1 (ADM1). Models already presented in literature are used, but modifications compared to the previously published ASMN model have been made. Also, model interfaces between the ASMN and the ADM1 models have been developed. The results show that the use of the different approaches leads to significant differences in the N2O emissions (a factor of 3) but not in the CH4 emissions (about 4%). Estimations of GHG emissions are also compared for steady-state and dynamic simulations. Averaged values for GHG emissions obtained with steady-state and dynamic simulations are rather similar. However, when looking at the dynamics of N2O emissions, large variability (3-6 ton CO2e per day) is observed due to changes in the influent wastewater C/N ratio and temperature, which would not be captured by a steady-state analysis (4.4 ton CO2e per day). Finally, this study also shows the effect of changing the anaerobic digestion volume on the total GHG emissions. Decreasing the anaerobic digester volume resulted in a slight reduction in CH4 emissions (about 5%), but significantly decreased N2O emissions in the water line (by 14%). Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Hanzer, F.; Marke, T.; Steiger, R.; Strasser, U.
2012-04-01
Tourism, and particularly winter tourism, is a key factor for the Austrian economy. Judging from currently available climate simulations, the Austrian Alps show a particularly high vulnerability to climatic changes. To reduce the exposure of ski areas to changes in natural snow conditions, as well as to generally enhance snow conditions at skiing sites, technical snowmaking is widely utilized across Austrian ski areas. While such measures result in better snow conditions at the skiing sites and are important for the local skiing industry, their economic efficiency also has to be taken into account. The current work emerges from the project CC-Snow II, where improved future climate scenario simulations are used to determine future natural and artificial snow conditions and their effects on tourism and economy in the Austrian Alps. In a first step, a simple technical snowmaking approach is incorporated into the process-based snow model AMUNDSEN, which operates at a spatial resolution of 10-50 m and a temporal resolution of 1-3 hours. Locations of skiing slopes within a ski area in Styria, Austria, were digitized and imported into the model environment. During a predefined time frame at the beginning of the ski season, the model produces a maximum possible amount of technical snow and distributes the associated snow on the slopes, whereas afterwards, until the end of the ski season, the model tries to maintain a certain snow depth threshold value on the slopes. Due to the small number of required input parameters, this approach is easily transferable to other ski areas. In our poster contribution, we present first results of this snowmaking approach and give an overview of the data and methodology applied. In a further step in CC-Snow, this simple bulk approach will be extended to consider actual snow cannon locations and technical specifications, which will allow a more detailed description of technical snow production as well as cannon-based recordings of water and energy consumption.
Dynamic calibration of agent-based models using data assimilation.
Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S
2016-04-01
A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.
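A minimal stochastic (perturbed-observation) EnKF analysis step of the kind applied here, as a NumPy sketch rather than the paper's code:

```python
import numpy as np

def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation vector;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)               # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                           # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (Y - H @ X)                          # analysis ensemble
```

For an ABM, each ensemble column would hold one model realization's aggregated state (e.g., counts of agents per zone), observed through footfall counters via H.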
Active disturbance rejection controller for chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Both, Roxana; Dulf, Eva H.; Muresan, Cristina I., E-mail: roxana.both@aut.utcluj.ro
2015-03-10
In the petrochemical industry, the synthesis of 2-ethyl-hexanol oxo-alcohols (plasticizer alcohols) is of high importance, being achieved through hydrogenation of 2-ethyl-hexenal inside catalytic trickle-bed three-phase reactors. For this type of process the use of advanced control strategies is suitable due to the nonlinear behavior and extreme sensitivity to load changes and other disturbances. Due to the complexity of the mathematical model, one approach was to use a simple linear model of the process in combination with an advanced control algorithm, such as robust control, which takes into account the model uncertainties, the disturbances and command signal limitations. However, the resulting controller is complex and involves costly hardware. This paper proposes a simple integer-order control scheme using a linear model of the process, based on the active disturbance rejection method. By treating the model dynamics as a common disturbance and actively rejecting it, active disturbance rejection control (ADRC) can achieve the desired response. Simulation results are provided to demonstrate the effectiveness of the proposed method.
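For illustration, a first-order linear ADRC step with the common bandwidth parameterization of the extended state observer; this is a generic sketch, not the authors' implementation, and all gains are placeholders:

```python
def adrc_step(y, u_prev, z, r, b0=1.0, omega_o=20.0, kp=4.0, dt=0.01):
    """One control update for a plant y' = f + b0*u, where f lumps model dynamics
    and disturbances. y: measured output; z = [z1, z2]: observer state (estimates
    of y and of the total disturbance f); r: setpoint. Returns (u, z_updated)."""
    l1, l2 = 2*omega_o, omega_o**2          # observer gains (poles at -omega_o)
    e = y - z[0]
    z1 = z[0] + dt*(z[1] + b0*u_prev + l1*e)    # estimate of the output
    z2 = z[1] + dt*(l2*e)                       # estimate of the total disturbance
    u = (kp*(r - z1) - z2) / b0                 # cancel disturbance, track setpoint
    return u, [z1, z2]
```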
Equivalent magnetic vector potential model for low-frequency magnetic exposure assessment
NASA Astrophysics Data System (ADS)
Diao, Y. L.; Sun, W. N.; He, Y. Q.; Leung, S. W.; Siu, Y. M.
2017-10-01
In this paper, a novel source model based on a magnetic vector potential for the assessment of induced electric field strength in a human body exposed to the low-frequency (LF) magnetic field of an electrical appliance is presented. The construction of the vector potential model requires only a single-component magnetic field to be measured close to the appliance under test, hence relieving considerable practical measurement effort—the radial basis functions (RBFs) are adopted for the interpolation of discrete measurements; the magnetic vector potential model can then be directly constructed by summing a set of simple algebraic functions of RBF parameters. The vector potentials are then incorporated into numerical calculations as the equivalent source for evaluations of the induced electric field in the human body model. The accuracy and effectiveness of the proposed model are demonstrated by comparing the induced electric field in a human model to that of the full-wave simulation. This study presents a simple and effective approach for modelling the LF magnetic source. The result of this study could simplify the compliance test procedure for assessing an electrical appliance regarding LF magnetic exposure.
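The RBF interpolation step described above can be sketched with SciPy's RBFInterpolator; the measurement plane, kernel choice, and data here are placeholders, not the paper's setup:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# points: (n, 2) measurement coordinates (m); bz: (n,) measured field component
points = np.random.rand(50, 2) * 0.3                       # placeholder 30 cm plane
bz = np.sin(10*points[:, 0]) * np.cos(10*points[:, 1])     # placeholder readings

interp = RBFInterpolator(points, bz, kernel="thin_plate_spline")
grid = np.stack(np.meshgrid(np.linspace(0, 0.3, 64),
                            np.linspace(0, 0.3, 64)), axis=-1).reshape(-1, 2)
bz_grid = interp(grid)     # dense field map feeding the vector potential model
```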
A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images
NASA Technical Reports Server (NTRS)
Memon, Nasir D.; Galatsanos, Nikolas
1995-01-01
In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Predicting the Ability of Marine Mammal Populations to Compensate for Behavioral Disturbances
2015-09-30
Fragmentary excerpt (only snippets of this abstract survive): "…approaches, including simple theoretical models as well as statistical analysis of data-rich conditions. Building on models developed for PCoD [2,3], we… …conditions is the population trajectory most likely to be affected (the central aim of PCoD). For the revised model presented here, we include a population… …averaged-condition individuals (here used as a proxy for individual health as defined in PCoD), and E is the quality of the environment in which the…"
Prevalence Incidence Mixture Models
The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data of the kind commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and for stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.
Science communication. Response to Comment on "Quantifying long-term scientific impact".
Wang, Dashun; Song, Chaoming; Shen, Hua-Wei; Barabási, Albert-László
2014-07-11
Wang, Mei, and Hicks claim that they observed large mean prediction errors when using our model. We find that their claims are a simple consequence of overfitting, which can be avoided by standard regularization methods. Here, we show that our model provides an effective means to identify papers that may be subject to overfitting, and the model, with or without prior treatment, outperforms the proposed naïve approach. Copyright © 2014, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
Howell, Robert R.; Radebaugh, Jani; M. C Lopes, Rosaly; Kerber, Laura; Solomonidou, Anezina; Watkins, Bryn
2017-10-01
Using remote sensing of planetary volcanism on objects such as Io to determine eruption conditions is challenging because the emitting region is typically not resolved and because exposed lava cools so quickly. A model of the cooling rate and eruption mechanism is typically used to predict the amount of surface area at different temperatures; that areal distribution is then convolved with a Planck blackbody emission curve, and the predicted spectrum is compared with observations. Often the broad nature of the Planck curve makes interpretation non-unique. However, different eruption mechanisms (for example, cooling fire-fountain droplets vs. cooling flows) have very different area-versus-temperature distributions, which can often be characterized by simple power laws. Furthermore, magmas of different composition have significantly different upper-limit cutoff temperatures. In order to test these models, in August 2016 and May 2017 we obtained spatially resolved observations of spreading Kilauea pahoehoe flows and fire fountains using a three-wavelength near-infrared prototype camera system. We have measured the area-versus-temperature distribution for the flows and find that over a relatively broad temperature range the distribution does follow a power law matching the theoretical predictions. As one approaches the solidus temperature, the observed area drops below the simple model predictions by an amount that seems to vary inversely with the vigor of the spreading rate. At these highest temperatures the simple models are probably inadequate. It appears necessary to model the visco-elastic stretching of the very thin crust which covers even the most recently formed surfaces. That deviation between observations and the simple models may be particularly important when using such remote sensing observations to determine magma eruption temperatures.
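A sketch of the forward model described above: a power-law area-temperature distribution convolved with Planck emission to predict near-infrared band radiances. The exponent, cutoff temperatures, and wavelengths are illustrative assumptions, not the study's values:

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral radiance B(lambda, T), W m-3 sr-1."""
    return 2*h*c**2 / lam**5 / np.expm1(h*c / (lam*kB*T))

def model_spectrum(lams, T_erupt=1450.0, T_min=700.0, n=2.0, ntemp=500):
    """Relative area A(T) ~ T**(-n) between solidus cutoff and eruption temperature."""
    T = np.linspace(T_min, T_erupt, ntemp)
    area = T**(-n)
    area /= area.sum()
    return np.array([(area * planck(w, T)).sum() for w in np.atleast_1d(lams)])

bands = np.array([1.2e-6, 1.6e-6, 2.2e-6])   # three near-IR wavelengths (m)
print(model_spectrum(bands))                  # band radiances to compare with data
```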
Tang, J. Y.; Riley, W. J.
2016-02-05
We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented so as to ensure mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter includes two steps: (1) formulation of the biogeochemical processes with a matrix of stoichiometric coefficients and (2) application of Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches implemented in many existing models of carbon and nutrient interactions (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)), which are error-prone when adding new processes, even for experienced modelers. Through an example implementation with a CENTURY-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions, (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed, and (3) successfully avoided the potential conceptual ambiguities implied by those ad hoc down-regulation approaches. We expect our approach will make future biogeochemical models easier to improve and more robust.
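A minimal sketch of the two-step limiter (stoichiometric matrix plus law of the minimum); the data layout is an assumption for illustration, not the paper's code:

```python
import numpy as np

def limited_fluxes(S_cons, fluxes, pools, dt):
    """S_cons[i, j]: amount of substrate j consumed per unit of reaction i;
    fluxes: unlimited reaction rates; pools: current substrate amounts."""
    demand = dt * fluxes[:, None] * S_cons          # substrate demand per reaction
    total = demand.sum(axis=0)                      # total demand per substrate
    # fraction of each substrate's demand that can actually be met this step
    avail = np.where(total > 0,
                     np.minimum(1.0, pools / np.maximum(total, 1e-300)), 1.0)
    # each reaction is scaled by its most limiting substrate (law of the minimum),
    # which keeps every pool non-negative and conserves mass by construction
    scale = np.array([avail[S_cons[i] > 0].min() if (S_cons[i] > 0).any() else 1.0
                      for i in range(S_cons.shape[0])])
    return fluxes * scale
```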
Semantic wireless localization of WiFi terminals in smart buildings
NASA Astrophysics Data System (ADS)
Ahmadi, H.; Polo, A.; Moriyama, T.; Salucci, M.; Viani, F.
2016-06-01
The wireless localization of mobile terminals in indoor scenarios by means of a semantic interpretation of the environment is addressed in this work. A training-less approach based on the real-time calibration of a simple path loss model is proposed, which combines (i) the received signal strength information measured by the wireless terminal and (ii) the topological features of the localization domain. A customized evolutionary optimization technique has been designed to estimate the optimal target position that fits both the complex wireless indoor propagation and the semantic target-environment relation. The proposed approach is experimentally validated in a real building area where the available WiFi network is opportunistically exploited for data collection. The presented results point out a reduction of the localization error obtained with the introduction of a very simple semantic interpretation of the considered scenario.
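A sketch of the calibration idea using a log-distance path loss model and a generic least-squares optimizer in place of the paper's evolutionary technique and semantic priors; the access-point positions and readings are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

ap_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # AP positions (m)
rss = np.array([-52.0, -61.0, -60.0, -68.0])                            # measured RSS (dBm)

def residual(theta):
    """theta = (x, y, P0, n): terminal position plus the path loss model
    RSS(d) = P0 - 10*n*log10(d), calibrated jointly with the position."""
    x, y, P0, n = theta
    d = np.linalg.norm(ap_xy - [x, y], axis=1) + 1e-6
    return (P0 - 10.0*n*np.log10(d)) - rss

fit = least_squares(residual, x0=[5.0, 5.0, -40.0, 2.0])
print("estimated position:", fit.x[:2])
```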
Structure and Dynamics of Solvent Landscapes in Charge-Transfer Reactions
NASA Astrophysics Data System (ADS)
Leite, Vitor B. Pereira
The dynamics of solvent polarization plays a major role in the control of charge transfer reactions. The success of Marcus theory, which describes the solvent influence via a single collective quadratic polarization coordinate, has been remarkable. Onuchic and Wolynes have recently proposed (J. Chem. Phys. 98, 2218, 1993) a simple model demonstrating how a complex, many-dimensional model composed of several dipole moments (representing solvent molecules or polar groups in proteins) can be reduced under the appropriate limits to the Marcus model. This work presents a dynamical study of the same model, which is characterized by two parameters: an average dipole-dipole interaction and a term associated with the roughness of the potential energy landscape. It is shown why the effective potential, obtained using a thermodynamic approach, is appropriate for the dynamics of the system. At high temperatures, the system exhibits effective diffusive one-dimensional dynamics, where the Born-Marcus limit is recovered. At low temperatures, a glassy phase appears with slow, non-self-averaging dynamics. At intermediate temperatures, the concept of equivalent diffusion paths and polarization dependence effects are discussed. This approach is extended to treat more realistic solvent models. Real solvents are discussed in terms of the simple parameters described above, and an analysis of how different regimes affect the rate of charge transfer is presented. Finally, these ideas are correlated to analogous problems in other areas.
Supermodeling With A Global Atmospheric Model
NASA Astrophysics Data System (ADS)
Wiegerinck, Wim; Burgers, Willem; Selten, Frank
2013-04-01
In weather and climate prediction studies it often turns out that the multi-model ensemble mean prediction has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach, or a weighted average of these. This approach is called the supermodeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model to the solutions of all other models in the ensemble. With a suitable choice of the connection strengths, the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This approach is called connected SUMO. An alternative approach is to integrate a weighted-average model: weighted SUMO. At each time step all models in the ensemble calculate their tendencies, these tendencies are combined in a weighted average, and the state is integrated one time step into the future with this weighted-average tendency. It was shown that when the connected SUMO synchronizes perfectly, it follows the weighted-average trajectory and both approaches yield the same solution. In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating the extra-tropical circulation in the Northern Hemisphere winter quite realistically.
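A toy illustration of connected SUMO with two imperfect Lorenz-63 models nudged toward each other (in a real supermodel the connection strengths are trained against observations); all parameter values below are placeholders:

```python
import numpy as np

def lorenz(s, sigma, rho, beta):
    x, y, z = s
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def connected_sumo(steps=10000, dt=0.005, C=5.0):
    s1, s2 = np.array([1.0, 1.0, 1.0]), np.array([-1.0, 2.0, 0.5])
    for _ in range(steps):
        # each model's own tendency plus a linear nudging term toward the other
        s1 = s1 + dt*(lorenz(s1, 11.0, 26.0, 8/3) + C*(s2 - s1))
        s2 = s2 + dt*(lorenz(s2, 9.0, 30.0, 8/3) + C*(s1 - s2))
    return s1, s2   # with sufficiently strong C the two trajectories synchronize

print(connected_sumo())
```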
Advanced Method to Estimate Fuel Slosh Simulation Parameters
NASA Technical Reports Server (NTRS)
Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl
2005-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the estimation approach to a simple, accurately modeled system, its effectiveness and accuracy can be evaluated. The same experimental setup can then be used with fluid-filled tanks to further evaluate the effectiveness of the process. Ultimately, the proven process can be applied to the full-sized spinning experimental setup to quickly and accurately determine the slosh model parameters for a particular spacecraft mission. Automating the parameter identification process will save time, allow more changes to be made to proposed designs, and lower the cost in the initial design stages.
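The automated identification idea can be sketched in plain Python rather than MATLAB/SimMechanics: simulate the one-degree-of-freedom damped pendulum, then recover its parameters from (synthetic) angle measurements with a least-squares fit; all values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

g = 9.81

def simulate(L, c, t):
    """Damped pendulum: theta'' = -(g/L) sin(theta) - c theta'; returns theta(t)."""
    f = lambda _t, s: [s[1], -(g/L)*np.sin(s[0]) - c*s[1]]
    return solve_ivp(f, (t[0], t[-1]), [0.3, 0.0], t_eval=t).y[0]

t = np.linspace(0, 10, 500)
measured = simulate(1.0, 0.15, t) + np.random.default_rng(1).normal(0, 0.005, t.size)

fit = least_squares(lambda p: simulate(p[0], p[1], t) - measured, x0=[0.8, 0.05])
print("estimated L, c:", fit.x)    # should approach the true (1.0, 0.15)
```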
NASA Astrophysics Data System (ADS)
Ke, Haohao; Ondov, John M.; Rogge, Wolfgang F.
2013-12-01
Composite chemical profiles of motor vehicle emissions were extracted from ambient measurements at a near-road site in Baltimore during a windless traffic episode in November 2002, using four independent approaches: simple peak analysis, windless-model-based linear regression, PMF, and UNMIX. Although the profiles are in general agreement, the windless-model-based profile treatment more effectively removes interference from non-traffic sources and is deemed more accurate for many species. In addition to abundances of routine pollutants (e.g., NOx, CO, PM2.5, EC, OC, sulfate, and nitrate), 11 particle-bound metals and 51 individual traffic-related organic compounds (including n-alkanes, PAHs, oxy-PAHs, hopanes, alkylcyclohexanes, and others) were included in the modeling.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Opletal, George; Drumm, Daniel W; Wang, Rong P; Russo, Salvy P
2014-07-03
Ternary glass structures are notoriously difficult to model accurately, yet prevalent in several modern endeavors. Here, a novel combination of Reverse Monte Carlo (RMC) modeling and ab initio molecular dynamics (MD) is presented, rendering these complicated structures computationally tractable. A case study (Ge6.25As32.5Se61.25 glass) illustrates the effects of ab initio MD quench rates and equilibration temperatures, and the combined approach's efficacy over standard RMC or random insertion methods. Sub-melting-point MD quenches achieve the most stable, realistic models, agreeing with both experimental and fully ab initio results. The simple approach of RMC followed by ab initio geometry optimization provides similar quality to the RMC-MD combination using far fewer resources.
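A schematic RMC step for context (generic, not the paper's code): random single-atom moves are accepted when they reduce the chi-squared misfit between the model's pair-distance histogram and an experimental target, or occasionally otherwise:

```python
import numpy as np

def pair_histogram(pos, box, bins):
    """Histogram of pair distances under periodic boundary conditions."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                       # minimum-image convention
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(len(pos), k=1)]
    return np.histogram(r, bins=bins)[0].astype(float)

def rmc(pos, box, bins, g_target, sigma=1.0, steps=5000, dmax=0.2,
        rng=np.random.default_rng(2)):
    g = pair_histogram(pos, box, bins)
    chi2 = ((g - g_target)**2).sum() / sigma**2
    for _ in range(steps):
        trial = pos.copy()
        i = rng.integers(len(pos))
        trial[i] = (trial[i] + rng.uniform(-dmax, dmax, 3)) % box
        g_new = pair_histogram(trial, box, bins)
        chi2_new = ((g_new - g_target)**2).sum() / sigma**2
        # Metropolis-like acceptance on the misfit rather than an energy
        if chi2_new < chi2 or rng.random() < np.exp(-(chi2_new - chi2) / 2):
            pos, chi2 = trial, chi2_new
    return pos
```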
Webcam camera as a detector for a simple lab-on-chip time based approach.
Wongwilai, Wasin; Lapanantnoppakhun, Somchai; Grudpan, Supara; Grudpan, Kate
2010-05-15
A modification of a webcam camera for use as a small and low-cost detector was demonstrated with a simple lab-on-chip reactor. Real-time continuous monitoring of the reaction zone could be done. Acid-base neutralization with phenolphthalein indicator was used as a model reaction. The fading of the pink color of the indicator as the acidic solution diffused into the basic solution zone was recorded as the change in red, green, and blue color components (%RGB). The change was related to acid concentration. A low-cost, portable, semi-automated analysis system was achieved.
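A minimal sketch of the detection idea with OpenCV, assuming a hypothetical region of interest positioned over the chip's reaction zone; this is an illustration of the time-based %RGB readout, not the authors' software:

```python
import time
import cv2

cap = cv2.VideoCapture(0)                   # default webcam
y0, y1, x0, x1 = 100, 140, 200, 260         # illustrative ROI over the chip channel
t_start = time.time()
for _ in range(300):                         # monitoring loop
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[y0:y1, x0:x1]
    b, g, r = roi.reshape(-1, 3).mean(axis=0)    # OpenCV stores pixels as BGR
    print(f"{time.time()-t_start:.1f}s  R={r:.1f} G={g:.1f} B={b:.1f}")
cap.release()
```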
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru
We determine the region of applicability of the finite-temperature Thomas-Fermi model and its thermal part with respect to quantum and exchange corrections. Very high accuracy of computation has been achieved by using a special approach for the solution of the boundary problem and for numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. We also offer simple approximations of the boundaries of validity for practical applications.
Diverging patterns with endogenous labor migration.
Reichlin, P; Rustichini, A
1998-05-05
"The standard neoclassical model cannot explain persistent migration flows and lack of cross-country convergence when capital and labor are mobile. Here we present a model where both phenomena may take place.... Our model is based on the Arrow-Romer approach to endogenous growth theory. We single out the importance of a (however weak) scale effect from the size of the workforce.... The main conclusion of this simple model is that lack of convergence, or even divergence, among countries is possible, even with perfect capital mobility and labor mobility." excerpt
Unity of quarks and leptons at the TeV scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foot, R.; Lew, H.
1990-08-01
The gauge group (SU(3))² ⊗ (SU(2))² ⊗ (U(1)_{Y'})³ supplemented by quark-lepton, left-right, and generation discrete symmetries represents a new approach to the understanding of the particle content of the standard model. In particular, as a result of the large number of symmetries, the fermion sector of the model is very simple. After symmetry breaking, the standard model can be shown to emerge from this highly symmetric model at low energies.
Thermal performance modeling of NASA s scientific balloons
NASA Astrophysics Data System (ADS)
Franco, H.; Cathey, H.
The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment. The balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach or simplified representations of the balloon. These approaches have to date provided reasonable, but not very accurate, results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" add-on to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the model resolution required to produce usable data. The first-stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine the required model resolution. These models were then modified to include additional details such as load tapes. The second-stage analyses looked at natural-shape Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin-shaped balloons. The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first-order estimates and detailed full models. The development of the radiative environment and program input files, the development of the modeling techniques for balloons, and the development of appropriate data output handling techniques for both the raw data and data plots will be discussed. A general guideline to match predicted balloon performance with known flight data will also be presented. One long-term goal of this effort is to develop simplified approaches and techniques to include results in performance codes being developed.
A simple distributed sediment delivery approach for rural catchments
NASA Astrophysics Data System (ADS)
Reid, Lucas; Scherer, Ulrike
2014-05-01
The transfer of sediments from source areas to surface waters is a complex process. In process-based erosion models, sediment input is thus quantified by representing all relevant sub-processes such as detachment, transport and deposition of sediment particles along the flow path to the river. A successful application of these models requires, however, a large amount of spatially highly resolved data on physical catchment characteristics, which is only available for a few well-examined small catchments. For lack of appropriate models, the empirical Universal Soil Loss Equation (USLE) is widely applied to quantify the sediment production in meso- to large-scale basins. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). In these models, the SDR is related to data on morphological characteristics of the catchment such as average local relief, drainage density, proportion of depressions or soil texture. Some approaches include the relative distance between sediment source areas and the river channels. However, several studies showed that spatially lumped parameters describing the morphological characteristics are of only limited value in representing the factors that influence sediment transport at the catchment scale. Sediment delivery is controlled by the location of the sediment source areas in the catchment and the morphology along the flow path to the surface water bodies. This complex interaction of spatially varied physiographic characteristics cannot be adequately represented by lumped morphological parameters. The objective of this study is to develop a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in a catchment. We selected a small catchment located in an intensively cultivated loess region in Southwest Germany as the study area for the development of the SDR approach. The flow pathways were extracted in a geographic information system. Then the sediment delivery ratio for each source area was determined using an empirical approach considering the slope, morphology and land use properties along the flow path. As a benchmark for the calibration of the model parameters we used results of a detailed process-based erosion model available for the study area. Afterwards the approach was tested in larger catchments located in the same loess region.
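As a loose illustration of a flow-path-based SDR, the sketch below scores each source cell by a product of per-cell transport factors along its flow path to the channel; the functional form and coefficient are assumptions for illustration only, not the empirical approach calibrated in this study:

```python
import numpy as np

def sdr_along_path(slopes, roughness, lengths, k=0.1):
    """slopes, roughness, lengths: arrays over the cells of one flow path.
    Steeper, smoother, shorter paths deliver more sediment (illustrative form)."""
    factors = np.exp(-k * roughness * lengths / np.sqrt(np.maximum(slopes, 1e-3)))
    return float(np.prod(factors))

# e.g. a five-cell path from a field to the channel, 25 m cells,
# with one high-roughness cell standing in for a grass buffer strip
print(sdr_along_path(np.array([0.05, 0.08, 0.02, 0.10, 0.04]),
                     np.array([1.0, 1.0, 2.5, 0.8, 1.0]),
                     np.full(5, 25.0)))
```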
Multi-Purpose Enrollment Projections: A Comparative Analysis of Four Approaches
ERIC Educational Resources Information Center
Allen, Debra Mary
2013-01-01
Providing support for institutional planning is central to the function of institutional research. Necessary for the planning process are accurate enrollment projections. The purpose of the present study was to develop a short-term enrollment model simple enough to be understood by those who rely on it, yet sufficiently complex to serve varying…
ERIC Educational Resources Information Center
Hourigan, Kristen Lee
2013-01-01
This article introduces a simple, flexible approach to engaging students within large classes, known as ARC (application, response, collaboration). ARC encourages each student's presence and engagement in class; creates a sense of excitement and anticipation; breaks down passivity and anonymity; effectively gains, maintains, and utilizes students'…
Climate analyses to assess risks from invasive forest insects: Simple matching to advanced models
Robert C. Venette
2017-01-01
Purpose of Review. The number of invasive alien insects that adversely affect trees and forests continues to increase as do associated ecological, economic, and sociological impacts. Prevention strategies remain the most cost-effective approach to address the issue, but risk management decisions, particularly those affecting international trade,...
ERIC Educational Resources Information Center
Harding, David J.; Gennetian, Lisa; Winship, Christopher; Sanbonmatsu, Lisa; Kling, Jeffrey R.
2010-01-01
We motivate future neighborhood research through a simple model that considers youth educational outcomes as a function of neighborhood context, neighborhood exposure, individual vulnerability to neighborhood effects, and non-neighborhood educational inputs--with a focus on effect heterogeneity. Research using this approach would require three…
A GENERATIVE SKETCH OF BURMESE.
ERIC Educational Resources Information Center
BURLING, ROBBINS
Assuming that a generative approach provides a fairly direct and simple description of linguistic data, the author takes a traditional Burmese grammar (W. Cornyn's "Outline of Burmese Grammar," referred to as OBG throughout the paper) and reworks it into a generative framework based on a model by Chomsky. The study is divided into five sections,…
Testing hypotheses for differences between linear regression lines
Stanley J. Zarnoch
2009-01-01
Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
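One common version of these tests, sketched with statsmodels: fit a full model with separate intercepts and slopes per group via an interaction term, a reduced model with a single common line, and compare them with an F-test; the data below are invented for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({"y": [2.1, 2.9, 4.2, 4.8, 1.5, 2.0, 2.4, 3.1],
                   "x": [1, 2, 3, 4, 1, 2, 3, 4],
                   "grp": ["a"]*4 + ["b"]*4})

reduced = smf.ols("y ~ x", data=df).fit()        # one line for both groups
full = smf.ols("y ~ x * grp", data=df).fit()     # separate intercepts and slopes
print(anova_lm(reduced, full))                   # F-test for any overall difference
```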
The Simple Analytics of Monetary Policy: A Post-Crisis Approach
ERIC Educational Resources Information Center
Friedman, Benjamin M.
2013-01-01
The standard workhorse models of monetary policy now commonly in use, both for teaching macro-economics to students and for supporting policymaking within many central banks, are incapable of incorporating the most widely accepted accounts of how the 2007-9 financial crisis occurred and are incapable too of analyzing the actions that monetary…
Self-Selection, Optimal Income Taxation, and Redistribution
ERIC Educational Resources Information Center
Amegashie, J. Atsu
2009-01-01
The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…
Estimating annual bole biomass production using uncertainty analysis
Travis J. Woolley; Mark E. Harmon; Kari B. O' Connell
2007-01-01
Two common sampling methodologies coupled with a simple statistical model were evaluated to determine the accuracy and precision of annual bole biomass production (BBP) and inter-annual variability estimates using this type of approach. We performed an uncertainty analysis using Monte Carlo methods in conjunction with radial growth core data from trees in three Douglas...
We present a simple approach to estimating ground-level fine particle (PM2.5, particles smaller than 2.5 um in diameter) concentration using global atmospheric chemistry models and aerosol optical thickness (AOT) measurements from the Multi- angle Imaging SpectroRadiometer (MISR)...
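The scaling commonly used in such approaches, as a one-line sketch: satellite AOT times the model-simulated ratio of surface PM2.5 to column AOT at that location and time; the values below are placeholders:

```python
def pm25_from_aot(aot_satellite, pm25_model, aot_model):
    """eta = model PM2.5 / model AOT converts a column AOT retrieval
    into a ground-level PM2.5 estimate (ug/m3)."""
    return aot_satellite * (pm25_model / aot_model)

print(pm25_from_aot(aot_satellite=0.25, pm25_model=12.0, aot_model=0.20))
```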
Realpe, Alba; Adams, Ann; Wall, Peter; Griffin, Damian; Donovan, Jenny L
2016-08-01
How a randomized controlled trial (RCT) is explained to patients is a key determinant of recruitment to that trial. This study developed and implemented a simple six-step model to fully inform patients and to support them in deciding whether to take part or not. Ninety-two consultations with 60 new patients were recorded and analyzed during a pilot RCT comparing surgical and nonsurgical interventions for hip impingement. Recordings were analyzed using techniques of thematic analysis and focused conversation analysis. Early findings supported the development of a simple six-step model to provide a framework for good recruitment practice. Model steps are as follows: (1) explain the condition, (2) reassure patients about receiving treatment, (3) establish uncertainty, (4) explain the study purpose, (5) give a balanced view of treatments, and (6) explain study procedures. There are also two elements throughout the consultation: (1) responding to patients' concerns and (2) showing confidence. The pilot study was successful, with 70% (n = 60) of patients approached across nine centers agreeing to take part in the RCT, so that the full-scale trial was funded. The six-step model provides a promising framework for successful recruitment to RCTs. Further testing of the model is now required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Predicting length of children's psychiatric hospitalizations: an "ecologic" approach.
Mossman, D; Songer, D A; Baker, D G
1991-08-01
This article describes the development and validation of a simple and modestly successful model for predicting inpatient length of stay (LOS) at a state-funded facility providing acute to long-term care for children and adolescents in Ohio. Six variables--diagnostic group, legal status at time of admission, attending physician, age, sex, and county of residence--explained 30% of the variation in log10LOS in the subgroup used to create the model, and 26% of log10LOS variation in the cross-validation subgroup. The model also identified LOS outliers with moderate accuracy (ROC area = 0.68-0.76). The authors attribute the model's success to inclusion of variables that are correlated with idiosyncratic "ecologic" factors as well as variables related to severity of illness. Future attempts to construct LOS models may adopt similar approaches.
New analytic results for speciation times in neutral models.
Gernhard, Tanja
2008-05-01
In this paper, we investigate the standard Yule model, and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic approach, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if no time scale is available for the reconstructed phylogenetic tree. A missing time scale could be due to supertree methods, morphological data, or molecular data which violates the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes which are conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
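As a point of contrast with the analytic results above, the following minimal Python sketch implements the simulation approach the paper improves upon, for the pure-birth (Yule) case only. It relies on the standard fact that a Yule process with rate lambda waits an Exponential(k*lambda) time while k lineages exist; all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def yule_speciation_times(n, lam, rng):
        # Pure-birth (Yule) process: while k lineages exist, the waiting
        # time to the next speciation is Exponential with rate k*lam.
        waits = rng.exponential(1.0 / (lam * np.arange(1, n)), size=n - 1)
        return np.cumsum(waits)  # times of the 1st..(n-1)th speciation events

    # Monte Carlo estimate of the mean speciation times for n = 10, lam = 1
    samples = np.array([yule_speciation_times(10, 1.0, rng) for _ in range(20000)])
    print(samples.mean(axis=0))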
Greedy algorithms and Zipf laws
NASA Astrophysics Data System (ADS)
Moran, José; Bouchaud, Jean-Philippe
2018-04-01
We consider a simple model of firm/city/etc. growth based on a multi-item criterion: whenever entity B fares better than entity A on a subset of M items out of K, the agent originally in A moves to B. We solve the model analytically in the cases K = 1 and K → ∞. The resulting stationary distribution of sizes is generically a Zipf law provided M > K/2. When M < K/2, no selection occurs and the size distribution remains thin-tailed. In the special case M = K, one needs to regularize the problem by introducing a small ‘default’ probability ϕ. We find that the stationary distribution has a power-law tail that becomes a Zipf law as ϕ → 0. The approach to the stationary state can also be characterized, with strong similarities with a simple ‘aging’ model considered by Barrat and Mézard.
Detonation product EOS studies: Using ISLS to refine CHEETAH
NASA Astrophysics Data System (ADS)
Zaug, Joseph; Fried, Larry; Hansen, Donald
2001-06-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Control-structure interaction study for the Space Station solar dynamic power module
NASA Technical Reports Server (NTRS)
Cheng, J.; Ianculescu, G.; Ly, J.; Kim, M.
1991-01-01
The authors investigate the feasibility of using a conventional PID (proportional plus integral plus derivative) controller design to perform the pointing and tracking functions for the Space Station Freedom solar dynamic power module. Using this simple controller design, the control/structure interaction effects were also studied without assuming frequency bandwidth separation. The results suggest the feasibility of a simple solar dynamic control solution with a reduced-order model that satisfies the basic system pointing and stability requirements. However, the conventional control design approach is shown to be strongly influenced by the order of reduction of the plant model, i.e., the number of elastic modes retained from the full-order model. This suggests that, for complex large space structures such as the Space Station Freedom solar dynamic module, the conventional control system design methods may not be adequate.
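For readers unfamiliar with the controller family discussed, here is a minimal sketch of a discrete-time PID loop acting on a hypothetical double-integrator pointing axis; the plant, time step, and gains are illustrative stand-ins, not the Space Station model.

    import numpy as np

    # Hypothetical rigid-body pointing axis modeled as a double integrator,
    # discretized with time step dt; gains are illustrative only.
    dt, kp, ki, kd = 0.01, 4.0, 0.5, 1.5
    theta, omega, integ, prev_err = 0.0, 0.0, 0.0, 0.0
    target = 1.0  # pointing setpoint (rad)

    for step in range(2000):
        err = target - theta
        integ += err * dt
        deriv = (err - prev_err) / dt
        torque = kp * err + ki * integ + kd * deriv   # PID control law
        prev_err = err
        omega += torque * dt        # plant: torque -> angular acceleration
        theta += omega * dt

    print(round(theta, 4))  # should settle near the 1.0 rad setpoint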
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed random- and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M
2010-06-01
Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations in these analytical models are highlighted in simple diffusion weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced and the basic reliance of the cylinder model and other ADC-based approaches on a Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs. (c) 2010 Elsevier Inc. All rights reserved.
Giovannini, Giannina; Sbarciog, Mihaela; Steyer, Jean-Philippe; Chamy, Rolando; Vande Wouwer, Alain
2018-05-01
Hydrogen has been found to be an important intermediate during anaerobic digestion (AD) and a key variable for process monitoring as it gives valuable information about the stability of the reactor. However, simple dynamic models describing the evolution of hydrogen are not commonplace. In this work, such a dynamic model is derived using a systematic data-driven approach, which consists of a principal component analysis to deduce the dimension of the minimal reaction subspace explaining the data, followed by an identification of the kinetic parameters in the least-squares sense. The procedure requires the availability of informative data sets. When the available data does not fulfill this condition, the model can still be built from simulated data, obtained using a detailed model such as ADM1. This dynamic model could be exploited in monitoring and control applications after a re-identification of the parameters using actual process data. As an example, the model is used in the framework of a control strategy, and is also fitted to experimental data from raw industrial wine processing wastewater. Copyright © 2018 Elsevier Ltd. All rights reserved.
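A minimal sketch of the dimension-determination step described above, using synthetic concentration data generated by two hypothetical latent reactions; PCA via the SVD counts how many components are needed to explain the data, giving the dimension of the minimal reaction subspace.

    import numpy as np

    # Hypothetical concentration data: rows = time samples, columns = measured
    # species (e.g., COD, VFA, biomass, hydrogen). Values are synthetic.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 200)
    latent = np.column_stack([np.exp(-0.3 * t), 1 - np.exp(-0.05 * t)])  # 2 reaction extents
    mixing = rng.normal(size=(2, 5))
    X = latent @ mixing + 0.01 * rng.normal(size=(200, 5))  # 5 species, noise

    Xc = X - X.mean(axis=0)                 # center before PCA
    s = np.linalg.svd(Xc, compute_uv=False) # singular values
    explained = np.cumsum(s**2) / np.sum(s**2)
    n_reactions = int(np.searchsorted(explained, 0.99) + 1)
    print(n_reactions)  # ~2: dimension of the minimal reaction subspace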
Some research perspectives in galloping phenomena: critical conditions and post-critical behavior
NASA Astrophysics Data System (ADS)
Piccardo, Giuseppe; Pagnini, Luisa Carlotta; Tubino, Federica
2015-01-01
This paper gives an overview of wind-induced galloping phenomena, describing its manifold features and the many advances that have taken place in this field. Starting from a quasi-steady model of the aeroelastic forces exerted by the wind on a rigid cylinder with three degrees of freedom, two translations and a rotation in the plane of the model cross section, the fluid-structure interaction forces are described in simple terms, yet in a way suited to the complexity of mechanical systems, both in the linear and in the nonlinear field, thus allowing investigation of a wide range of structural typologies and their dynamic behavior. The paper is driven by some key concerns. A great effort is made in underlining the strengths and weaknesses of the classic quasi-steady theory, as well as of the simplistic assumptions that are introduced in order to investigate such complex phenomena through simple engineering models. A second aspect, which is crucial to the authors' approach, is to take into account and harmonize the engineering, physical and mathematical perspectives in an interdisciplinary way, something which does not happen often. The authors underline that the quasi-steady approach is an irreplaceable tool, though approximate and simple, for performing engineering analyses; at the same time, the study of this phenomenon gives rise to numerous problems that make the application of high-level mathematical solutions particularly attractive. Finally, the paper discusses a wide range of features of the galloping theory and its practical use which deserve further attention and refinement, pointing to the great potential represented by new fields of application and advanced analysis tools.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems, those with large hyper-volumes and multi-mode search spaces containing a large number of genes, require a large number of function evaluations for GA convergence, but they always converge.
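A minimal sketch in the spirit of the model problem described, under the assumption of a toy two-gene search space with three Gaussian "hills"; this is a generic genetic algorithm with truncation selection, uniform crossover, and Gaussian mutation, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(x):
        # Multi-modal "hills" landscape: a sum of Gaussian bumps (illustrative).
        centers = np.array([[0.2, 0.8], [0.7, 0.3], [0.5, 0.5]])
        heights = np.array([0.8, 0.9, 1.0])
        d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
        return float((heights * np.exp(-d2 / 0.005)).max())

    pop = rng.random((40, 2))                         # 40 individuals, 2 "genes"
    for gen in range(100):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-20:]]          # truncation selection
        kids = parents[rng.integers(0, 20, 40)].copy()
        mask = rng.random(kids.shape) < 0.5           # uniform crossover
        kids[mask] = parents[rng.integers(0, 20, 40)][mask]
        kids += rng.normal(0, 0.02, kids.shape)       # Gaussian mutation
        pop = np.clip(kids, 0, 1)

    best = max(pop, key=fitness)
    print(best, fitness(best))  # should approach the global mode near (0.5, 0.5)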
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and a more refined FEM model.
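A one-dimensional sketch of the idea, with hypothetical stand-ins for the crude and refined models: the scaling factor beta(x) = f_refined(x)/f_crude(x) is linearized about x0 and applied to the crude model, which is the GLA refinement of constant-beta scaling.

    import numpy as np

    # Illustrative 1-D stand-ins for a crude and a refined structural model.
    f_crude   = lambda x: 1.0 + 0.5 * x
    f_refined = lambda x: 1.0 + 0.55 * x + 0.08 * x**2

    x0, h = 1.0, 1e-5
    beta = lambda x: f_refined(x) / f_crude(x)          # scaling factor
    beta0 = beta(x0)
    dbeta = (beta(x0 + h) - beta(x0 - h)) / (2 * h)     # finite-difference slope

    def gla(x):
        # Global-local approximation: a linearly varying scaling factor
        # applied to the crude model (constant-beta scaling would drop dbeta).
        return (beta0 + dbeta * (x - x0)) * f_crude(x)

    for x in (1.0, 1.5, 2.0):
        print(x, gla(x), f_refined(x))  # GLA tracks the refined model near x0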
Exergetic simulation of a combined infrared-convective drying process
NASA Astrophysics Data System (ADS)
Aghbashlo, Mortaza
2016-04-01
Optimal design and performance of a combined infrared-convective drying system with respect to energy use depend strongly on the application of advanced engineering analyses. This article proposes a theoretical approach for exergy analysis of the combined infrared-convective drying process using a simple heat and mass transfer model. The applicability of the developed model to actual drying processes was demonstrated using an illustrative example for a typical food.
A hybrid multigroup neutron-pattern model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
In this paper, we use a general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economical and simple iterative method of solving them. The algorithm can be used to calculate the pattern and its functionals, to correct the constants from experimental data, and to adapt the constant support of engineering programs by reference to precision ones.
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
NASA Astrophysics Data System (ADS)
Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan
2005-12-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
Norman, Laura M.
2007-01-01
Ecological considerations need to be interwoven with economic policy and planning along the United States-Mexican border. Non-point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict nonpoint source pollution that can be used for border watersheds. The modeling approach links a hillslope-scale erosion-prediction model and a spatially derived sediment-delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation-planning problem.
PoMo: An Allele Frequency-Based Approach for Species Tree Estimation
De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin
2015-01-01
Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within- and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413
Simple versus complex models of trait evolution and stasis as a response to environmental change
NASA Astrophysics Data System (ADS)
Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott
2015-04-01
Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.
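To make the model-comparison step concrete, here is a minimal sketch that fits two of the standard modes, stasis and an unbiased random walk, to a trait series by maximum likelihood and compares them by AIC; the data are synthetic and the likelihoods are the usual Gaussian ones, not the authors' expanded model set.

    import numpy as np

    rng = np.random.default_rng(3)
    x = np.cumsum(rng.normal(0, 0.1, 60))   # synthetic trait series (a random walk)

    def aic_stasis(x):
        # Stasis: each sample iid Normal(theta, omega); 2 parameters.
        var = x.var()
        ll = -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)
        return 2 * 2 - 2 * ll

    def aic_walk(x):
        # Unbiased random walk: increments iid Normal(0, s2); 1 parameter.
        d = np.diff(x)
        s2 = (d ** 2).mean()
        ll = -0.5 * len(d) * (np.log(2 * np.pi * s2) + 1)
        return 2 * 1 - 2 * ll

    print(aic_stasis(x), aic_walk(x))  # lower AIC should favor the random walk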
Kim, Y S; Balland, V; Limoges, B; Costentin, C
2017-07-21
Cyclic voltammetry is a particularly useful tool for characterizing charge accumulation in conductive materials. A simple model is presented to evaluate proton transport effects on charge storage in conductive materials associated with a redox process coupled with proton insertion into the bulk material from an aqueous buffered solution, a situation frequently encountered in metal oxide materials. The interplay between proton transport inside and outside the materials is described by formulating the problem in dimensionless variables, which allows defining the minimum number of parameters governing the cyclic voltammetry response, together with a simple description of the system geometry. This approach is illustrated by analysis of proton insertion in a mesoporous TiO2 film.
Generation of multicellular tumor spheroids by the hanging-drop method.
Timmins, Nicholas E; Nielsen, Lars K
2007-01-01
Owing to their in vivo-like characteristics, three-dimensional (3D) multicellular tumor spheroid (MCTS) cultures are gaining increasing popularity as an in vitro model of tumors. A straightforward and simple approach to the cultivation of these MCTS is the hanging-drop method. Cells are suspended in droplets of medium, where they develop into coherent 3D aggregates and are readily accessed for analysis. In addition to being simple, the method eliminates surface interactions with an underlying substratum (e.g., polystyrene plastic or agarose), requires only a low number of starting cells, and is highly reproducible. This method has also been applied to the co-cultivation of mixed cell populations, including the co-cultivation of endothelial cells and tumor cells as a model of early tumor angiogenesis.
NASA Astrophysics Data System (ADS)
Brunger, M. J.; Thorn, P. A.; Campbell, L.; Kato, H.; Kawahara, H.; Hoshino, M.; Tanaka, H.; Kim, Y.-K.
2008-05-01
We consider the efficacy of the BEf-scaling approach in calculating reliable integral cross sections for electron impact excitation of dipole-allowed electronic states in molecules. We will demonstrate, using specific examples in H2, CO and H2O, that this relatively simple procedure can generate quite accurate integral cross sections which compare well with available experimental data. Finally, we will briefly consider the ramifications of this for atmospheric and other types of modelling studies.
On the "Matrix Approach" to Interacting Particle Systems
NASA Astrophysics Data System (ADS)
de Sanctis, L.; Isopi, M.
2004-04-01
Derrida et al. and Schütz and Stinchcombe gave algebraic formulas for the correlation functions of the partially asymmetric simple exclusion process. Here we give a fairly general recipe for obtaining these formulas and extend them to the whole time evolution (starting from the generator of the process), for a certain class of interacting systems. We then analyze the algebraic relations obtained to show that the matrix approach does not work with some models, such as the voter and the contact processes.
NASA Astrophysics Data System (ADS)
Hattori, Y.; Ushiki, H.; Engl, W.; Courbin, L.; Panizza, P.
2005-08-01
Within the framework of an effective medium approach and a mean-field approximation, we present a simple lattice model to treat electrical percolation in the presence of attractive interactions. We show that the percolation line depends on the magnitude of interactions. In 2 dimensions, the percolation line meets the binodal line at the critical point. A good qualitative agreement is observed with experimental results on a ternary AOT-based water-in-oil microemulsion system.
Baciocchi, Renato; Berardi, Simona; Verginelli, Iason
2010-09-15
Clean-up of contaminated sites is usually based on a risk-based approach for the definition of the remediation goals, which relies on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models and the source contaminants' concentration is assumed constant throughout the entire exposure period, i.e. 25-30 years. The latter assumption may often prove over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model that takes into account source depletion while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained by the application of this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software, and by a numerical model, allowing assessment of its feasibility for inclusion in risk analysis procedures. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the proposed approach can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
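A back-of-envelope sketch of why source depletion matters for the exposure term: with a hypothetical first-order depletion rate, the time-averaged concentration over the exposure duration falls well below the constant-source value assumed in the standard ASTM-RBCA procedure.

    import numpy as np

    C0, ED = 100.0, 25.0      # source concentration (ug/L), exposure duration (yr)
    k = 0.2                   # hypothetical first-order source-depletion rate (1/yr)

    avg_constant = C0                                      # constant-source assumption
    avg_depleting = C0 * (1 - np.exp(-k * ED)) / (k * ED)  # time-averaged decaying source

    print(avg_constant, round(avg_depleting, 1))
    # The depleting source yields a ~5x lower average exposure concentration,
    # hence less restrictive (higher) risk-based remediation goals.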
Winkelmann, Stefanie; Schütte, Christof
2017-09-21
Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.
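For the gene-expression example mentioned, the exact discrete dynamics that the hybrid schemes approximate can be simulated with Gillespie's stochastic simulation algorithm; the following minimal sketch uses a self-repressing birth-death model with illustrative rate constants.

    import numpy as np

    rng = np.random.default_rng(4)

    # Gillespie SSA for a minimal self-repressing gene: protein P is produced
    # at rate k1/(1 + P/K) (negative feedback) and degraded at rate k2*P.
    k1, k2, K = 10.0, 0.1, 20.0
    t, P, t_end = 0.0, 0, 500.0
    trace = []
    while t < t_end:
        a = np.array([k1 / (1 + P / K), k2 * P])   # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1 / a0)               # time to next reaction
        if rng.random() < a[0] / a0:               # choose which reaction fires
            P += 1
        else:
            P -= 1
        trace.append(P)

    print(np.mean(trace[len(trace)//2:]))  # stationary mean copy number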
The rank correlated SLW model of gas radiation in non-uniform media
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.
2017-08-01
A comprehensive theoretical development of possible reference approaches to modelling radiation transfer in non-uniform gaseous media is presented within the framework of the Generalized SLW Model. The notion of absorption spectrum "correlation" currently adopted for global methods in gas radiation is critically revisited and replaced by a less restrictive concept of rank correlated spectrum. Within this framework it is shown that eight different reference approaches are possible, of which only three have been reported in the literature. Among the approaches presented is a novel Rank Correlated SLW Model, which is distinguished by the fact that (i) it does not require the specification of a reference gas thermodynamic state, and (ii) it preserves the emission term in the spectrally integrated Radiative Transfer Equation. Construction of this reference model requires only two absorption line blackbody distribution functions, and subdivision into gray gases can be performed using standard quadratures. Consequently, this new reference approach appears to have significant advantages over all other methods and is, in general, a significant improvement in the global modelling of gas radiation. All reference approaches are summarized in the present work, and their use in radiative transfer prediction is demonstrated for simple example cases. Further, a detailed rigorous theoretical development of the improved methods is provided.
NASA Technical Reports Server (NTRS)
Ball, Danny (Technical Monitor); Pagitz, M.; Pellegrino, Xu S.
2004-01-01
This paper presents a computational study of the stability of simple lobed balloon structures. Two approaches are presented, one based on a wrinkled material model and one based on a variable Poisson's ratio model that eliminates compressive stresses iteratively. The first approach is used to investigate the stability of both a single isotensoid and a stack of four isotensoids, for perturbations of infinitesimally small amplitude. It is found that both structures are stable for global deformation modes, but unstable for local modes at sufficiently large pressure. Both structures are stable if an isotropic model is assumed. The second approach is used to investigate the stability of the isotensoid stack for large shape perturbations, taking into account contact between different surfaces. For this structure a distorted, stable configuration is found. It is also found that the volume enclosed by this configuration is smaller than that enclosed by the undistorted structure.
Systemic Analysis Approaches for Air Transportation
NASA Technical Reports Server (NTRS)
Conway, Sheila
2005-01-01
Air transportation system designers have had only limited success using traditional operations research and parametric modeling approaches in their analyses of innovations. They need a systemic methodology for modeling of safety-critical infrastructure that is comprehensive, objective, and sufficiently concrete, yet simple enough to be used with reasonable investment. The methodology must also be amenable to quantitative analysis so issues of system safety and stability can be rigorously addressed. However, air transportation has proven itself an extensive, complex system whose behavior is difficult to describe, much less predict. There is a wide range of system analysis techniques available, but some are more appropriate for certain applications than others. Specifically in the area of complex system analysis, the literature suggests that both agent-based models and network analysis techniques may be useful. This paper discusses the theoretical basis for each approach in these applications, and explores their historic and potential further use for air transportation analysis.
Scripting MODFLOW model development using Python and FloPy
Bakker, Mark; Post, Vincent E. A.; Langevin, Christian D.; Hughes, Joseph D.; White, Jeremy; Starn, Jeffrey; Fienen, Michael N.
2016-01-01
Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy.
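A minimal sketch of the scripted workflow described, assuming FloPy is installed and a MODFLOW-2005 executable named mf2005 is on the system path; grid dimensions, stresses, and file names are illustrative.

    import flopy

    # Minimal single-layer steady-state MODFLOW-2005 model.
    m = flopy.modflow.Modflow("demo", exe_name="mf2005")
    dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                                   delr=100.0, delc=100.0, top=10.0, botm=0.0)
    ibound = [[[1] * 10] * 10]          # all cells active
    bas = flopy.modflow.ModflowBas(m, ibound=ibound, strt=10.0)
    lpf = flopy.modflow.ModflowLpf(m, hk=5.0)           # hydraulic conductivity
    wel = flopy.modflow.ModflowWel(m, stress_period_data={0: [[0, 5, 5, -500.0]]})
    chd = flopy.modflow.ModflowChd(m, stress_period_data={0: [[0, 0, 0, 10.0, 10.0]]})
    pcg = flopy.modflow.ModflowPcg(m)                   # solver
    oc = flopy.modflow.ModflowOc(m)                     # output control
    m.write_input()
    success, _ = m.run_model(silent=True)

    # Post-process: read simulated heads back into a NumPy array.
    heads = flopy.utils.HeadFile("demo.hds").get_data()
    print(heads.shape, heads.min())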
Dynamics of Zika virus outbreaks: an overview of mathematical modeling approaches.
Wiratsudakul, Anuwat; Suparit, Parinya; Modchang, Charin
2018-01-01
The Zika virus was first discovered in 1947. It was neglected until a major outbreak occurred on Yap Island, Micronesia, in 2007. Teratogenic effects resulting in microcephaly in newborn infants are the greatest public health threat. In 2016, the Zika virus epidemic was declared a Public Health Emergency of International Concern (PHEIC). Consequently, mathematical models were constructed to explicitly elucidate related transmission dynamics. In this review article, two steps of journal article searching were performed. First, we attempted to identify mathematical models previously applied to the study of vector-borne diseases using the search terms "dynamics," "mathematical model," "modeling," and "vector-borne" together with the names of vector-borne diseases including chikungunya, dengue, malaria, West Nile, and Zika. Then the identified types of model were further investigated. Second, we narrowed down our survey to focus on only Zika virus research. The terms we searched for were "compartmental," "spatial," "metapopulation," "network," "individual-based," "agent-based" AND "Zika." All relevant studies were included regardless of the year of publication. We have collected research articles that were published before August 2017 based on our search criteria. In this publication survey, we explored the Google Scholar and PubMed databases. We found five basic model architectures previously applied to vector-borne virus studies, particularly in Zika virus simulations. These include compartmental, spatial, metapopulation, network, and individual-based models. We found that Zika models carried out for early epidemics were mostly fit into compartmental structures and were less complicated compared to the more recent ones. Simple models are still commonly used for the timely assessment of epidemics. Nevertheless, due to the availability of large-scale real-world data and computational power, recently there has been growing interest in more complex modeling frameworks. Mathematical models are employed to explore and predict how an infectious disease spreads in the real world, evaluate the disease importation risk, and assess the effectiveness of intervention strategies. As the trends in modeling of infectious diseases have been shifting towards data-driven approaches, simple and complex models should be exploited differently. Simple models can be produced in a timely fashion to provide an estimation of the possible impacts. In contrast, complex models integrating real-world data require more time to develop but are far more realistic. The preparation of complicated modeling frameworks prior to the outbreaks is recommended, including the case of future Zika epidemic preparation.
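As a concrete instance of the compartmental architecture that dominated early Zika modeling, here is a minimal vector-host SEIR/SEI sketch; the structure is the standard Ross-Macdonald-style coupling and all parameter values are illustrative, not calibrated to any outbreak.

    import numpy as np
    from scipy.integrate import odeint

    # Minimal vector-host SEIR-SEI sketch; parameters are illustrative only.
    beta_hv, beta_vh = 0.4, 0.3   # mosquito->human, human->mosquito rates
    sigma_h, gamma_h = 1/5.9, 1/7 # human incubation and recovery rates
    sigma_v, mu_v = 1/10, 1/14    # mosquito incubation and mortality rates

    def deriv(y, t):
        Sh, Eh, Ih, Rh, Sv, Ev, Iv = y
        Nh, Nv = Sh + Eh + Ih + Rh, Sv + Ev + Iv
        new_h = beta_hv * Sh * Iv / Nh        # human infections from mosquitoes
        new_v = beta_vh * Sv * Ih / Nh        # mosquito infections from humans
        return [-new_h, new_h - sigma_h * Eh, sigma_h * Eh - gamma_h * Ih,
                gamma_h * Ih,
                mu_v * Nv - new_v - mu_v * Sv, new_v - (sigma_v + mu_v) * Ev,
                sigma_v * Ev - mu_v * Iv]

    y0 = [9999, 0, 1, 0, 20000, 0, 0]
    t = np.linspace(0, 365, 366)
    sol = odeint(deriv, y0, t)
    print(sol[:, 2].max())   # peak human infectious prevalence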
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-01-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key Points: (1) Solar wind models must be downscaled in order to drive magnetospheric models. (2) Ensemble downscaling is more effective than deterministic downscaling. (3) The magnetosphere responds nonlinearly to small-scale solar wind fluctuations. PMID:26213518
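A minimal sketch of the downscaling idea on synthetic data: a hypothetical "observed" series is smoothed with an 8 h boxcar to stand in for model output, and an ensemble is built by resampling the residuals; this noise parameterization is even simpler than the distribution-function approach the authors describe.

    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-in for solar wind observations: hourly values with large-scale
    # structure plus small-scale noise (synthetic, for illustration only).
    t = np.arange(24 * 27)                       # one 27-day solar rotation, hourly
    obs = 400 + 50 * np.sin(2 * np.pi * t / (24 * 9)) + rng.normal(0, 20, t.size)

    # "Model output": an 8-h boxcar smoothing of the observations.
    kernel = np.ones(8) / 8
    model = np.convolve(obs, kernel, mode="same")

    # Downscaling ensemble: add back noise drawn from the empirical
    # distribution of the residuals (a very simple parameterization).
    resid = obs - model
    ensemble = model + rng.choice(resid, size=(20, model.size), replace=True)
    print(ensemble.shape, ensemble.std(axis=0).mean())  # spread quantifies uncertainty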
Multi-Detection Events, Probability Density Functions, and Reduced Location Area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Schrom, Brian T.
2016-03-01
Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach, rather than a simple dilution field-of-regard approach, to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.
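A toy sketch of the Bayesian combination idea: on a hypothetical grid of candidate source cells, detections multiply the posterior by each cell's detection probability and non-detections by its complement, shrinking the location area; the Gaussian detection kernel stands in for real ATM backtrack fields.

    import numpy as np

    # Toy Bayesian source location on a grid: uniform prior over cells;
    # a surrogate kernel gives P(detection at station | source cell).
    nx = ny = 50
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
    posterior = np.ones((ny, nx)) / (nx * ny)

    stations = [((10, 10), True), ((40, 15), True), ((25, 45), False)]
    for (sx, sy), detected in stations:
        d2 = (xs - sx) ** 2 + (ys - sy) ** 2
        p = np.exp(-d2 / 200.0)            # hypothetical detection probability
        posterior *= p if detected else (1 - p)
    posterior /= posterior.sum()

    # Reduced location area: smallest set of cells holding 90% probability.
    sorted_p = np.sort(posterior.ravel())[::-1]
    n90 = int(np.searchsorted(np.cumsum(sorted_p), 0.9) + 1)
    print(n90, "of", nx * ny, "cells contain 90% of the probability")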
Interpretable Deep Models for ICU Outcome Prediction
Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan
2016-01-01
Exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from intensive care units (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack the interpretability which is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as deep learning models. Experimental results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches for mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832
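A minimal sketch of the mimic-learning idea on synthetic data: a neural network is fitted first, and gradient boosting trees are then trained to reproduce its soft predictions; model choices and hyperparameters are illustrative, not those of the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Knowledge distillation: fit a "deep" model, then train gradient
    # boosting trees on its soft predictions (interpretable mimic learning).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=0).fit(X_tr, y_tr)
    soft = deep.predict_proba(X_tr)[:, 1]          # soft labels from the deep model

    mimic = GradientBoostingRegressor(random_state=0).fit(X_tr, soft)

    acc = ((mimic.predict(X_te) > 0.5) == y_te).mean()
    print(acc)  # mimic accuracy; feature_importances_ give interpretability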
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty proportional to observation uncertainty by a factor that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help to mitigate, and perhaps resolve, the problem of bias export to sparse data streams.
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented rather than developing a new clinical prediction model from scratch, using a breadth of complementary statistical methods.
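A minimal sketch of the simplest strategy in the first category, logistic recalibration: the existing model's linear predictor is kept and only an intercept and calibration slope are re-estimated on the new population; the coefficients and data here are synthetic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)

    # An old model's coefficients (hypothetical) applied to a new population
    # whose baseline risk differs: recalibrate intercept and slope only.
    beta_old = np.array([0.8, -0.5, 0.3])
    X_new = rng.normal(size=(500, 3))
    lp = X_new @ beta_old                         # old model's linear predictor
    p_true = 1 / (1 + np.exp(-(0.7 * lp - 1.0)))  # miscalibrated in new setting
    y_new = rng.random(500) < p_true

    # "Simple coefficient updating": logistic regression of outcome on lp.
    recal = LogisticRegression().fit(lp.reshape(-1, 1), y_new)
    print(recal.intercept_, recal.coef_)  # updated intercept and calibration slope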
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crull, E W; Brown Jr., C G; Perkins, M P
2008-07-30
For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions for the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside of their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analytical models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model in a laboratory study in order to understand the indirect coupling of energy into a part, and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas and within about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside of their region of validity. First, the antenna is insulated rather than a bare wire, and there are perhaps fringing field effects near the termination of the outer conductor that the formula does not take into account. Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5 inches (refer to Figure 5).
NASA Astrophysics Data System (ADS)
Aronica, G. T.; Candela, A.
2007-12-01
In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one with a limited number of parameters that requires practically no calibration, making it a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from the distributions of the model variables by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with antecedent moisture conditions (AMC). The application of this procedure showed that the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
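A stripped-down sketch of the Monte Carlo chain described, with a Gumbel storm-depth generator standing in for the regional TCEV model and the standard SCS-CN runoff transformation applied under randomly sampled antecedent moisture conditions; routing to peak flow is omitted, so the output is a derived runoff frequency curve.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 5000

    # Monte Carlo derived frequency curve: sample storm depth and an
    # antecedent-moisture-dependent CN, convert to runoff via SCS-CN.
    P = rng.gumbel(40.0, 15.0, n)                  # storm depth (mm); stand-in
    P = np.clip(P, 1.0, None)                      # for the regional TCEV model
    CN = rng.choice([60, 75, 88], n, p=[0.3, 0.5, 0.2])  # AMC I/II/III conditions
    S = 25400.0 / CN - 254.0                       # potential retention (mm)
    Ia = 0.2 * S                                   # initial abstraction
    Q = np.where(P > Ia, (P - Ia) ** 2 / (P + 0.8 * S), 0.0)  # runoff depth (mm)

    # Empirical runoff quantiles vs return period T = 1/(1 - F)
    Qs = np.sort(Q)
    for T in (10, 50, 100):
        print(T, round(Qs[int(n * (1 - 1 / T))], 1))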
Bridging the scales in a eulerian air quality model to assess megacity export of pollution
NASA Astrophysics Data System (ADS)
Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.
2013-08-01
In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large- and small-scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists in the introduction of local zooms in a single chemistry-transport simulation. It allows online bridging of the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the BeNeLux city cluster, NO2 and O3 scores are improved, NO2 variability around BeNeLux is better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for apprehending the hot topic of megacities within their continental environment.
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
Word of Mouth : An Agent-based Approach to Predictability of Stock Prices
NASA Astrophysics Data System (ADS)
Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko
This paper addresses how communication processes among investors affect stock price formation in financial markets, especially the emergence of predictability in stock prices. An agent-based model, called the word-of-mouth model, is introduced for analyzing the problem. This model provides a simple but sufficiently versatile description of the informational diffusion process and lucidly explains the predictability of small-sized stocks, a stylized fact in financial markets that is difficult to reproduce with traditional models. Our model also provides a rigorous examination of the underreaction hypothesis to informational shocks.
A survey of commercial object-oriented database management systems
NASA Technical Reports Server (NTRS)
Atkins, John
1992-01-01
The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and the performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
A classification procedure for the effective management of changes during the maintenance process
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.
1992-01-01
During software operation, maintainers are often faced with numerous change requests. Given available resources such as effort and calendar time, changes, if approved, have to be planned to fit within budget and schedule constraints. In this paper, we address the issue of assessing the difficulty of a change based on known or predictable data. This paper should be considered as a first step towards the construction of customized economic models for maintainers. In it, we propose a modeling approach, based on regular statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated, and is simple for people with limited statistical experience to use. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach is validated on a data set provided by NASA/GSFC which shows it was effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed along with additional steps to improve the results.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
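The simplest of the four methods is easy to state in code. A minimal sketch of output ratio calibration, with hypothetical monthly values: scale the simulated consumption by the single ratio of total measured to total simulated energy use.

```python
# Minimal sketch of output ratio calibration: one scalar factor brings the
# simulated annual total into agreement with the utility bills. Values are
# hypothetical, not from the study.
import numpy as np

simulated = np.array([820, 760, 700, 640, 900, 1150,
                      1400, 1380, 1100, 780, 700, 810])   # kWh/month, model
measured = np.array([900, 830, 760, 700, 980, 1260,
                     1520, 1500, 1210, 860, 770, 890])    # kWh/month, bills

ratio = measured.sum() / simulated.sum()   # single scalar calibration factor
calibrated = simulated * ratio

print(f"calibration ratio: {ratio:.3f}")
print("monthly error after calibration (%):",
      np.round(100 * (calibrated - measured) / measured, 1))
```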
Adaptive correlation filter-based video stabilization without accumulative global motion estimation
NASA Astrophysics Data System (ADS)
Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil
2014-12-01
We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this model, the proposed method does not need accumulative global motion estimation and can recover the original position even after a failure in interframe motion estimation. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallel implementation, we readily implemented it on a commercial field-programmable gate array and on a graphics processing unit board using the compute unified device architecture (CUDA). Experimental results show that the proposed approach is both fast and robust.
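The core correlation operation underlying such stabilizers is compact. A minimal sketch (plain phase correlation, not the authors' adaptive filter): estimate the interframe translation from the peak of the inverse FFT of the normalized cross-power spectrum.

```python
# Minimal sketch: interframe translation via phase correlation, the basic
# correlation operation that correlation-filter stabilizers build on.
import numpy as np

def phase_correlation_shift(prev, curr):
    """Return shift (dy, dx) such that prev ~= np.roll(curr, (dy, dx))."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape)
    peak[peak > shape / 2] -= shape[peak > shape / 2]  # wrap to signed shifts
    return peak

rng = np.random.default_rng(1)
frame = rng.random((128, 128))
jittered = np.roll(frame, (3, -5), axis=(0, 1))    # simulated camera shake
print(phase_correlation_shift(jittered, frame))    # approx [ 3. -5.]
```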
Petersen, J.H.; DeAngelis, D.L.; Paukert, C.P.
2008-01-01
Many fish species are at risk to some degree, and conservation efforts are planned or underway to preserve sensitive populations. For many imperiled species, models could serve as useful tools for researchers and managers as they seek to understand individual growth, quantify predator-prey dynamics, and identify critical sources of mortality. Development and application of models for rare species, however, have been constrained by small population sizes, difficulty in obtaining sampling permits, limited opportunities for funding, and regulations on how endangered species can be used in laboratory studies. Bioenergetic and life history models should help with endangered-species recovery planning, since these types of models have been used successfully over the last 25 years to address management problems for many commercially and recreationally important fish species. In this paper we discuss five approaches to developing models and parameters for rare species. Borrowing model functions and parameters from related species is simple, but uncorroborated results can be misleading. Directly estimating parameters with laboratory studies may be possible for rare species that have locally abundant populations. Monte Carlo filtering can be used to estimate several parameters by performing simple laboratory growth experiments to first determine test criteria. Pattern-oriented modeling (POM) is a new and developing field of research that uses field-observed patterns to build, test, and parameterize models. Models developed using the POM approach are closely linked to field data, produce testable hypotheses, and require a close working relationship between modelers and empiricists. Artificial evolution in individual-based models can be used to gain insight into adaptive behaviors for poorly understood species and thus can fill in knowledge gaps. © Copyright by the American Fisheries Society 2008.
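Monte Carlo filtering, the third approach, is simple to sketch (a toy growth model and hypothetical acceptance criteria, not taken from the paper): draw parameter sets from prior ranges, simulate growth, and keep only the sets whose predictions satisfy criteria from laboratory experiments.

```python
# Minimal sketch of Monte Carlo filtering with a toy growth model
# dw/dt = a * w**b. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_draws, days = 100_000, 60
w0 = 5.0                                  # initial mass (g), hypothetical

# Prior ranges for the two growth parameters
a = rng.uniform(0.01, 0.2, n_draws)
b = rng.uniform(0.5, 0.9, n_draws)

# Integrate the toy growth model with daily Euler steps
w = np.full(n_draws, w0)
for _ in range(days):
    w += a * w**b

# Test criterion from a (hypothetical) lab experiment: final mass 20-25 g
accepted = (w > 20) & (w < 25)
print(f"accepted {accepted.sum()} of {n_draws} parameter sets")
print(f"a in [{a[accepted].min():.3f}, {a[accepted].max():.3f}], "
      f"b in [{b[accepted].min():.3f}, {b[accepted].max():.3f}]")
```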
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howe, Alex R.; Burrows, Adam; Deming, Drake, E-mail: arhowe@umich.edu, E-mail: burrows@astro.princeton.edu, E-mail: ddeming@astro.umd.edu
We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.
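The underlying information-theoretic scoring can be sketched minimally (a toy one-parameter stand-in, not the authors' pipeline): compute the information gain from prior to posterior on a parameter grid given one simulated observation.

```python
# Minimal sketch: information gain (KL divergence from prior to posterior)
# for a toy one-parameter "forward model". The observable and noise level
# are hypothetical placeholders.
import numpy as np

grid = np.linspace(500, 2500, 1001)          # equilibrium temperature grid (K)
prior = np.full_like(grid, 1 / grid.size)    # flat prior

def forward_model(T_eq):
    # Hypothetical scalar observable (e.g., an eclipse depth in ppm)
    return 100 + 0.05 * T_eq

true_T, sigma = 1500.0, 15.0                 # truth and measurement noise
obs = forward_model(true_T) + np.random.default_rng(0).normal(0, sigma)

likelihood = np.exp(-0.5 * ((obs - forward_model(grid)) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()

mask = posterior > 0
info_gain_bits = np.sum(posterior[mask] * np.log2(posterior[mask] / prior[mask]))
print(f"information gain: {info_gain_bits:.2f} bits")
```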
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The decaying terms usually added to stabilize the original Hebbian rule are avoided: thanks to the adopted network structure, implementing the basic Hebbian scheme does not lead to unrealistic growth of the synaptic strengths.
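A minimal sketch of a Hebbian principal-component neuron follows, using Oja's classic stabilized rule as a stand-in (the paper's feedback-averaging rule differs, but both target the leading principal component): the weight vector converges, up to sign, to the leading eigenvector of the input covariance.

```python
# Minimal sketch: a single linear neuron trained with Oja's rule extracts
# the first principal component of zero-mean inputs (up to sign).
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2-D inputs with a dominant direction
C = np.array([[3.0, 1.5], [1.5, 1.0]])              # covariance
X = rng.multivariate_normal([0, 0], C, size=20_000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)                      # Oja's stabilized Hebbian rule

# Compare against the leading eigenvector of the covariance
eigvals, eigvecs = np.linalg.eigh(C)
print("learned w (normalized):", w / np.linalg.norm(w))
print("true first PC        :", eigvecs[:, -1])
```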
Large-scale modelling of permafrost distribution in Ötztal, Pitztal and Kaunertal (Tyrol)
NASA Astrophysics Data System (ADS)
Hoinkes, S.; Sailer, R.; Lehning, M.; Steinkogler, W.
2012-04-01
Permafrost is an important element of the global cryosphere and is seriously affected by climate change. Because permafrost is a mostly invisible phenomenon, its area-wide distribution is not well known. Point measurements indicate whether permafrost is present at particular sites; for area-wide distribution mapping, models have to be built and applied. Different kinds of permafrost distribution models already exist, based on different approaches and levels of complexity. Differences between model approaches are mainly due to scaling issues, availability of input data and type of output parameters. In the work presented here, we map and model the distribution of permafrost in the most elevated parts of the Ötztal, Pitztal and Kaunertal, which are situated in the Eastern European Alps and cover an area of approximately 750 km². As air temperature is believed to be the best and simplest proxy for the energy balance in mountainous regions, we took only the mean annual air temperature from the interpolated ÖKLIM dataset of the Central Institute of Meteorology and Geodynamics to calculate areas with possible presence of permafrost. In a second approach we took a high-resolution digital elevation model (DEM) derived by airborne laser scanning and calculated possible permafrost areas based on elevation and aspect only, an approach long established in the permafrost community. These two simple approaches are compared with each other, and to validate the models we will compare the outputs with point measurements such as temperatures recorded at the snow-soil interface (BTS), continuous temperature data, rock glacier inventories, and geophysical measurements. We show that the model based only on mean annual air temperature (≤ -2°C) predicts less permafrost on north-facing slopes and at lower elevations than the model based on elevation and aspect; in the southern aspects, more permafrost area is predicted, but the overall pattern of permafrost distribution is similar. Given the different input parameters, their different spatial resolutions and the complex topography of high-alpine terrain, these differences in the results are expected. In a next step these two very simple approaches will be compared with a more complex, three-dimensional hydro-meteorological simulation (ALPINE3D). First, a one-dimensional model will be used to simulate permafrost presence at selected points and to calibrate the model parameters; the model will then be applied to the whole investigation area. The model output will be a map of probable permafrost distribution, in which energy balance, topography, snow cover, (sub)surface material and land cover all play a major role.
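The two simple mapping rules are easy to sketch. In the toy illustration below, only the MAAT ≤ -2°C threshold comes from the abstract; the lapse-rate proxy and the elevation/aspect thresholds are hypothetical placeholders.

```python
# Minimal sketch: compare a MAAT-threshold rule against an elevation/aspect
# rule on synthetic terrain. Thresholds other than MAAT <= -2 degC are
# hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000                                       # grid cells

elevation = rng.uniform(1500, 3700, n)            # m a.s.l.
aspect = rng.uniform(0, 360, n)                   # degrees from north
maat = 15.0 - 0.0065 * elevation                  # crude lapse-rate proxy (degC)

# Rule 1: permafrost wherever mean annual air temperature <= -2 degC
pf_maat = maat <= -2.0

# Rule 2 (hypothetical thresholds): lower elevation limit on shaded,
# north-facing slopes than on sun-exposed, south-facing slopes
north = (aspect < 90) | (aspect > 270)
pf_topo = np.where(north, elevation > 2500, elevation > 2800)

print(f"MAAT rule:        {pf_maat.mean():.1%} of cells")
print(f"Elev/aspect rule: {pf_topo.mean():.1%} of cells")
print(f"Agreement:        {(pf_maat == pf_topo).mean():.1%}")
```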
Bridging the divide: a model-data approach to Polar and Alpine microbiology.
Bradley, James A; Anesio, Alexandre M; Arndt, Sandra
2016-03-01
Advances in microbial ecology in the cryosphere continue to be driven by empirical approaches including field sampling and laboratory-based analyses. Although mathematical models are commonly used to investigate the physical dynamics of Polar and Alpine regions, they are rarely applied in microbial studies. Yet integrating modelling approaches with ongoing observational and laboratory-based work is ideally suited to Polar and Alpine microbial ecosystems given their harsh environmental and biogeochemical characteristics, simple trophic structures, distinct seasonality, often difficult accessibility, geographical expansiveness and susceptibility to accelerated climate changes. In this opinion paper, we explain how mathematical modelling ideally complements field and laboratory-based analyses. We thus argue that mathematical modelling is a powerful tool for the investigation of these extreme environments and that fully integrated, interdisciplinary model-data approaches could help the Polar and Alpine microbiology community address some of the great research challenges of the 21st century (e.g. assessing global significance and response to climate change). However, a better integration of field and laboratory work with model design and calibration/validation, as well as a stronger focus on quantitative information is required to advance models that can be used to make predictions and upscale processes and fluxes beyond what can be captured by observations alone. © FEMS 2016.
Lovreglio, Ruggiero; Ronchi, Enrico; Maragkos, Georgios; Beji, Tarek; Merci, Bart
2016-11-15
The release of toxic gases due to natural/industrial accidents or terrorist attacks in populated areas can have tragic consequences. To prevent and evaluate the effects of such disasters, different approaches and modelling tools have been introduced in the literature. These instruments are valuable tools for risk managers performing risk assessments of threatened areas. Despite significant improvements in hazard assessment for toxic gas dispersion, these analyses generally do not include the impact of human behaviour and people movement during emergencies. This work aims at providing an approach that considers both gas dispersion modelling and evacuation movement in order to improve the accuracy of risk assessment for disasters involving toxic gases. The approach is applied to a hypothetical scenario in which a ship releases nitrogen dioxide (NO2) near a crowd attending a music festival. The difference between the results obtained with existing static methods (people do not move) and a dynamic approach (people move away from the danger) that represents people movement with different degrees of sophistication (either a simple linear path or more complex behavioural modelling) is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
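A minimal sketch of the static-versus-dynamic comparison (all numbers hypothetical, with a steady Gaussian plume standing in for a full dispersion model): the dose accumulated by a person standing still is compared with that of a person walking crosswind along a simple linear path.

```python
# Minimal sketch: dose accumulated under a steady Gaussian plume for a
# static person versus a person walking away. All values are hypothetical.
import numpy as np

Q, u = 50.0, 3.0                         # source strength (g/s), wind speed (m/s)

def sigma(x):
    return 0.08 * x                      # crude dispersion growth (m)

def concentration(x, y):
    """Ground-level concentration (g/m^3) downwind of a point source."""
    if x <= 0:
        return 0.0
    s = sigma(x)
    return Q / (np.pi * u * s * s) * np.exp(-0.5 * (y / s) ** 2)

dt, steps, speed = 1.0, 600, 1.5         # 10 min, walking at 1.5 m/s
pos_static = np.array([200.0, 0.0])      # 200 m downwind on the centerline
pos_moving = pos_static.copy()

dose_static = dose_moving = 0.0
for _ in range(steps):
    dose_static += concentration(*pos_static) * dt
    dose_moving += concentration(*pos_moving) * dt
    pos_moving[1] += speed * dt          # simple linear path: walk crosswind

print(f"static dose: {dose_static:.3f} g s/m^3")
print(f"moving dose: {dose_moving:.3f} g s/m^3")
```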
Imaging approach to mechanistic study of nanoparticle interactions with the blood-brain barrier.
Bramini, Mattia; Ye, Dong; Hallerbach, Anna; Nic Raghnaill, Michelle; Salvati, Anna; Aberg, Christoffer; Dawson, Kenneth A
2014-05-27
Understanding nanoparticle interactions with the central nervous system, in particular the blood-brain barrier, is key to advances in therapeutics, as well as to assessing the safety of nanoparticles. Challenges in achieving insights have been significant, even for relatively simple models. Here we use a combination of live cell imaging and computational analysis to directly study nanoparticle translocation across a human in vitro blood-brain barrier model. This approach allows us to identify and avoid problems in more conventional inferential in vitro measurements by cataloguing barrier internalization and translocation events as they occur. Potentially this approach widens the window of applicability of in vitro models, thereby enabling in-depth mechanistic studies in the future. Model nanoparticles are used to illustrate the method. For those, we find that translocation, though rare, does appear to take place. Barrier uptake, on the other hand, is efficient, and since barrier export is small, there is significant accumulation within the barrier.
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Bolster, Diogo
2017-04-01
We introduce a simple and efficient lattice Boltzmann method for immiscible multiphase flows, capable of handling large density and viscosity contrasts. The model is based on a diffuse-interface phase-field approach. Within this context we propose a new algorithm for specifying the three-phase contact angle on curved boundaries within the framework of structured Cartesian grids. The proposed method has superior computational accuracy compared with the common approach of approximating curved boundaries with staircases. We test the model by applying it to four benchmark problems: (i) wetting and dewetting of a droplet on a flat surface and (ii) on a cylindrical surface, (iii) multiphase flow past a circular cylinder at an intermediate Reynolds number, and (iv) a droplet falling on hydrophilic and superhydrophobic circular cylinders under differing conditions. Where available, our results show good agreement with analytical solutions and/or existing experimental data, highlighting the strengths of this new approach.
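The collide-and-stream structure that such phase-field models build on can be shown compactly. Below is a minimal single-phase D2Q9 BGK sketch; the multiphase and contact-angle machinery of the paper is far more involved and is not reproduced here.

```python
# Minimal sketch: single-phase D2Q9 BGK lattice Boltzmann kernel on a
# periodic domain, illustrating the collide-stream structure. A sinusoidal
# shear wave decays viscously as a sanity check.
import numpy as np

nx, ny, tau, nsteps = 64, 64, 0.8, 500
# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Initial state: uniform density, sinusoidal shear velocity
x = np.arange(nx)
rho = np.ones((nx, ny))
ux = np.zeros((nx, ny))
uy = 0.05 * np.sin(2 * np.pi * x / nx)[:, None] * np.ones((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(nsteps):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau        # BGK collision
    for i, (cx, cy) in enumerate(c):                  # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

print("max |uy| after decay:", np.abs(uy).max())      # smaller than 0.05
```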
Software Validation via Model Animation
NASA Technical Reports Server (NTRS)
Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.
2015-01-01
This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
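The comparison step itself is simple. A minimal sketch (function names hypothetical; a single-precision implementation stands in for the compiled software under test): evaluate both on the same random inputs and count disagreements beyond a tolerance.

```python
# Minimal sketch: compare a reference model against an implementation on
# random inputs, up to a tolerance. Names and tolerances are hypothetical.
import math
import random
import numpy as np

def model_position(x0, v, t):
    """Reference output, standing in for the formal model animated in PVSio."""
    return x0 + v * t

def impl_position(x0, v, t):
    """Implementation under test; computes in single precision, as embedded
    flight code might."""
    return float(np.float32(x0) + np.float32(v) * np.float32(t))

random.seed(0)
failures = 0
for _ in range(10_000):
    x0, v, t = (random.uniform(-1e4, 1e4) for _ in range(3))
    if not math.isclose(model_position(x0, v, t), impl_position(x0, v, t),
                        rel_tol=1e-5, abs_tol=1e-2):
        failures += 1
print(f"cases disagreeing beyond tolerance: {failures} / 10000")
```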