Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Valid statistical approaches for analyzing Sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
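A minimal sketch of the contrast the abstract describes, on synthetic Sholl-like data with multiple neurons per animal and a grouping factor that varies between animals. Variable names (`animal`, `group`, `y`) are illustrative, and statsmodels supplies both estimators; the point is that the naive OLS p-value ignores intra-class correlation while the mixed model does not.

```python
# Hedged sketch: simple linear model vs. mixed effects model on clustered data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
animals = np.repeat(np.arange(8), 12)          # 8 animals, 12 neurons each
group = animals % 2                            # e.g., sex, constant per animal
animal_effect = rng.normal(0, 3, 8)[animals]   # source of intra-class correlation
y = 20 + 2.0 * group + animal_effect + rng.normal(0, 2, 96)
df = pd.DataFrame({"y": y, "group": group, "animal": animals})

ols = smf.ols("y ~ group", data=df).fit()      # ignores clustering
mixed = smf.mixedlm("y ~ group", data=df, groups=df["animal"]).fit()
# The OLS p-value is typically biased downwards relative to the mixed model.
print(ols.pvalues["group"], mixed.pvalues["group"])
```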
Estimation of a Nonlinear Intervention Phase Trajectory for Multiple-Baseline Design Data
ERIC Educational Resources Information Center
Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2015-01-01
A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels ("Levels") model and a model involving a quadratic function…
Modeling How, When, and What Is Learned in a Simple Fault-Finding Task
ERIC Educational Resources Information Center
Ritter, Frank E.; Bibby, Peter A.
2008-01-01
We have developed a process model that learns in multiple ways while finding faults in a simple control panel device. The model predicts human participants' learning through its own learning. The model's performance was systematically compared to human learning data, including the time course and specific sequence of learned behaviors. These…
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
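For one predictor, the least-squares mechanics the article reviews reduce to closed-form slope and intercept estimates. A minimal sketch on toy data:

```python
# Minimal sketch: simple linear regression by the method of least squares.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Closed-form least-squares estimates for a single predictor.
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(f"y = {intercept:.2f} + {slope:.2f} x")
```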
A simple technique for obtaining future climate data inputs for natural resource models
USDA-ARS?s Scientific Manuscript database
Those conducting impact studies using natural resource models need to be able to quickly and easily obtain downscaled future climate data from multiple models, scenarios, and timescales for multiple locations. This paper describes a method of quickly obtaining future climate data over a wide range o...
Humanizing Outgroups Through Multiple Categorization
Prati, Francesca; Crisp, Richard J.; Meleady, Rose; Rubini, Monica
2016-01-01
In three studies, we examined the impact of multiple categorization on intergroup dehumanization. Study 1 showed that perceiving members of a rival university along multiple versus simple categorical dimensions enhanced the tendency to attribute human traits to this group. Study 2 showed that multiple versus simple categorization of immigrants increased the attribution of uniquely human emotions to them. This effect was explained by the sequential mediation of increased individuation of the outgroup and reduced outgroup threat. Study 3 replicated this sequential mediation model and introduced a novel way of measuring humanization in which participants generated attributes corresponding to the outgroup in a free response format. Participants generated more uniquely human traits in the multiple versus simple categorization conditions. We discuss the theoretical implications of these findings and consider their role in informing and improving efforts to ameliorate contemporary forms of intergroup discrimination. PMID:26984016
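Guidelines and procedures for computing time-series suspended-sediment concentrations and loads from in-stream turbidity-sensor and streamflow data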
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
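A hedged sketch of the model-selection step described above, on synthetic data: fit the simple turbidity-only model, then keep the streamflow term only if it is statistically significant and improves the fit. The column names and the p-value/AIC comparison are illustrative, not the USGS procedure verbatim (which uses the model standard percentage error criterion).

```python
# Hedged sketch: simple vs. multiple regression for suspended-sediment concentration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
turb = rng.uniform(5, 500, n)                    # instantaneous turbidity
flow = rng.uniform(1, 100, n)                    # streamflow
ssc = 0.8 * turb + 0.5 * flow + rng.normal(0, 20, n)  # synthetic concentration
df = pd.DataFrame({"ssc": ssc, "turb": turb, "flow": flow})

simple = smf.ols("ssc ~ turb", data=df).fit()
multiple = smf.ols("ssc ~ turb + flow", data=df).fit()
# Adopt the multiple model only if streamflow is significant and the fit improves.
print(multiple.pvalues["flow"], simple.aic, multiple.aic)
```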
Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model
NASA Astrophysics Data System (ADS)
Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman
2015-01-01
The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.
Gravitational lensing by an ensemble of isothermal galaxies
NASA Technical Reports Server (NTRS)
Katz, Neal; Paczynski, Bohdan
1987-01-01
Calculation of 28,000 models of gravitational lensing of a distant quasar by an ensemble of randomly placed galaxies, each having a singular isothermal mass distribution, is reported. The average surface mass density was 0.2 of the critical value in all models. It is found that the surface mass density averaged over the area of the smallest circle that encompasses the multiple images is 0.82, only slightly smaller than expected from a simple analytical model of Turner et al. (1984). The probability of getting multiple images is also as large as expected analytically. Gravitational lensing is dominated by the matter in the beam; i.e., by the beam convergence. The cases where the multiple imaging is due to asymmetry in mass distribution (i.e., due to shear) are very rare. Therefore, the observed gravitational-lens candidates for which no lensing object has been detected between the images cannot be a result of asymmetric mass distribution outside the images, at least in a model with randomly distributed galaxies. A surprisingly large number of large separations between the multiple images is found: up to 25 percent of multiple images have their angular separation 2 to 4 times larger than expected in a simple analytical model.
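Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis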
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
Multiple-path model of spectral reflectance of a dyed fabric.
Rogers, Geoffrey; Dalloz, Nicolas; Fournel, Thierry; Hebert, Mathieu
2017-05-01
Experimental results are presented of the spectral reflectance of a dyed fabric as analyzed by a multiple-path model of reflection. The multiple-path model provides simple analytic expressions for reflection and transmission of turbid media by applying the Beer-Lambert law to each path through the medium and summing over all paths, each path weighted by its probability. The path-length probability is determined by a random-walk analysis. The experimental results presented here show excellent agreement with predictions made by the model.
Analysis of bacterial migration. 2: Studies with multiple attractant gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, I.; Frymier, P.D.; Hahn, C.M.
1995-02-01
Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances which benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus to develop predictive mathematical models describing bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single stimulus responses. Theoretical predictions with the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and [alpha]-methylaspartate in the stopped-flow diffusion chamber assay. Results eliminate the high-sensitivity model and favor the simple additivity over the maximum signal. None of the simple models, however, accurately predict the observed behavior, suggesting a more complex model with more steps in the signal processing mechanism is required to predict responses to multiple stimuli.
Experimental Evaluation of Balance Prediction Models for Sit-to-Stand Movement in the Sagittal Plane
Pena Cabra, Oscar David; Watanabe, Takashi
2013-01-01
Evaluation of balance control ability is becoming important in rehabilitation training. In this paper, to clarify the usefulness and limitations of the traditional simple inverted pendulum model for balance prediction in sit-to-stand movements, the traditional simple model was compared with an inertia-variable (rotational radius) inverted pendulum model that includes multiple-joint influence. The predictions were tested experimentally with six healthy subjects. The evaluation showed that the multiple-joint influence model is more accurate in predicting balance under demanding sit-to-stand conditions. On the other hand, the evaluation also showed that the traditionally used simple inverted pendulum model is still reliable in predicting balance during sit-to-stand movement under non-demanding (normal) conditions. In particular, the simple model was shown to be effective for sit-to-stand movements with low center-of-mass velocity at seat-off. Moreover, almost all trajectories under the normal condition seemed to follow the same control strategy, in which the subjects used more energy than the minimum necessary for standing up. This suggests that safety considerations take precedence over energy-efficiency considerations during a sit-to-stand, since the most energy-efficient trajectory is close to the backward-fall boundary. PMID:24187580
Extension of the ADC Charge-Collection Model to Include Multiple Junctions
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
2011-01-01
The ADC model is a charge-collection model derived for simple p-n junction silicon diodes having a single reverse-biased p-n junction at one end and an ideal substrate contact at the other end. The present paper extends the model to include multiple junctions, and the goal is to estimate how collected charge is shared by the different junctions.
Phenomenology of COMPASS data: Multiplicities and phenomenology - part II
Anselmino, M.; Boglione, M.; Gonzalez H., J. O.; ...
2015-01-23
In this study, we present some of the main features of the multidimensional COMPASS multiplicities, via our analysis using the simple Gaussian model. We briefly discuss these results in connection with azimuthal asymmetries.
Goal programming for land use planning.
Enoch F. Bell
1976-01-01
A simple transformation of the linear programming model used in land use planning to a goal programming model allows the multiple goals implied by multiple use management to be explicitly recognized. This report outlines the procedure for accomplishing the transformation and discusses problems with the use of goal programming. Of particular concern are the expert opinions...
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct-sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
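A small simulation illustrating the variance inflation the paper addresses, under an assumed fatality-size distribution: for a compound Poisson count, Var(K) = λE[N²] rather than the simple-Poisson λE[N], so interval estimators built on the simple Poisson model are too narrow.

```python
# Hedged sketch: compound Poisson counts have variance lambda * E[N^2].
import numpy as np

rng = np.random.default_rng(2)
lam = 100.0                                  # expected incidents per period
sizes = np.array([1, 2, 3])                  # fatalities per incident
probs = np.array([0.90, 0.07, 0.03])         # hypothetical size distribution

def one_period():
    m = rng.poisson(lam)                     # number of incidents
    return rng.choice(sizes, size=m, p=probs).sum()  # total fatalities

counts = np.array([one_period() for _ in range(20000)])
e_n, e_n2 = (sizes * probs).sum(), (sizes ** 2 * probs).sum()
print(counts.var(), lam * e_n2)              # simulated vs. theoretical variance
print(counts.mean(), lam * e_n)              # mean still matches lambda * E[N]
```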
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments on cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
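A minimal Euler-integration sketch of the two-equation AdEx model under step-current stimulation. The parameter values are illustrative defaults, not the fitted values reported in the paper; changing (a, b, tau_w, Vr) moves the model between the firing patterns the abstract lists.

```python
# Hedged sketch: adaptive exponential integrate-and-fire neuron, Euler scheme.
import numpy as np

C, gL, EL, VT, DT = 281.0, 30.0, -70.6, -50.4, 2.0    # pF, nS, mV, mV, mV
tau_w, a, b, Vr, Vpeak = 144.0, 4.0, 80.5, -70.6, 20.0  # ms, nS, pA, mV, mV
dt, T, I = 0.1, 500.0, 800.0                          # ms, ms, pA (step current)

V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V >= Vpeak:                  # spike: reset voltage, increment adaptation
        spikes.append(step * dt)
        V, w = Vr, w + b
print(len(spikes), "spikes; pattern depends on (a, b, tau_w, Vr)")
```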
Oscillations and Multiple Equilibria in Microvascular Blood Flow.
Karst, Nathaniel J; Storey, Brian D; Geddes, John B
2015-07-01
We investigate the existence of oscillatory dynamics and multiple steady-state flow rates in a network with a simple topology and in vivo microvascular blood flow constitutive laws. Unlike many previous analytic studies, we employ the most biologically relevant models of the physical properties of whole blood. Through a combination of analytic and numeric techniques, we predict in a series of two-parameter bifurcation diagrams a range of dynamical behaviors, including multiple equilibria flow configurations, simple oscillations in volumetric flow rate, and multiple coexistent limit cycles at physically realizable parameters. We show that complexity in network topology is not necessary for complex behaviors to arise and that nonlinear rheology, in particular the plasma skimming effect, is sufficient to support oscillatory dynamics similar to those observed in vivo.
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
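A hedged sketch of one concept the article covers, multicollinearity, using variance inflation factors on synthetic data with two nearly collinear predictors; the VIF helper is from statsmodels, and the data are invented for illustration.

```python
# Hedged sketch: multicollinearity diagnosed with variance inflation factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)      # nearly collinear with x1
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
model = sm.OLS(y, X).fit()
vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print(model.params)                          # coefficients are unstable here
print(vifs)                                  # large VIFs flag collinearity
```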
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VEs were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex technique (random forest) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted the high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VEs were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
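A hedged sketch contrasting the two approaches on synthetic counts: the reported cut-points (<1381, 1381 to 3660, >3660 cpm) as the simple technique, and a scikit-learn random forest standing in for the authors' implementation as the complex one. The synthetic count-to-VE relation is invented for illustration.

```python
# Hedged sketch: cut-point classifier vs. random forest for VE categories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
counts = rng.uniform(0, 8000, 500)                     # activity counts (cpm)
ve = 10 + 0.006 * counts + rng.normal(0, 4, 500)       # synthetic VE, l/min
y = np.digitize(ve, [19.3, 35.4])                      # 0=low, 1=medium, 2=high

cutpoint_pred = np.digitize(counts, [1381, 3660])      # simple technique
print("cut-point accuracy:", (cutpoint_pred == y).mean())

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("forest accuracy:", cross_val_score(rf, counts.reshape(-1, 1), y).mean())
```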
Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems
2008-08-25
primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events… are identified, we can extract features representing such behavior while auditing the user's behavior. Figure 1: Taxonomy of Linux and Unix… achieved when the features are extracted just from simple commands. [Table residue: Method, Hit Rate, False Positive Rate; ocSVM using simple cmds (freq.-based…)]
Howley, Donna; Howley, Peter; Oxenham, Marc F
2018-06-01
Stature and a further 8 anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and The University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation and simple and multiple linear regression equations for stature estimation of a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to those of similar studies. The stature estimation models achieved standard errors of estimates (SEE) which were comparable to, and in many cases lower than, those achieved in similar research. Generic, non-sex-specific models achieved similar SEEs and R² values to the sex-specific models, indicating stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.
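A hedged sketch of the two model families described above, on synthetic data: a simple logistic regression for sex and a simple, non-sex-specific linear regression for stature. The predictor `hand_len` and all coefficients are illustrative, not the study's measurements.

```python
# Hedged sketch: logistic regression for sex, linear regression for stature.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 96
sex = rng.integers(0, 2, n)                            # 0 = female, 1 = male
hand_len = 17 + 2.0 * sex + rng.normal(0, 0.8, n)      # cm, illustrative
stature = 160 + 8.0 * sex + 2.5 * (hand_len - 18) + rng.normal(0, 4, n)
df = pd.DataFrame({"sex": sex, "hand_len": hand_len, "stature": stature})

sex_model = smf.logit("sex ~ hand_len", data=df).fit(disp=0)
stature_model = smf.ols("stature ~ hand_len", data=df).fit()   # non-sex-specific
print((sex_model.predict(df).round() == df.sex).mean())        # classification accuracy
print(np.sqrt(stature_model.mse_resid))                        # SEE analogue, cm
```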
Interaction dynamics of multiple mobile robots with simple navigation strategies
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
Forecasting USAF JP-8 Fuel Needs
2009-03-01
with more simple and easy-to-implement methods versus complex ones. When we consider long-term forecasts, 5 years in this case, our multiple regression model outperforms ANN modeling within the specified…
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
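A minimal PLS regression sketch in the spirit of the analysis described: reduce multi-input/multi-output data to a few latent variables and regress between them. Synthetic matrices stand in for the signaling (input) and gene-expression (output) measurements, and scikit-learn's PLSRegression is used.

```python
# Hedged sketch: partial least squares regression on a synthetic MIMO dataset.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 12))                     # inputs: signaling time points
B = rng.normal(size=(12, 5))
Y = X @ B + rng.normal(scale=0.5, size=(60, 5))   # outputs: gene expression

pls = PLSRegression(n_components=3)               # dimensionality reduction step
pls.fit(X, Y)
print(pls.score(X, Y))                            # R^2 of the latent-variable fit
```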
NASA Astrophysics Data System (ADS)
Wu, Qing-Chu; Fu, Xin-Chu; Sun, Wei-Gang
2010-01-01
In this paper a class of networks with multiple connections are discussed. The multiple connections include two different types of links between nodes in complex networks. For this new model, we give a simple generating procedure. Furthermore, we investigate dynamical synchronization behavior in a delayed two-layer network, giving corresponding theoretical analysis and numerical examples.
Meteorological adjustment of yearly mean values for air pollutant concentration comparison
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Neustadter, H. E.
1976-01-01
Using multiple linear regression analysis, models which estimate mean concentrations of Total Suspended Particulate (TSP), sulfur dioxide, and nitrogen dioxide as a function of several meteorological variables, two rough economic indicators, and a simple trend in time are studied. The meteorological data obtained do not include inversion heights. The goodness of fit of the estimated models is partially reflected by the squared coefficient of multiple correlation, which indicates that, at the various sampling stations, the models accounted for about 23 to 47 percent of the total variance of the observed TSP concentrations. If the resulting model equations are used in place of simple overall means of the observed concentrations, there is about a 20 percent improvement in either: (1) predicting mean concentrations for specified meteorological conditions; or (2) adjusting successive yearly averages to allow for comparisons devoid of meteorological effects. An application to source identification is presented using regression coefficients of wind velocity predictor variables.
Simple analytical model of a thermal diode
NASA Astrophysics Data System (ADS)
Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul
2018-05-01
Recently there is a lot of attention given to manipulation of heat by constructing thermal devices such as thermal diodes, transistors and logic gates. Many of the models proposed have an asymmetry which leads to the desired effect. Presence of non-linear interactions among the particles is also essential. But, such models lack analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly the magnetic field is the only parameter required to get the effect of heat rectification.
Blanton, Hart; Jaccard, James
2006-01-01
Theories that posit multiplicative relationships between variables are common in psychology. A. G. Greenwald et al. recently presented a theory that explicated relationships between group identification, group attitudes, and self-esteem. Their theory posits a multiplicative relationship between concepts when predicting a criterion variable. Greenwald et al. suggested analytic strategies to test their multiplicative model that researchers might assume are appropriate for testing multiplicative models more generally. The theory and analytic strategies of Greenwald et al. are used as a case study to show the strong measurement assumptions that underlie certain tests of multiplicative models. It is shown that the approach used by Greenwald et al. can lead to declarations of theoretical support when the theory is wrong as well as rejection of the theory when the theory is correct. A simple strategy for testing multiplicative models that makes weaker measurement assumptions than the strategy proposed by Greenwald et al. is suggested and discussed.
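A hedged sketch of the standard moderated-regression test of a multiplicative model: add a product term to the linear terms and test it. The article's point is that interpreting such a test rests on strong measurement assumptions; the data and variable names here are synthetic.

```python
# Hedged sketch: testing a multiplicative (interaction) term in regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
ident = rng.normal(size=n)                    # e.g., group identification
attitude = rng.normal(size=n)                 # e.g., group attitude
esteem = 0.5 * ident * attitude + rng.normal(size=n)
df = pd.DataFrame({"esteem": esteem, "ident": ident, "attitude": attitude})

# "ident * attitude" expands to both main effects plus the product term.
fit = smf.ols("esteem ~ ident * attitude", data=df).fit()
print(fit.pvalues["ident:attitude"])          # test of the multiplicative term
```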
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
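A hedged sketch of the simple additive bias model the paper starts from (measurement = true value + row effect + column effect), removed here with a two-way median polish on a synthetic 8x12 plate. This illustrates the additive model only; it is not the paper's detection-and-correction procedure or the AssayCorrector implementation.

```python
# Hedged sketch: removing simple additive row/column bias by median polish.
import numpy as np

rng = np.random.default_rng(8)
plate = rng.normal(10.0, 1.0, size=(8, 12))            # true well signals
biased = plate + np.linspace(0, 2, 8)[:, None]         # additive row bias
biased = biased + np.linspace(-1, 1, 12)[None, :]      # additive column bias

resid = biased.copy()
for _ in range(10):                                    # alternating sweeps
    resid -= np.median(resid, axis=1, keepdims=True)   # remove row effects
    resid -= np.median(resid, axis=0, keepdims=True)   # remove column effects

# Residuals should track the centered true signals, not the injected bias.
print(np.corrcoef(resid.ravel(), (plate - plate.mean()).ravel())[0, 1])
```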
Two-Electron Transfer Pathways.
Lin, Jiaxing; Balamurugan, D; Zhang, Peng; Skourtis, Spiros S; Beratan, David N
2015-06-18
The frontiers of electron-transfer chemistry demand that we develop theoretical frameworks to describe the delivery of multiple electrons, atoms, and ions in molecular systems. When electrons move over long distances through high barriers, where the probability for thermal population of oxidized or reduced bridge-localized states is very small, the electrons will tunnel from the donor (D) to acceptor (A), facilitated by bridge-mediated superexchange interactions. If the stable donor and acceptor redox states on D and A differ by two electrons, it is possible that the electrons will propagate coherently from D to A. While structure-function relations for single-electron superexchange in molecules are well established, strategies to manipulate the coherent flow of multiple electrons are largely unknown. In contrast to one-electron superexchange, two-electron superexchange involves both one- and two-electron virtual intermediate states, the number of virtual intermediates increases very rapidly with system size, and multiple classes of pathways interfere with one another. In the study described here, we developed simple superexchange models for two-electron transfer. We explored how the bridge structure and energetics influence multielectron superexchange, and we compared two-electron superexchange interactions to single-electron superexchange. Multielectron superexchange introduces interference between singly and doubly oxidized (or reduced) bridge virtual states, so that even simple linear donor-bridge-acceptor systems have pathway topologies that resemble those seen for one-electron superexchange through bridges with multiple parallel pathways. The simple model systems studied here exhibit a richness that is amenable to experimental exploration by manipulating the multiple pathways, pathway crosstalk, and changes in the number of donor and acceptor species. The features that emerge from these studies may assist in developing new strategies to deliver multiple electrons in condensed-phase redox systems, including multiple-electron redox species, multimetallic/multielectron redox catalysts, and multiexciton excited states.
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-01-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328
Multiple-generator errors are unavoidable under model misspecification.
Jewett, D L; Zhang, Z
1995-08-01
Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data because there is model misspecification and mathematically the equations used for the simultaneously active generators must be of a different form than the equations for each generator active alone.
Giftedness and Genetics: The Emergenic-Epigenetic Model and Its Implications
ERIC Educational Resources Information Center
Simonton, Dean Keith
2005-01-01
The genetic endowment underlying giftedness may operate in a far more complex manner than often expressed in most theoretical accounts of the phenomenon. First, an endowment may be emergenic. That is, a gift may consist of multiple traits (multidimensional) that are inherited in a multiplicative (configurational), rather than an additive (simple)…
Modeling Students' Memory for Application in Adaptive Educational Systems
ERIC Educational Resources Information Center
Pelánek, Radek
2015-01-01
Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…
Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke
G L Achtemeier; S L Goodrick; Y Liu; F Garcia-Menendez; Y Hu; M. Odman
2011-01-01
We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure including multiple-core updrafts which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric...
Two simple models of classical heat pumps.
Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek
2007-03-01
Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.
ERIC Educational Resources Information Center
Yilmaz, Ilker; Konukman, Ferman; Birkan, Binyamin; Yanardag, Mehmet
2010-01-01
Effects of most-to-least prompting on teaching a simple progression swimming skill to children with autism were investigated. A single-subject multiple-baseline model across subjects with probe conditions was used. Participants were three boys, 9 years old. Data were collected over a 10-week period, with sessions three times a week, using the single…
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?
NASA Astrophysics Data System (ADS)
Halide, Halmar
2017-01-01
We apply a simple linear multiple regression model called IndOzy for predicting ENSO up to 7 seasonal lead times. The model used five predictors, the past seasonal Niño 3.4 ENSO indices derived from chaos theory, and was rolling-validated to give a one-step-ahead forecast. Model skill was evaluated against data from the May-June-July (MJJ) 2003 season to the November-December-January (NDJ) 2015/2016 season. Three skill measures were used for forecast verification: Pearson correlation, RMSE, and Euclidean distance. The skill of this simple model was then compared to those of the combined statistical and dynamical models compiled on the IRI (International Research Institute) website. The simple model was capable of producing a useful ENSO prediction only up to 3 seasonal leads, while the IRI statistical and dynamical model skills remained useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature. Both meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. To improve its long-range skill, the simple IndOzy model needs to incorporate a nonlinear model such as an artificial neural network technique.
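A hedged sketch of a lag-based multiple regression forecast in the spirit of IndOzy: five lagged predictors and rolling one-step-ahead validation, scored with two of the skill measures named above. The red-noise series is synthetic, not the Niño 3.4 record.

```python
# Hedged sketch: rolling one-step-ahead lagged regression forecast.
import numpy as np

rng = np.random.default_rng(9)
n, lags = 240, 5                               # 60 years of seasons, 5 lags
series = np.zeros(n)
for t in range(1, n):                          # toy red-noise ENSO-like index
    series[t] = 0.8 * series[t - 1] + rng.normal(0, 0.3)

X = np.column_stack([series[lags - j - 1 : n - j - 1] for j in range(lags)])
y = series[lags:]

preds = []
for t in range(100, len(y)):                   # rolling (one-step-ahead) fits
    coef, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
    preds.append(X[t] @ coef)
preds = np.array(preds)
print("Pearson r:", np.corrcoef(preds, y[100:])[0, 1])
print("RMSE:", np.sqrt(np.mean((preds - y[100:]) ** 2)))
```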
Suzuki, Hideaki; Tabata, Takahisa; Koizumi, Hiroki; Hohchi, Nobusuke; Takeuchi, Shoko; Kitamura, Takuro; Fujino, Yoshihisa; Ohbuchi, Toyoaki
2014-12-01
This study aimed to create a multiple regression model for predicting hearing outcomes of idiopathic sudden sensorineural hearing loss (ISSNHL). The participants were 205 consecutive patients (205 ears) with ISSNHL (hearing level ≥ 40 dB, interval between onset and treatment ≤ 30 days). They received systemic steroid administration combined with intratympanic steroid injection. Data were examined by simple and multiple regression analyses. Three hearing indices (percentage hearing improvement, hearing gain, and posttreatment hearing level [HLpost]) and 7 prognostic factors (age, days from onset to treatment, initial hearing level, initial hearing level at low frequencies, initial hearing level at high frequencies, presence of vertigo, and contralateral hearing level) were included in the multiple regression analysis as dependent and explanatory variables, respectively. In the simple regression analysis, the percentage hearing improvement, hearing gain, and HLpost showed significant correlation with 2, 5, and 6 of the 7 prognostic factors, respectively. The multiple correlation coefficients were 0.396, 0.503, and 0.714 for the percentage hearing improvement, hearing gain, and HLpost, respectively. Predicted values of HLpost calculated by the multiple regression equation were reliable with 70% probability with a 40-dB-width prediction interval. Prediction of HLpost by the multiple regression model may be useful to estimate the hearing prognosis of ISSNHL. © The Author(s) 2014.
ERIC Educational Resources Information Center
Goodwin, Amanda P.; August, Diane; Calderon, Margarita
2015-01-01
The current study unites multiple theories (i.e., the orthographic depth hypothesis and linguistic grain size theory, the simple view of reading, and the common underlying proficiency model) to explore differences in how 113 fourth-grade Spanish-speaking English learners (ELs) approached reading in their native language of Spanish, which is…
Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga
2006-08-01
A quantitative structure-activity relationship was obtained by applying multiple linear regression analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²CV = 0.8160; SPRESS = 0.5680) proved to be very accurate in both the training and predictive stages.
Chatterjee, Abhijit; Vlachos, Dionisios G
2007-07-21
While recently derived continuum mesoscopic equations successfully bridge the gap between microscopic and macroscopic physics, so far they have been derived only for simple lattice models. In this paper, general deterministic continuum mesoscopic equations are derived rigorously via nonequilibrium statistical mechanics to account for multiple interacting surface species and multiple processes on multiple site types and/or different crystallographic planes. Adsorption, desorption, reaction, and surface diffusion are modeled. It is demonstrated that contrary to conventional phenomenological continuum models, microscopic physics, such as the interaction potential, determines the final form of the mesoscopic equation. Models of single component diffusion and binary diffusion of interacting particles on single-type site lattice and of single component diffusion on complex microporous materials' lattices consisting of two types of sites are derived, as illustrations of the mesoscopic framework. Simplification of the diffusion mesoscopic model illustrates the relation to phenomenological models, such as the Fickian and Maxwell-Stefan transport models. It is demonstrated that the mesoscopic equations are in good agreement with lattice kinetic Monte Carlo simulations for several prototype examples studied.
Kuan, Hui-Shun; Betterton, Meredith D.
2016-01-01
Motor protein motion on biopolymers can be described by models related to the totally asymmetric simple exclusion process (TASEP). Inspired by experiments on the motion of kinesin-4 motors on antiparallel microtubule overlaps, we analyze a model incorporating the TASEP on two antiparallel lanes with binding kinetics and lane switching. We determine the steady-state motor density profiles using phase-plane analysis of the steady-state mean field equations and kinetic Monte Carlo simulations. We focus on the density-density phase plane, where we find an analytic solution to the mean field model. By studying the phase-space flows, we determine the model’s fixed points and their changes with parameters. Phases previously identified for the single-lane model occur for low switching rate between lanes. We predict a multiple coexistence phase due to additional fixed points that appear as the switching rate increases: switching moves motors from the higher-density to the lower-density lane, causing local jamming and creating multiple domain walls. We determine the phase diagram of the model for both symmetric and general boundary conditions. PMID:27627345
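A hedged kinetic Monte Carlo sketch of the model class described: a two-lane TASEP with binding/unbinding (Langmuir kinetics) and lane switching, using random-sequential updates. The rates are illustrative, and closed boundaries are used here for brevity, whereas the paper analyzes open boundary conditions.

```python
# Hedged sketch: two-antiparallel-lane TASEP with kinetics and lane switching.
import numpy as np

rng = np.random.default_rng(10)
L, steps = 200, 200_000
p_on, p_off, p_switch = 0.001, 0.002, 0.01
lanes = np.zeros((2, L), dtype=int)            # lane 0 hops right, lane 1 left

for _ in range(steps):
    lane, i = rng.integers(2), rng.integers(L)
    if lanes[lane, i] == 0:
        if rng.random() < p_on:                            # motor binding
            lanes[lane, i] = 1
        continue
    r = rng.random()
    if r < p_off:                                          # motor unbinding
        lanes[lane, i] = 0
    elif r < p_off + p_switch:                             # lane switch
        if lanes[1 - lane, i] == 0:
            lanes[lane, i], lanes[1 - lane, i] = 0, 1
    else:                                                  # directed, excluded hop
        j = i + 1 if lane == 0 else i - 1
        if 0 <= j < L and lanes[lane, j] == 0:
            lanes[lane, i], lanes[lane, j] = 0, 1

print(lanes.mean(axis=1))                      # steady-state density per lane
```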
From Brown-Peterson to continual distractor via operation span: A SIMPLE account of complex span.
Neath, Ian; VanWormer, Lisa A; Bireta, Tamra J; Surprenant, Aimée M
2014-09-01
Three memory tasks (Brown-Peterson, complex span, and continual distractor) all alternate presentation of a to-be-remembered item and a distractor activity, but each task is associated with a different memory system: short-term memory, working memory, and long-term memory, respectively. SIMPLE, a relative local distinctiveness model, has previously been fit to data from both the Brown-Peterson and continual distractor tasks; here we use the same version of the model to fit data from a complex span task. Despite the many differences between the tasks, including unpredictable list length, SIMPLE fit the data well. Because SIMPLE posits a single memory system, these results constitute yet another demonstration that performance on tasks originally thought to tap different memory systems can be explained without invoking multiple memory systems.
Design Considerations for Heavily-Doped Cryogenic Schottky Diode Varactor Multipliers
NASA Technical Reports Server (NTRS)
Schlecht, E.; Maiwald, F.; Chattopadhyay, G.; Martin, S.; Mehdi, I.
2001-01-01
Diode modeling for Schottky varactor frequency multipliers above 500 GHz is presented with special emphasis placed on simple models and fitted equations for rapid circuit design. Temperature- and doping-dependent mobility, resistivity, and avalanche current multiplication and breakdown are presented. Next is a discussion of static junction current, including the effects of tunneling as well as thermionic emission. These results have been compared to detailed measurements made down to 80 K on diodes fabricated at JPL, followed by a discussion of the effect on multiplier efficiency. Finally, a simple model of current saturation in the undepleted active layer suitable for inclusion in harmonic balance simulators is derived.
SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.
Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi
2010-01-01
Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
NASA Astrophysics Data System (ADS)
Yuan, Cadmus C. A.
2015-12-01
Optical ray tracing modeling has applied the Beer-Lambert method to single-luminescence-material systems to model the white-light pattern from a blue LED light source. This paper extends the algorithm to a mixed multiple-luminescence-material system by introducing equivalent excitation and emission spectra for the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scale packaging (CSP), which offers simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results; a parametric investigation is then conducted.
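A minimal sketch of the Beer-Lambert step the model builds on: attenuation of the blue excitation along a path through a luminescent layer, with the absorbed fraction scaled by an assumed quantum efficiency. All coefficients are illustrative.

```python
# Hedged sketch: Beer-Lambert attenuation through a luminescent layer.
import numpy as np

alpha = 2.0                                   # absorption coefficient, 1/mm (assumed)
path_mm = np.linspace(0.0, 2.0, 5)            # path lengths through the layer
transmitted = np.exp(-alpha * path_mm)        # Beer-Lambert law
absorbed = 1.0 - transmitted                  # fraction available for conversion
qe = 0.9                                      # assumed quantum efficiency
for p, t, a in zip(path_mm, transmitted, absorbed):
    print(f"path {p:.1f} mm: transmitted {t:.3f}, converted {qe * a:.3f}")
```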
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
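A hedged Python sketch of the proposed approach for the simple mediation case: simulate the model, test the indirect effect a*b with a percentile bootstrap, and take the rejection rate as the power estimate. The authors' implementation is the R package bmem; this is an independent illustration, with replication and bootstrap counts kept small for speed.

```python
# Hedged sketch: Monte Carlo power analysis for a bootstrap mediation test.
import numpy as np

rng = np.random.default_rng(11)
n, a, b, cp = 100, 0.3, 0.3, 0.1              # sample size, paths a, b, c'
n_rep, n_boot = 100, 200                      # small counts for illustration

def indirect_effect(x, m, y):
    a_hat = np.polyfit(x, m, 1)[0]            # M = a*X + e1
    X = np.column_stack([np.ones_like(x), m, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # Y = b*M + c'*X + e2
    return a_hat * beta[1]                    # indirect effect a*b

significant = 0
for _ in range(n_rep):
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)
    y = b * m + cp * x + rng.normal(size=n)
    boots = np.empty(n_boot)
    for k in range(n_boot):                   # percentile bootstrap of a*b
        idx = rng.integers(0, n, n)
        boots[k] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    significant += (lo > 0) or (hi < 0)       # CI excludes zero
print("estimated power:", significant / n_rep)
```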
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
Steady-states for shear flows of a liquid-crystal model: Multiplicity, stability, and hysteresis
NASA Astrophysics Data System (ADS)
Dorn, Tim; Liu, Weishi
In this work, we study shear flows of a fluid layer between two solid blocks via a liquid-crystal type model proposed in [C.H.A. Cheng, L.H. Kellogg, S. Shkoller, D.L. Turcotte, A liquid-crystal model for friction, Proc. Natl. Acad. Sci. USA 21 (2007) 1-5] for an understanding of friction. A characterization of the existence and multiplicity of steady-states is provided. The stability of the steady-states is examined, focusing mainly on bifurcations of zero eigenvalues. The stability result suggests that this simple model exhibits hysteresis, which is supported by a numerical simulation.
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple-dose dataset to predict the probability of death by specifying dose-response functions and the time between exposure and death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple-dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple-dose effects on response and time-to-response. The process used in this paper to dev…
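The structure of the baseline model is easy to sketch: an exponential dose-response for whether an animal dies, combined with a Weibull distribution for when it dies. The parameter values below are placeholders, not the fitted rabbit values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cohort(dose, k=1e-7, shape=1.5, scale=4.0, n=1000):
    # Exponential dose-response: P(death) = 1 - exp(-k * dose)
    p_death = 1.0 - np.exp(-k * dose)
    dies = rng.random(n) < p_death
    # Weibull time-to-death (days) for the animals that die
    ttd = np.where(dies, scale * rng.weibull(shape, n), np.inf)
    return p_death, ttd

p, ttd = simulate_cohort(dose=5e7)
print(f"P(death) = {p:.3f}, median TTD = {np.median(ttd[np.isfinite(ttd)]):.2f} d")
```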
Millimeter wave satellite communication studies. Results of the 1981 propagation modeling effort
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Tsolakis, A.; Dishman, W. K.
1982-01-01
Theoretical modeling associated with rain effects on millimeter wave propagation is detailed. Three areas of work are discussed. A simple model for prediction of rain attenuation is developed and evaluated. A method for computing scattering from single rain drops is presented. A complete multiple scattering model is described which permits accurate calculation of the effects on dual-polarized signals passing through rain.
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
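A minimal sketch of the idea for continuous variables is below: the search keeps a finite list of recently visited points and forbids moves that land too close to them, which forces exploration away from local minima. Step sizes, taboo radius and list length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def taboo_search(f, x0, step=0.5, n_iter=500, radius=0.3, memory=20):
    x = best = np.asarray(x0, dtype=float)
    taboo = []
    for _ in range(n_iter):
        cands = x + step * rng.normal(size=(20, x.size))   # neighborhood
        admissible = [c for c in cands
                      if all(np.linalg.norm(c - t) > radius for t in taboo)]
        if not admissible:
            continue
        x = min(admissible, key=f)    # best non-taboo neighbor (may be uphill)
        taboo = (taboo + [x.copy()])[-memory:]   # finite taboo list
        if f(x) < f(best):
            best = x.copy()
    return best

rastrigin = lambda v: 10 * v.size + np.sum(v**2 - 10 * np.cos(2 * np.pi * v))
print(taboo_search(rastrigin, x0=[3.0, -2.5]))   # approaches the origin
```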
Using Simple and Complex Growth Models to Articulate Developmental Change: Matching Theory to Method
ERIC Educational Resources Information Center
Ram, Nilam; Grimm, Kevin
2007-01-01
Growth curve modeling has become a mainstay in the study of development. In this article we review some of the flexibility provided by this technique for describing and testing hypotheses about: (1) intraindividual change across multiple occasions of measurement, and (2) interindividual differences in intraindividual change. Through empirical…
Assistive Technologies for Second-Year Statistics Students Who Are Blind
ERIC Educational Resources Information Center
Erhardt, Robert J.; Shuman, Michael P.
2015-01-01
At Wake Forest University, a student who is blind enrolled in a second course in statistics. The course covered simple and multiple regression, model diagnostics, model selection, data visualization, and elementary logistic regression. These topics required that the student both interpret and produce three sets of materials: mathematical writing,…
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a multiplicative factor of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help, and perhaps resolve, the problem of bias export to sparse data streams.
Sonuga-Barke, Edmund J S
2005-06-01
Until recently, causal models of attention-deficit/hyperactivity disorder (ADHD) have tended to focus on the role of common, simple, core deficits. One such model highlights the role of executive dysfunction due to deficient inhibitory control resulting from disturbances in the frontodorsal striatal circuit and associated mesocortical dopaminergic branches. An alternative model presents ADHD as resulting from impaired signaling of delayed rewards arising from disturbances in motivational processes, involving frontoventral striatal reward circuits and mesolimbic branches terminating in the ventral striatum, particularly the nucleus accumbens. In the present article, these models are elaborated in two ways. First, they are each placed within their developmental context by consideration of the role of person x environment correlation and interaction and individual adaptation to developmental constraint. Second, their relationship to one another is reviewed in the light of recent data suggesting that delay aversion and executive functions might each make distinctive contributions to the development of the disorder. This provides an impetus for theoretical models built around the idea of multiple neurodevelopmental pathways. The possibility of neuropathologic heterogeneity in ADHD is likely to have important implications for the clinical management of the condition, potentially impacting on both diagnostic strategies and treatment options.
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple-camera tracking paradigm needs not only to spot unknown targets and track them, but also to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques: histogram, spatiogram and single Gaussian model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints: how long a fingerprint remains useful when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance, we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model, compared with the null hypothesis of <20%. Additionally, the performance of fingerprints stays well above the null hypothesis for as long as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to viewpoint and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
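The single Gaussian fingerprint itself is very simple to reproduce: summarize a target's pixel features by a mean vector and covariance, then score candidate detections by their distance to that distribution. The sketch below is hypothetical in its details (feature choice, matching metric), but shows the shape of the computation.

```python
import numpy as np

def fingerprint(features):
    # Single-Gaussian fingerprint: mean and covariance of the target's
    # per-pixel features (e.g., RGB color values) -- very compact to transmit.
    return features.mean(axis=0), np.cov(features, rowvar=False)

def match_score(fp, features):
    # Mean Mahalanobis distance of new observations to the stored fingerprint;
    # lower = better match. A sketch, not necessarily the paper's exact metric.
    mu, cov = fp
    inv = np.linalg.inv(cov + 1e-6 * np.eye(mu.size))
    d = features - mu
    return np.mean(np.sqrt(np.einsum('ij,jk,ik->i', d, inv, d)))

rng = np.random.default_rng(0)
target = rng.normal([120, 80, 60], 5.0, size=(500, 3))   # synthetic pixels
other = rng.normal([90, 110, 70], 5.0, size=(500, 3))
fp = fingerprint(target)
print(match_score(fp, target[:100]), match_score(fp, other[:100]))
```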
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
A simple Lagrangian forecast system with aviation forecast potential
NASA Technical Reports Server (NTRS)
Petersen, R. A.; Homan, J. H.
1983-01-01
A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Runyon, D. L.
1984-01-01
Rain depolarization is quantified through the cross-polarization discrimination (XPD) versus attenuation relationship. Such a relationship is derived by curve fitting to a rigorous theoretical model (the multiple scattering model) to determine the variation of the parameters involved. This simple isolation model (SIM) is compared to data from several earth-space link experiments and to three other models.
Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.
Leotta, Matthew J; Mundy, Joseph L
2011-07-01
In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.
Simple linear and multivariate regression models.
Rodríguez del Águila, M M; Benítez-Parejo, N
2011-01-01
In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.
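Since the abstract's examples are built in R, a parallel sketch in Python shows how small these models are computationally: both the simple and the multiple regression reduce to one least-squares solve (synthetic data, illustrative coefficients).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.8, size=n)

# Simple linear regression: y ~ x1
X = np.column_stack([np.ones(n), x1])
b_simple, *_ = np.linalg.lstsq(X, y, rcond=None)

# Multiple linear regression: y ~ x1 + x2
X = np.column_stack([np.ones(n), x1, x2])
b_multi, *_ = np.linalg.lstsq(X, y, rcond=None)

print(b_simple)   # ~ [1, 2] (x2 omitted, so slight bias is possible)
print(b_multi)    # ~ [1, 2, -0.5]
```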
Nonlinear multiplicative dendritic integration in neuron and network models
Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si
2013-01-01
Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can heavily depend on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. Here we suggest a new, biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
NASA Astrophysics Data System (ADS)
Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki
2014-05-01
Biogeochemical models quantify the material and energy flux exchanges between biosphere, atmosphere and soil; however, there is still considerable uncertainty underpinning model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different space and time scales. We calibrated a simplified ecosystem process model, PRELES, to data from multiple sites. Our objective was to compare a multi-site calibration with site-specific calibrations, in order to test whether PRELES is a model of general applicability and how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at nine sites in Finland and Sweden were used in the study; half of the dataset was used for model calibrations and half for the comparative analyses. Ten BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then nine BMCs were carried out, one for each site, using output from the multi-site and the site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only at one site did the multi-site version underestimate water fluxes. Our study implies convergence of GPP and water processes in the boreal zone to the extent that their plausible prediction is possible with a simple model using a global parameterization.
Using the Graded Response Model to Control Spurious Interactions in Moderated Multiple Regression
ERIC Educational Resources Information Center
Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W.
2012-01-01
Recent simulation research has demonstrated that using simple raw scores to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…
Helping Students Assess the Relative Importance of Different Intermolecular Interactions
ERIC Educational Resources Information Center
Jasien, Paul G.
2008-01-01
A semi-quantitative model has been developed to estimate the relative effects of dispersion, dipole-dipole interactions, and H-bonding on the normal boiling points ("T[subscript b]") for a subset of simple organic systems. The model is based upon a statistical analysis using multiple linear regression on a series of straight-chain organic…
Interpretation of commonly used statistical regression models.
Kasza, Jessica; Wolfe, Rory
2014-01-01
A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
Testing the Structure of Hydrological Models using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, B.; Muttil, N.
2009-04-01
Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.
Pastor, D; Amaya, W; García-Olcina, R; Sales, S
2007-07-01
We present a simple theoretical model of, and experimental verification for, the vanishing of the autocorrelation peak due to wavelength detuning in the coding-decoding process of coherent direct-sequence optical code-division multiple-access systems based on a superstructured fiber Bragg grating. Moreover, the detuning effect has been explored to provide an additional degree of multiplexing and/or optical code tuning.
The Forbes 400, the Pareto power-law and efficient markets
NASA Astrophysics Data System (ADS)
Klass, O. S.; Biham, O.; Levy, M.; Malcai, O.; Solomon, S.
2007-01-01
Statistical regularities at the top end of the wealth distribution in the United States are examined using the Forbes 400 lists of richest Americans, published between 1988 and 2003. It is found that the wealths are distributed according to a power-law (Pareto) distribution. This result is explained using a simple stochastic model of multiple investors that incorporates the efficient market hypothesis as well as the multiplicative nature of financial market fluctuations.
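The mechanism is easy to reproduce numerically: multiplicative random returns plus a lower barrier (relative to mean wealth) generate a Pareto tail. The sketch below is a generic model of this family with illustrative parameters, not the paper's calibrated version.

```python
import numpy as np

rng = np.random.default_rng(7)
n, sweeps, floor = 10_000, 1_000, 0.3  # agents, updates, wealth floor (of mean)
w = np.ones(n)
for _ in range(sweeps):
    w *= rng.lognormal(mean=0.0, sigma=0.1, size=n)  # multiplicative returns
    w = np.maximum(w, floor * w.mean())              # lower barrier
w /= w.mean()

# Crude Hill-style estimate of the Pareto exponent from the top 1%
tail = np.sort(w)[-n // 100:]
alpha = 1.0 / np.mean(np.log(tail[1:] / tail[0]))
print(f"estimated Pareto exponent ~ {alpha:.2f}")
```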
NASA Astrophysics Data System (ADS)
Guha, Anirban
2017-11-01
Theoretical studies on linear shear instabilities, as well as different kinds of wave interactions, often use simple velocity and/or density profiles (e.g. constant, piecewise) to obtain good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model for a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into the nonlinear domain using a vortex method. Making use of the unsteady Bernoulli equation in the presence of linear shear, and extending the Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general, and has allowed us to simulate diverse problems that can be reduced to a minimal system of interacting waves, e.g. spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problems like Bragg resonance. We found that the minimal models capture key nonlinear features (e.g. wave-breaking features like cusp formation and roll-ups) which are observed in experiments and/or extensive simulations with smooth, realistic profiles.
Plessis, Anne; Hafemeister, Christoph; Wilkins, Olivia; Gonzaga, Zennia Jean; Meyer, Rachel Sarah; Pires, Inês; Müller, Christian; Septiningsih, Endang M; Bonneau, Richard; Purugganan, Michael
2015-11-26
Plants rely on transcriptional dynamics to respond to multiple climatic fluctuations and contexts in nature. We analyzed the genome-wide gene expression patterns of rice (Oryza sativa) growing in rainfed and irrigated fields during two distinct tropical seasons and determined simple linear models that relate transcriptomic variation to climatic fluctuations. These models combine multiple environmental parameters to account for patterns of expression in the field of co-expressed gene clusters. We examined the similarities of our environmental models between tropical and temperate field conditions, using previously published data. We found that field type and macroclimate had broad impacts on transcriptional responses to environmental fluctuations, especially for genes involved in photosynthesis and development. Nevertheless, variation in solar radiation and temperature at the timescale of hours had reproducible effects across environmental contexts. These results provide a basis for broad-based predictive modeling of plant gene expression in the field.
Internalizing Trajectories in Young Boys and Girls: The Whole Is Not a Simple Sum of Its Parts
ERIC Educational Resources Information Center
Carter, Alice S.; Godoy, Leandra; Wagmiller, Robert L.; Veliz, Philip; Marakovitz, Susan; Briggs-Gowan, Margaret J.
2010-01-01
There is support for a differentiated model of early internalizing emotions and behaviors, yet researchers have not examined the course of multiple components of an internalizing domain across early childhood. In this paper we present growth models for the Internalizing domain of the Infant-Toddler Social and Emotional Assessment and its component…
Time delays, population, and economic development
NASA Astrophysics Data System (ADS)
Gori, Luca; Guerrini, Luca; Sodini, Mauro
2018-05-01
This research develops an augmented Solow model with population dynamics and time delays. The model produces either a single stationary state or multiple stationary states (able to characterise different development regimes). The existence of time delays may cause persistent fluctuations in both economic and demographic variables. In addition, the work identifies in a simple way the reasons why economics affects demographics and vice versa.
A simple optical model to estimate suspended particulate matter in Yellow River Estuary.
Qiu, Zhongfeng
2013-11-18
The distribution of the suspended particulate matter (SPM) concentration is a key issue for analyzing deposition and erosion in the estuary and for evaluating material fluxes from river to sea. Satellite remote sensing is a useful tool to investigate the spatial variation of SPM concentration in estuarine zones. However, algorithm development and validation for SPM concentration in the Yellow River Estuary (YRE) have seldom been performed, and our knowledge of the quality of SPM retrievals there is therefore poor. In this study, we developed a new simple optical model to estimate SPM concentration in the YRE by specifying optimal wavelength ratios (600-710 nm)/(530-590 nm), based on observations from five cruises between 2004 and 2011. The simple optical model was carefully calibrated, and optimal band ratios were selected for application to multiple sensors: 678/551 for the Moderate Resolution Imaging Spectroradiometer (MODIS), 705/560 for the Medium Resolution Imaging Spectrometer (MERIS) and 680/555 for the Geostationary Ocean Color Imager (GOCI). With the simple optical model, the relative percentage difference and the mean absolute error were 35.4% and 15.6 g m(-3), respectively, for MODIS, 42.2% and 16.3 g m(-3) for MERIS, and 34.2% and 14.7 g m(-3) for GOCI, based on an independent validation data set. Our results showed good estimation precision for SPM concentration using the new simple optical model, in contrast with the poor estimates derived from existing empirical models. Provided an atmospheric correction scheme is available for the satellite imagery, our simple model could be used for quantitative monitoring of SPM concentrations in the YRE.
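The retrieval itself is a band-ratio power law that can be calibrated by log-log least squares against in-situ matchups; the following sketch uses synthetic data and illustrative coefficients rather than the paper's calibrated values.

```python
import numpy as np

def spm_from_ratio(r_red, r_green, a=12.0, b=2.5):
    # Band-ratio model: SPM [g m^-3] as a power function of red/green
    # reflectance (e.g., MODIS 678/551). a, b are illustrative.
    return a * (r_red / r_green) ** b

def calibrate(ratio, spm_obs):
    # Fit log(SPM) = log(a) + b*log(ratio) by least squares
    slope, intercept = np.polyfit(np.log(ratio), np.log(spm_obs), 1)
    return np.exp(intercept), slope   # a, b

rng = np.random.default_rng(3)
ratio = rng.uniform(0.5, 2.0, 100)                      # synthetic matchups
spm_obs = 12.0 * ratio**2.5 * rng.lognormal(0.0, 0.1, 100)
print(calibrate(ratio, spm_obs))                        # recovers ~ (12, 2.5)
```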
Simple models for estimating local removals of timber in the northeast
David N. Larsen; David A. Gansner
1975-01-01
Provides a practical method of estimating subregional removals of timber and demonstrates its application to a typical problem. Stepwise multiple regression analysis is used to develop equations for estimating removals of softwood, hardwood, and all timber from selected characteristics of socioeconomic structure.
Gravitational decoupling and the Picard-Lefschetz approach
NASA Astrophysics Data System (ADS)
Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William
2018-01-01
In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as Mp→∞ . This implies that in the Euclidean framework, there is no systematic expansion in powers of GN for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified-spheres path-tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path-tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157
Developmental dissociation in the neural responses to simple multiplication and subtraction problems
Prado, Jérôme; Mutreja, Rachna; Booth, James R.
2014-01-01
Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a cross-sectional design to measure the neural activity associated with single-digit subtraction and multiplication in 34 children from 2nd to 7th grade. The neural correlates of language and numerical processing were also identified in each child via localizer scans. Although multiplication and subtraction were undistinguishable in terms of behavior, we found a striking developmental dissociation in their neural correlates. First, we observed grade-related increases of activity for multiplication, but not for subtraction, in a language-related region of the left temporal cortex. Second, we found grade-related increases of activity for subtraction, but not for multiplication, in a region of the right parietal cortex involved in the procedural manipulation of numerical quantities. The present results suggest that fluency in simple arithmetic in children may be achieved by both increasing reliance on verbal retrieval and by greater use of efficient quantity-based procedures, depending on the operation. PMID:25089323
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the general expression of the nonlinear autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvements contributed by both the newly introduced and the existing variables, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of goodness of fit or significance testing. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable and applicable to practical engineering.
Laser Pulse-Stretching Using Multiple Optical Ring-Cavities
NASA Technical Reports Server (NTRS)
Kojima, Jun; Nguyen, Quang-Viet; Lee, Chi-Ming (Technical Monitor)
2002-01-01
We describe a simple and passive nanosecond-long (ns-long) laser 'pulse-stretcher' using multiple optical ring-cavities. We present a model of the pulse-stretching process for an arbitrary number of optical ring-cavities. Using the model, we optimize the design of a pulse-stretcher for use in a spontaneous Raman scattering excitation system that avoids laser-induced plasma spark problems. From the optimized design, we then experimentally demonstrate and verify the model with a 3-cavity pulse-stretcher system that converts a 1000 mJ, 8.4 ns-long input laser pulse into an approximately 75 ns-long (FWHM) output laser pulse with a peak power reduction of 0.10X, and an 83% efficiency.
Road simulation for four-wheel vehicle whole input power spectral density
NASA Astrophysics Data System (ADS)
Wang, Jiangbo; Qiang, Baomin
2017-05-01
The vibration of a running vehicle comes mainly from the road and influences ride performance, so simulation of the road roughness power spectral density is of great significance for analyzing automobile suspension vibration system parameters and evaluating ride comfort. Firstly, based on the mathematical model of the road roughness power spectral density, this paper establishes the integrated-white-noise method for generating random roads. Then, in the MATLAB/Simulink environment, following the usual progression in automobile suspension research from a simple two-degree-of-freedom single-wheel vehicle model to complex multiple-degree-of-freedom vehicle models, a simple single-excitation input simulation model is built. Finally, a spectrum matrix is used to build a whole-vehicle excitation input simulation model. This simulation method is based on reliable and accurate mathematical theory and can be applied to random road simulation for any specified spectrum, providing a road excitation model and a foundation for vehicle ride performance research and vibration simulation.
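The integrated-white-noise idea reduces to a first-order stochastic filter: road displacement is white noise shaped by the road-class PSD parameters. A hedged single-wheel sketch follows (one common parameterization; the roughness coefficient and cutoff frequency are illustrative class values).

```python
import numpy as np

def road_profile(v=20.0, G0=256e-6, n0=0.1, f0=0.011, dt=1e-3, T=10.0, seed=0):
    # q'(t) = -2*pi*f0*v*q + 2*pi*n0*sqrt(G0*v)*w(t), w = unit white noise
    # v: speed [m/s]; G0: roughness coefficient at spatial frequency n0 [1/m]
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    q = np.zeros(n)
    a = 2 * np.pi * f0 * v                 # decay rate of the filter
    b = 2 * np.pi * n0 * np.sqrt(G0 * v)   # white-noise gain
    for k in range(1, n):                  # Euler-Maruyama integration
        q[k] = q[k-1] - a * q[k-1] * dt + b * np.sqrt(dt) * rng.normal()
    return q

q = road_profile()
print(f"RMS road displacement ~ {1000 * q.std():.1f} mm")
```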
Suppression Situations in Multiple Linear Regression
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…
Improving Memory for Optimization and Learning in Dynamic Environments
2011-07-01
The algorithm uses simple, incremental clustering to separate solutions into memory entries. The cluster centers are used as the models in the memory. This is… entire days of traffic with realistic traffic demands and turning ratios on a 32-intersection network modeled on downtown Pittsburgh, Pennsylvania…
Marcus V. Warwell; Gerald E. Rehfeldt; Nicholas L. Crookston
2006-01-01
The Random Forests multiple regression tree was used to develop an empirically-based bioclimate model for the distribution of Pinus albicaulis (whitebark pine) in western North America, latitudes 31° to 51° N and longitudes 102° to 125° W. Independent variables included 35 simple expressions of temperature and precipitation and their interactions....
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
A simple prescription for simulating and characterizing gravitational arcs
NASA Astrophysics Data System (ADS)
Furlanetto, C.; Santiago, B. X.; Makler, M.; de Bom, C.; Brandt, C. H.; Neto, A. F.; Ferreira, P. C.; da Costa, L. N.; Maia, M. A. G.
2013-01-01
Simple models of gravitational arcs are crucial for simulating large samples of these objects with full control of the input parameters. These models also provide approximate and automated estimates of the shape and structure of the arcs, which are necessary for detecting and characterizing these objects on massive wide-area imaging surveys. We here present and explore the ArcEllipse, a simple prescription for creating objects with a shape similar to gravitational arcs. We also present PaintArcs, which is a code that couples this geometrical form with a brightness distribution and adds the resulting object to images. Finally, we introduce ArcFitting, which is a tool that fits ArcEllipses to images of real gravitational arcs. We validate this fitting technique using simulated arcs and apply it to CFHTLS and HST images of tangential arcs around clusters of galaxies. Our simple ArcEllipse model for the arc, associated to a Sérsic profile for the source, recovers the total signal in real images typically within 10%-30%. The ArcEllipse+Sérsic models also automatically recover visual estimates of length-to-width ratios of real arcs. Residual maps between data and model images reveal the incidence of arc substructure. They may thus be used as a diagnostic for arcs formed by the merging of multiple images. The incidence of these substructures is the main factor that prevents ArcEllipse models from accurately describing real lensed systems.
Ester, Edward F.; Deering, Sean
2014-01-01
Spatial attention has been postulated to facilitate perceptual processing via several different mechanisms. For instance, attention can amplify neural responses in sensory areas (sensory gain), modulate neural variability (noise modulation), or alter the manner in which sensory signals are selectively read out by postsensory decision mechanisms (efficient readout). Even in the context of simple behavioral tasks, it is unclear how well each of these mechanisms can account for the relationship between attention-modulated changes in behavior and neural activity because few studies have systematically mapped changes between stimulus intensity, attentional focus, neural activity, and behavioral performance. Here, we used a combination of psychophysics, event-related potentials (ERPs), and quantitative modeling to explicitly link attention-related changes in perceptual sensitivity with changes in the ERP amplitudes recorded from human observers. Spatial attention led to a multiplicative increase in the amplitude of an early sensory ERP component (the P1, peaking ∼80–130 ms poststimulus) and in the amplitude of the late positive deflection component (peaking ∼230–330 ms poststimulus). A simple model based on signal detection theory demonstrates that these multiplicative gain changes were sufficient to account for attention-related improvements in perceptual sensitivity, without a need to invoke noise modulation. Moreover, combining the observed multiplicative gain with a postsensory readout mechanism resulted in a significantly poorer description of the observed behavioral data. We conclude that, at least in the context of relatively simple visual discrimination tasks, spatial attention modulates perceptual sensitivity primarily by modulating the gain of neural responses during early sensory processing. PMID:25274817
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
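The centerpiece of this class of techniques is the spike-triggered average (STA): with a Gaussian white-noise stimulus, averaging the stimulus segments preceding each spike recovers the neuron's linear filter even through an output nonlinearity. A synthetic sketch (filter, nonlinearity and rates are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 100_000, 20
stim = rng.normal(size=T)                             # white-noise stimulus
k_true = np.exp(-np.arange(d) / 5.0) * np.sin(np.arange(d) / 2.0)

# Simulate a linear-nonlinear-Poisson (LNP) neuron
X = np.array([stim[t - d:t] for t in range(d, T)])    # stimulus history rows
drive = X @ k_true[::-1]                              # most recent sample last
rate = 0.1 * np.log1p(np.exp(drive))                  # softplus nonlinearity
spikes = rng.poisson(rate)

sta = (spikes[:, None] * X).sum(axis=0) / spikes.sum()  # spike-triggered avg
print(np.corrcoef(sta, k_true[::-1])[0, 1])             # ~1: filter recovered
```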
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Educational Production and Teacher Preferences
ERIC Educational Resources Information Center
Bosworth, Ryan; Caliendo, Frank
2007-01-01
We develop a simple model of teacher behavior that offers a solution to the ''class size puzzle'' and is useful for analyzing the potential effects of the No Child Left Behind Act. When teachers must allocate limited classroom time between multiple instructional methods, rational teachers may respond to reductions in class size by reallocating…
Evidence integration in model-based tree search
Solway, Alec; Botvinick, Matthew M.
2015-01-01
Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action. PMID:26324932
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low-order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials, and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
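A hedged sketch of the paradigm's main loop follows: parallel linear sub-models, ranked on a sliding window of recent errors, with RLS updates applied to the selected sub-models. For brevity the combination step is a plain average rather than the paper's constrained least-squares weighting, and all sizes are illustrative.

```python
import numpy as np

class SubModel:
    def __init__(self, dim, lam=0.98):
        self.w, self.P, self.lam = np.zeros(dim), 1e3 * np.eye(dim), lam
    def predict(self, x):
        return self.w @ x
    def update(self, x, y):                    # standard RLS step
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)
        self.w = self.w + k * (y - self.w @ x)
        self.P = (self.P - np.outer(k, Px)) / self.lam

def select_and_predict(models, errs, x, M=2, window=20):
    recent = [np.mean(e[-window:]) if e else np.inf for e in errs]
    best = np.argsort(recent)[:M]              # M best on recent window
    return np.mean([models[i].predict(x) for i in best]), best

rng = np.random.default_rng(0)
models = [SubModel(dim=3) for _ in range(5)]   # K = 5 candidates
errs = [[] for _ in models]
for t in range(300):
    x = rng.normal(size=3)
    y = np.sin(t / 50.0) * x[0] + 0.5 * x[1]   # slowly drifting system
    y_hat, best = select_and_predict(models, errs, x)
    for i, m in enumerate(models):
        errs[i].append((y - m.predict(x)) ** 2)
        if i in best:
            m.update(x, y)                     # adapt only selected sub-models
```

Note that all five sub-models here share one structure for brevity; in practice the K candidates would differ (e.g., in lag structure) so that different sub-models win in different operating regimes.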
Lytras, Theodore; Georgakopoulou, Theano; Tsiodras, Sotirios
2018-01-01
Greece is currently experiencing a large measles outbreak, in the context of multiple similar outbreaks across Europe. We devised and applied a modified chain-binomial epidemic model, requiring very simple data, to estimate the transmission parameters of this outbreak. Model results indicate sustained measles transmission among the Greek Roma population, necessitating a targeted mass vaccination campaign to halt further spread of the epidemic. Our model may be useful for other countries facing similar measles outbreaks. PMID:29717695
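The chain-binomial structure is compact enough to state in a few lines: in each generation every susceptible escapes infection from each infective independently, so new infections are binomial. This Reed-Frost-style sketch uses illustrative parameters, not the values fitted to the Greek outbreak.

```python
import numpy as np

def chain_binomial(S0=1000, I0=5, p=0.002, seed=0):
    # p: per-generation probability that one infective infects one susceptible
    rng = np.random.default_rng(seed)
    S, I, traj = S0, I0, [I0]
    while I > 0:
        p_inf = 1.0 - (1.0 - p) ** I      # P(a susceptible gets infected)
        I = rng.binomial(S, p_inf)        # next generation of cases
        S -= I
        traj.append(I)
    return traj

print(chain_binomial())                   # cases per generation until fade-out
```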
Age estimation standards for a Western Australian population using the coronal pulp cavity index.
Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel
2013-09-10
Age estimation is a vital aspect of creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation of living individuals is required in cases of refugees, asylum seekers and human trafficking, and to ascertain the age of criminal responsibility. Robust methods that are simple, non-invasive and ethically viable are therefore required. The aim of the present study is to test the reliability and applicability of the coronal pulp cavity index method for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired-sample t-tests to assess bilateral asymmetry, followed by simple linear and multiple regressions to develop age estimation models. The most accurate simple linear regression model was based on the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably, and the most accurate model used the bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population, and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentration versus time for each animal compartment (organ), flow parameters were estimated by employing a least-squares procedure whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and with experimental data.
Multiple collision effects on the antiproton production by high energy proton (100 GeV - 1000 GeV)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Hiroshi; Powell, J.
Antiproton production rates which take into account multiple collisions are calculated using a simple model. Methods to reduce capture of the produced antiprotons by the target are discussed, including the geometry of the target and the use of a high-intensity laser. Antiproton production increases substantially above 150 GeV proton incident energy. The yield increases almost linearly with incident energy, alleviating space-charge problems in the high-current accelerator that produces large amounts of antiprotons.
A multiple-objective optimal exploration strategy
Christakos, G.; Olea, R.A.
1988-01-01
Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity and leads to a step-by-step procedure. © 1988.
Preacher, Kristopher J; Hayes, Andrew F
2008-08-01
Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.
Operator Priming and Generalization of Practice in Adults' Simple Arithmetic
ERIC Educational Resources Information Center
Chen, Yalin; Campbell, Jamie I. D.
2016-01-01
There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication,…
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
Generalized estimators of avian abundance from count survey data
Royle, J. Andrew
2004-01-01
I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, or unsampled locations. Two brief examples are given, the first involving simple point counts and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
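For the simple point-count protocol, one concrete instance of this framework is the binomial-Poisson mixture: local abundance N_i ~ Poisson(lambda), counts y_ij | N_i ~ Binomial(N_i, p), with the likelihood summing over the unobserved N_i. A minimal Python sketch of maximum-likelihood estimation under that structure, on simulated data with illustrative parameters:

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate point counts: local abundance N_i ~ Poisson(lam); on each of J
# visits every bird is detected independently with probability p.
R, J, lam_true, p_true = 100, 3, 4.0, 0.5
N = rng.poisson(lam_true, R)
y = rng.binomial(N[:, None], p_true, size=(R, J))

def neg_log_lik(theta, y, n_max=60):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    k = np.arange(n_max + 1)             # support for the latent abundances
    prior = poisson.pmf(k, lam)          # P(N = k)
    ll = 0.0
    for site in y:                       # marginalize over latent N per site
        lik_k = np.prod(binom.pmf(site[:, None], k, p), axis=0)
        ll += np.log(np.dot(lik_k, prior))
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(f"lambda_hat = {lam_hat:.2f}, p_hat = {p_hat:.2f}")
```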
Modeling the growth and branching of plants: A simple rod-based model
NASA Astrophysics Data System (ADS)
Faruk Senan, Nur Adila; O'Reilly, Oliver M.; Tresierras, Timothy N.
A rod-based model for plant growth and branching is developed in this paper. Specifically, Euler's theory of the elastica is modified to accommodate growth and remodeling. In addition, branching is characterized using a configuration force and evolution equations are postulated for the flexural stiffness and intrinsic curvature. The theory is illustrated with examples of multiple static equilibria of a branched plant and the remodeling and tip growth of a plant stem under gravitational loading.
Laplace transform analysis of a multiplicative asset transfer model
NASA Astrophysics Data System (ADS)
Sokolov, Andrey; Melatos, Andrew; Kieu, Tien
2010-07-01
We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver's wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f∈(0,1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable measure of disorder in a multiplicative asset transfer system (it can behave non-monotonically, for example) and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
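The agent-based process that the Laplace-transform analysis supersedes is nonetheless easy to simulate for sanity checks at small sizes. A minimal Python sketch of the transfer rule, with a Gini coefficient as an inequality summary (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Agent-based version of the transfer model: at each step a random giver
# hands a fixed fraction f of its wealth to a random receiver.
n_agents, n_steps, f = 1000, 200_000, 0.1
w = np.ones(n_agents)            # equal initial wealth; total is conserved

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i != j:
        dw = f * w[i]
        w[i] -= dw
        w[j] += dw

# Gini coefficient of the (approximately) steady-state wealth distribution.
ws = np.sort(w)
gini = (2 * np.arange(1, n_agents + 1) - n_agents - 1).dot(ws) / (n_agents * ws.sum())
print(f"f = {f}: Gini = {gini:.3f}")
```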
Adaptive behaviour and multiple equilibrium states in a predator-prey model.
Pimenov, Alexander; Kelly, Thomas C; Korobeinikov, Andrei; O'Callaghan, Michael J A; Rachinskii, Dmitrii
2015-05-01
There is evidence that multiple stable equilibrium states are possible in real-life ecological systems. Phenomenological mathematical models which exhibit such properties can be constructed rather straightforwardly; for a predator-prey system, for instance, this result can be achieved through the use of a non-monotonic functional response for the predator. However, while the formal formulation of such a model is not a problem, the biological justification for such functional responses and models is usually inconclusive. In this note, we explore the conjecture that a multitude of equilibrium states can be caused by an adaptation of animal behaviour to changes in environmental conditions. To verify this hypothesis, we consider a simple predator-prey model that is a straightforward extension of the classic Lotka-Volterra predator-prey model. In this model, we make the intuitively transparent assumption that the prey can change its mode of behaviour in response to the pressure of predation, choosing either "safe" or "risky" ("business as usual") behaviour. To avoid a situation where one of the modes gives an absolute advantage, we introduce the concept of the "cost of a policy" into the model. A simple conceptual two-dimensional predator-prey model, which is minimal with this property and does not rely on non-monotonic functional responses, higher dimensionality, or behaviour change for the predator, exhibits two stable coexisting equilibrium states with basins of attraction separated by the separatrix of a saddle point. Copyright © 2015 Elsevier Inc. All rights reserved.
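A schematic Python sketch of this kind of adaptive-prey model is given below. The equations, the sigmoidal "exposure" function, and the parameters are illustrative stand-ins, not the authors' formulation, and actually locating two coexisting stable equilibria requires tuning the parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic adaptive-prey Lotka-Volterra variant (not the paper's exact
# equations): prey reduce their exposure to predation when predator density
# is high ("safe" mode), at a cost c to their growth rate.
r, a, b, d, c, k = 1.0, 1.0, 0.5, 0.5, 0.3, 2.0

def exposure(y):
    """Fraction of prey behaving riskily; decreases with predator density y."""
    return 1.0 / (1.0 + (y / k) ** 2)

def rhs(t, z):
    x, y = z                                   # prey, predator
    e = exposure(y)
    dx = x * (r - c * (1 - e)) - a * e * x * y # growth cost of safety vs predation
    dy = b * a * e * x * y - d * y
    return [dx, dy]

# Integrate from two initial conditions; with suitable parameters such models
# can settle on different coexisting equilibria (basins split by a separatrix).
for z0 in ([1.0, 0.2], [0.2, 2.0]):
    sol = solve_ivp(rhs, (0, 500), z0, rtol=1e-8, atol=1e-10)
    print(f"start {z0} -> end state {np.round(sol.y[:, -1], 3)}")
```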
Whittington, James C. R.; Bogacz, Rafal
2017-01-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
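A minimal Python sketch of the general idea, not the paper's exact formulation: value nodes relax to minimise local prediction errors while input and output are clamped, and each weight is then updated from the error at its own layer times its presynaptic activity (local and Hebbian). The toy XOR task and all parameters are illustrative, and whether this particular run separates XOR cleanly depends on the initialisation and learning rate.

```python
import numpy as np

rng = np.random.default_rng(4)
f = np.tanh
df = lambda x: 1 - np.tanh(x) ** 2

# Layers: 0 (input), 1 (hidden), 2 (output); W[l] predicts layer l from l-1.
sizes = [2, 8, 1]
W = [None] + [rng.normal(0, 0.5, (sizes[l], sizes[l - 1])) for l in (1, 2)]

def train_step(x_in, target, n_relax=50, dt=0.1, eta=0.05):
    x = [x_in, np.zeros(sizes[1]), target]     # clamp input and output nodes
    for _ in range(n_relax):                   # relax hidden activity to
        e1 = x[1] - W[1] @ f(x[0])             # minimise prediction errors
        e2 = x[2] - W[2] @ f(x[1])
        x[1] += dt * (-e1 + df(x[1]) * (W[2].T @ e2))
    e1 = x[1] - W[1] @ f(x[0])                 # final local errors
    e2 = x[2] - W[2] @ f(x[1])
    W[1] += eta * np.outer(e1, f(x[0]))        # Hebbian: error x presynaptic
    W[2] += eta * np.outer(e2, f(x[1]))

# Learn XOR as a toy supervised task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)
for epoch in range(2000):
    for xi, yi in zip(X, Y):
        train_step(xi, yi)

for xi in X:
    print(xi, "->", np.round(W[2] @ f(W[1] @ f(xi)), 2))  # feedforward prediction
```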
Childhood Maltreatment and PTSD: Spiritual Well-Being and Intimate Partner Violence as Mediators
Zhang, Huaiyu; Pittman, Delishia M.; Lamis, Dorian A.; Fischer, Nicole L.; Schwenke, Tomina J.; Carr, Erika R.; Shah, Sanjay; Kaslow, Nadine J.
2016-01-01
Childhood maltreatment places individuals, including African American women who are undereducated and economically disadvantaged, at risk for developing posttraumatic stress disorder (PTSD) symptoms. Participants were 192 African American women with a history in the prior year of both a suicide attempt and intimate partner violence (IPV) exposure. They were recruited from a public hospital that provides medical and mental health treatment to mostly low-income patients. A simple mediator model was used to examine whether existential well-being (sense of purpose) and/or religious well-being (relationship with God) mediated the link between childhood maltreatment and adult PTSD symptoms. Sequential multiple mediator models determined if physical and nonphysical IPV enhanced our understanding of the mediational association among the aforementioned variables. Findings suggest that existential well-being mediated the association between childhood maltreatment and adult PTSD symptoms in a simple mediator model, and existential well-being and recent nonphysical IPV served as sequential multiple mediators of this link. However, religious well-being and physical IPV were not significant mediators. Findings underscore the importance of enhancing existential well-being in the treatment of suicidal African American women with a history of childhood maltreatment and IPV. PMID:26989343
Deciding the liveness for a subclass of weighted Petri nets based on structurally circular wait
NASA Astrophysics Data System (ADS)
Liu, GuanJun; Chen, LiJing
2016-05-01
Weighted Petri nets as a kind of formal language are widely used to model and verify discrete event systems related to resource allocation, such as flexible manufacturing systems. The System of Simple Sequential Processes with Multi-Resources (S3PMR), a subclass of weighted Petri nets and an important extension of the well-known System of Simple Sequential Processes with Resources, can model many discrete event systems in which (1) multiple processes may run in parallel and (2) each execution step of each process may use multiple units from multiple resource types. This paper gives a necessary and sufficient condition for the liveness of S3PMR. A new structural concept called Structurally Circular Wait (SCW) is proposed for S3PMR, and a Blocking Marking (BM) associated with an SCW is defined. It is proven that a marked S3PMR is live if and only if each SCW has no BM. We use an example of a multi-processor system-on-chip to show that SCW and BM can precisely characterise the (partial) deadlocks of S3PMR, and two further examples to show the advantages of SCW in preventing deadlocks. These results are significant for further research on the deadlock problem.
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
An entropic barriers diffusion theory of decision-making in multiple alternative tasks
Sigman, Mariano; Cecchi, Guillermo A.
2018-01-01
We present a theory of decision-making in the presence of multiple choices that departs from traditional approaches by explicitly incorporating entropic barriers in a stochastic search process. We analyze response time data from an on-line repository of 15 million blitz chess games, and show that our model fits not just the mean and variance, but the entire response time distribution (over several response-time orders of magnitude) at every stage of the game. We apply the model to show that (a) higher cognitive expertise corresponds to the exploration of more complex solution spaces, and (b) reaction times of users at an on-line buying website can be similarly explained. Our model can be seen as a synergy between diffusion models used to model simple two-choice decision-making and planning agents in complex problem solving. PMID:29499036
A PDP model of the simultaneous perception of multiple objects
NASA Astrophysics Data System (ADS)
Henderson, Cynthia M.; McClelland, James L.
2011-06-01
Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.
Plessis, Anne; Hafemeister, Christoph; Wilkins, Olivia; Gonzaga, Zennia Jean; Meyer, Rachel Sarah; Pires, Inês; Müller, Christian; Septiningsih, Endang M; Bonneau, Richard; Purugganan, Michael
2015-01-01
Plants rely on transcriptional dynamics to respond to multiple climatic fluctuations and contexts in nature. We analyzed the genome-wide gene expression patterns of rice (Oryza sativa) growing in rainfed and irrigated fields during two distinct tropical seasons and determined simple linear models that relate transcriptomic variation to climatic fluctuations. These models combine multiple environmental parameters to account for patterns of expression in the field of co-expressed gene clusters. We examined the similarities of our environmental models between tropical and temperate field conditions, using previously published data. We found that field type and macroclimate had broad impacts on transcriptional responses to environmental fluctuations, especially for genes involved in photosynthesis and development. Nevertheless, variation in solar radiation and temperature at the timescale of hours had reproducible effects across environmental contexts. These results provide a basis for broad-based predictive modeling of plant gene expression in the field. DOI: http://dx.doi.org/10.7554/eLife.08411.001 PMID:26609814
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1986-01-01
Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
Testing the structure of a hydrological model using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, Benny; Muttil, Nitin
2011-01-01
Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, watertable depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.
NASA Technical Reports Server (NTRS)
Raup, D. M.; Valentine, J. W.
1983-01-01
There is some indication that life may have originated readily under primitive earth conditions. If there were multiple origins of life, the result could have been a polyphyletic biota today. Using simple stochastic models for diversification and extinction, we conclude: (1) the probability of survival of life is low unless there are multiple origins, and (2) given survival of life and given as many as 10 independent origins of life, the odds are that all but one would have gone extinct, yielding the monophyletic biota we have now. The fact of the survival of our particular form of life does not imply that it was unique or superior.
Developmental Dissociation in the Neural Responses to Simple Multiplication and Subtraction Problems
ERIC Educational Resources Information Center
Prado, Jérôme; Mutreja, Rachna; Booth, James R.
2014-01-01
Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a…
USDA-ARS?s Scientific Manuscript database
Researchers from the University of Queensland of New South Wales provided guidance to designers regarding the hydraulic performance of embankment dam stepped spillways. Their research compares a number of high-quality physical model data sets from multiple laboratories, emphasizing the variability ...
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
Photometric studies of Saturn's ring and eclipses of the Galilean satellites
NASA Technical Reports Server (NTRS)
Brunk, W. E.
1972-01-01
Reliable data defining the photometric function of the Saturn ring system at visual wavelengths are interpreted in terms of a simple scattering model. To facilitate the analysis, new photographic photometry of the ring has been carried out and homogeneous measurements of the mean surface brightness are presented. The ring model adopted is a plane parallel slab of isotropically scattering particles; the single scattering albedo and the perpendicular optical thickness are both arbitrary. Results indicate that primary scattering is inadequate to describe the photometric properties of the ring: multiple scattering predominates for all angles of tilt with respect to the Sun and Earth. In addition, the scattering phase function of the individual particles is significantly anisotropic: they scatter preferentially towards the Sun. Photoelectric photometry of Ganymede during its eclipse by Jupiter indicates that neither a simple reflecting-layer model nor a semi-infinite homogeneous scattering model provides an adequate physical description of the Jupiter atmosphere.
ERIC Educational Resources Information Center
Muthen, Bengt
This paper investigates methods that avoid using multiple groups to represent the missing data patterns in covariance structure modeling, attempting instead to do a single-group analysis where the only action the analyst has to take is to indicate that data is missing. A new covariance structure approach developed by B. Muthen and G. Arminger is…
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Higher moments of multiplicity fluctuations in a hadron-resonance gas with exact conservation laws
NASA Astrophysics Data System (ADS)
Fu, Jing-Hua
2017-09-01
Higher moments of multiplicity fluctuations of hadrons produced in central nucleus-nucleus collisions are studied within the hadron-resonance gas model in the canonical ensemble. Exact conservation of three charges, baryon number, electric charge, and strangeness is enforced in the large volume limit. Moments up to the fourth order of various particles are calculated at CERN Super Proton Synchrotron, BNL Relativistic Heavy Ion Collider (RHIC), and CERN Large Hadron Collider energies. The asymptotic fluctuations within a simplified model with only one conserved charge in the canonical ensemble are discussed where simple analytical expressions for moments of multiplicity distributions can be obtained. Moments products of net-proton, net-kaon, and net-charge distributions in Au + Au collisions at RHIC energies are calculated. The pseudorapidity coverage dependence of net-charge fluctuation is discussed.
Multiple Consecutive Infections Might Explain the Lack of Protection by BCG
Cardona, Pere-Joan; Vilaplana, Cristina
2014-01-01
Although contacts between tuberculosis patients may result in multiple consecutive infections (MCI), no experimental animal models consider this fact when used in basic studies. Moreover, the current TB vaccine (BCG) has demonstrated a limited protection in humans. In this study we evaluate the effect of tuberculosis MCI by way of a simple mathematical analysis using data from the low dose aerosol murine experimental model. The results show that a higher number of, or shorter intervals between, multiple consecutive infections reduce the protective effect of BCG. This is due to both the increase in bacillary load at the stationary level of the infection, and the protective immune response induced by the infection itself. This factor must therefore be taken into account when designing new prophylactic strategies as candidate vaccines for the replacement of BCG. PMID:24740286
A Simple Illustration for the Need of Multiple Comparison Procedures
ERIC Educational Resources Information Center
Carter, Rickey E.
2010-01-01
Statistical adjustments to accommodate multiple comparisons are routinely covered in introductory statistical courses. The fundamental rationale for such adjustments, however, may not be readily understood. This article presents a simple illustration to help remedy this.
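The essence of the illustration can be reproduced in a few lines of Python: with m independent true-null t-tests each run at alpha = 0.05, the probability of at least one false rejection grows quickly with m, and a Bonferroni adjustment restores it (simulation; all values illustrative).

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(5)

# Familywise error rate when m independent true-null t-tests are each run at
# alpha = 0.05: the chance of at least one false rejection grows with m.
n_sims, n_obs, alpha = 10_000, 30, 0.05
df = n_obs - 1
for m in (1, 5, 20):
    x = rng.normal(size=(n_sims, m, n_obs))           # all null hypotheses true
    t = x.mean(axis=2) / (x.std(axis=2, ddof=1) / np.sqrt(n_obs))
    raw = np.any(np.abs(t) > t_dist.ppf(1 - alpha / 2, df), axis=1)
    bonf = np.any(np.abs(t) > t_dist.ppf(1 - alpha / (2 * m), df), axis=1)
    print(f"m={m:2d}: FWER unadjusted {raw.mean():.3f}, Bonferroni {bonf.mean():.3f}")
```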
Multiple transient memories in sheared suspensions: Robustness, structure, and routes to plasticity
NASA Astrophysics Data System (ADS)
Keim, Nathan C.; Paulsen, Joseph D.; Nagel, Sidney R.
2013-09-01
Multiple transient memories, originally discovered in charge-density-wave conductors, are a remarkable and initially counterintuitive example of how a system can store information about its driving. In this class of memories, a system can learn multiple driving inputs, nearly all of which are eventually forgotten despite their continual input. If sufficient noise is present, the system regains plasticity so that it can continue to learn new memories indefinitely. Recently, Keim and Nagel [Phys. Rev. Lett. 107, 010603 (2011)] showed how multiple transient memories could be generalized to a generic driven disordered system with noise, giving as an example simulations of a simple model of a sheared non-Brownian suspension. Here, we further explore simulation models of suspensions under cyclic shear, focusing on three main themes: robustness, structure, and overdriving. We show that multiple transient memories are a robust feature independent of many details of the model. The steady-state spatial distribution of the particles is sensitive to the driving algorithm; nonetheless, the memory formation is independent of such a change in particle correlations. Finally, we demonstrate that overdriving provides another means for controlling memory formation and retention.
Hoover, D R; Peng, Y; Saah, A J; Detels, R R; Day, R S; Phair, J P
A simple non-parametric approach is developed to simultaneously estimate net incidence and morbidity time from specific AIDS illnesses in populations at high risk for death from these illnesses and other causes. The disease-death process has four stages that can be recast as two sandwiching three-state multiple decrement processes. Non-parametric estimation of net incidence and morbidity time with error bounds is achieved from these sandwiching models through modification of methods from Aalen and Greenwood, and bootstrapping. An application to immunosuppressed HIV-1 infected homosexual men reveals that cytomegalovirus disease, Kaposi's sarcoma and Pneumocystis pneumonia are likely to occur and cause significant morbidity time.
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
Forest management and economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buongiorno, J.; Gilless, J.K.
1987-01-01
This volume provides a survey of quantitative methods, guiding the reader through formulation and analysis of models that address forest management problems. The authors use simple mathematics, graphics, and short computer programs to explain each method. Emphasizing applications, they discuss linear, integer, dynamic, and goal programming; simulation; network modeling; and econometrics, as these relate to problems of determining economic harvest schedules in even-aged and uneven-aged forests, the evaluation of forest policies, multiple-objective decision making, and more.
Simple and multiple linear regression: sample size considerations.
Hanley, James A
2016-11-01
The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
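A small Python sketch of those closed-form considerations, with assumed values for the residual SD and exposure variance: the first part backs out the n needed for a target slope standard error (the etiological genre), the second evaluates the profile-specific standard error for a given design (the clinical genre).

```python
import numpy as np

# Closed-form variance of the slope in simple linear regression:
#   Var(b1) = sigma^2 / (n * var_x),
# so the n needed for a target standard error is easy to back out.
sigma = 10.0      # residual SD of Y given X (assumed)
var_x = 4.0       # variance of the exposure X in the sample (assumed)
target_se = 0.5   # desired SE of the slope estimate (assumed)

n_needed = sigma**2 / (var_x * target_se**2)
print(f"n needed for SE(b1) <= {target_se}: about {int(np.ceil(n_needed))}")

# For profile-specific mean estimation, the SE at covariate profile x0 is
#   sqrt(sigma^2 * x0' (X'X)^{-1} x0), which depends on the whole design X.
rng = np.random.default_rng(6)
n = 100
X = np.column_stack([np.ones(n), rng.normal(0, np.sqrt(var_x), n)])
x0 = np.array([1.0, 3.0])                     # intercept + profile X = 3
se_profile = sigma * np.sqrt(x0 @ np.linalg.inv(X.T @ X) @ x0)
print(f"SE of estimated mean at X=3 with n={n}: {se_profile:.2f}")
```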
Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.
2011-01-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models, and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics use predictive modelling extensively and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tests several simple and complex regression models and validation schemes, produces unified reports, and offers the option to be integrated into more extensive studies. Additionally, such methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework to assist the initial exploration of predictive models and, with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378
Lee, Cameron C; Sheridan, Scott C
2018-07-01
Temperature-mortality relationships are nonlinear, time-lagged, and can vary depending on the time of year and geographic location, all of which limits the applicability of simple regression models in describing these associations. This research demonstrates the utility of an alternative method for modeling such complex relationships that has gained recent traction in other environmental fields: nonlinear autoregressive models with exogenous input (NARX models). All-cause mortality data and multiple temperature-based data sets were gathered from 41 different US cities, for the period 1975-2010, and subjected to ensemble NARX modeling. Models generally performed better in larger cities and during the winter season. Across the US, median absolute percentage errors were 10% (ranging from 4% to 15% in various cities), the average improvement in the r-squared over that of a simple persistence model was 17% (6-24%), and the hit rate for modeling spike days in mortality (>80th percentile) was 54% (34-71%). Mortality responded acutely to hot summer days, peaking at 0-2 days of lag before dropping precipitously, and there was an extended mortality response to cold winter days, peaking at 2-4 days of lag and dropping slowly and continuing for multiple weeks. Spring and autumn showed both of the aforementioned temperature-mortality relationships, but generally to a lesser magnitude than what was seen in summer or winter. When compared to distributed lag nonlinear models, NARX model output was nearly identical. These results highlight the applicability of NARX models for use in modeling complex and time-dependent relationships for various applications in epidemiology and environmental sciences. Copyright © 2018 Elsevier Inc. All rights reserved.
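A NARX model in this series-parallel (open-loop) training form is just a nonlinear regression on lagged outputs and lagged exogenous inputs. The Python sketch below illustrates that structure on surrogate data with scikit-learn's MLPRegressor; it is not the study's model, ensemble scheme, or mortality data. Closed-loop (parallel) operation would instead feed the model's own predictions back in as the autoregressive inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Toy daily series: "mortality" responds nonlinearly to temperature at lags
# 0-6 (illustrative surrogate data only).
n, lags = 3000, 7
temp = 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 3, n)
mort = 100 + 0.05 * (temp - 20) ** 2 + rng.normal(0, 2, n)

def design(y, x, lags):
    """NARX-style design: past y (autoregressive) plus lagged exogenous x."""
    rows = range(lags, len(y))
    X = [np.r_[y[t - lags:t], x[t - lags:t + 1]] for t in rows]
    return np.array(X), y[lags:]

X, y = design(mort, temp, lags)
split = 2500 - lags
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((pred - y[split:]) / y[split:])) * 100
print(f"hold-out MAPE: {mape:.1f}%")
```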
ERIC Educational Resources Information Center
Wholeben, Brent E.
This volume is an exposition of a mathematical modeling technique for use in the evaluation and solution of complex educational problems at all levels. It explores in detail the application of simple algebraic techniques to such issues as program reduction, fiscal rollbacks, and computer curriculum planning. Part I ("Introduction to the…
A Bayesian Model of the Memory Colour Effect.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2018-01-01
According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
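The core computation in such a model is standard Gaussian cue combination: the percept is a precision-weighted average of the sensory estimate and the typical-colour prior. A few lines of Python make the point; the one-dimensional colour axis and all numbers are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Gaussian prior (typical object colour) combined with a Gaussian sensory
# estimate; posterior mean = precision-weighted average of the two.
mu_prior, sd_prior = 15.0, 8.0   # typical colour (e.g. yellowness of a banana)
mu_sens, sd_sens = 0.0, 5.0      # colourimetrically grey sensory input

w = sd_sens**2 / (sd_sens**2 + sd_prior**2)   # weight on the prior
mu_post = w * mu_prior + (1 - w) * mu_sens
print(f"perceived colour shifted from grey toward typical: {mu_post:.2f}")
# An achromatic adjustment would have to be offset by this amount in the
# opposite direction before the object appears grey, which is the measured
# memory colour effect.
```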
Effects of video modeling on social initiations by children with autism.
Nikopoulos, Christos K; Keenan, Michael
2004-01-01
We examined the effects of a video modeling intervention on social initiation and play behaviors with 3 children with autism using a multiple baseline across subjects design. Each child watched a videotape showing a typically developing peer, and the experimenter engaged in a simple social interactive play using one toy. For all children, social initiation and reciprocal play skills were enhanced, and these effects were maintained at 1- and 3-month follow-up periods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman-Smith, Christopher; Müller, Berndt
We argue that high-multiplicity events in proton–proton or proton–nucleus collisions originate from large-size fluctuations of the nucleon shape. We discuss a pair of simple models of such proton shape fluctuations. A “fat” proton with a size of 3 fm occurs with observable frequency. In light of this result, collective flow behavior in the ensuing nuclear interaction seems feasible. We discuss the influence of these models on the parton structure of the proton.
A simple model of the effect of ocean ventilation on ocean heat uptake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadiga, Balasubramanya T.; Urban, Nathan Mark
Presentation includes slides on: Earth System Models vs. Simple Climate Models; A Popular SCM: Energy Balance Model of Anomalies; On calibrating against one ESM experiment, the SCM correctly captures that ESM's surface warming response with other forcings; Multi-Model Analysis: Multiple ESMs, Single SCM; Posterior Distributions of ECS; However, In Excess of 90% of TOA Energy Imbalance is Sequestered in the World Oceans; Heat Storage in the Two Layer Model; Including TOA Radiative Imbalance and Ocean Heat in Calibration Improves Representation, but Significant Errors Persist; Improved Vertical Resolution Does Not Fix Problem; A Series of Experiments Confirms That Anomaly-Diffusing Models Cannot Properly Represent Ocean Heat Uptake; Physics of the Thermocline; Outcropping Isopycnals and Horizontally-Averaged Layers; Local interactions between outcropping isopycnals lead to non-local interactions between horizontally-averaged layers; Both Surface Warming and Ocean Heat are Well Represented With Just 4 Layers; A Series of Experiments Confirms That When Non-Local Interactions are Allowed, the SCMs Can Represent Both Surface Warming and Ocean Heat Uptake; and Summary and Conclusions.
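For reference, the two-layer energy-balance structure referred to in these slides couples a fast surface layer to a slow deep ocean. A minimal Python sketch with illustrative parameter values is below; calibrating such a model against surface warming alone leaves the partition of heat between the layers poorly constrained, which is the issue the slides explore.

```python
# Two-layer energy-balance model:
#   C  dT/dt  = F - lam*T - gam*(T - Td)   (surface / mixed layer)
#   Cd dTd/dt = gam*(T - Td)               (deep ocean)
# All parameter values are illustrative.
C, Cd = 8.0, 100.0        # heat capacities (W yr m^-2 K^-1)
lam, gam = 1.3, 0.7       # climate feedback and heat-exchange coefficients
F = 3.7                   # abrupt forcing, roughly 2xCO2 (W m^-2)

dt, years = 0.05, 300
T = Td = 0.0
for _ in range(int(years / dt)):
    dT = (F - lam * T - gam * (T - Td)) / C
    dTd = gam * (T - Td) / Cd
    T, Td = T + dt * dT, Td + dt * dTd

print(f"surface warming after {years} yr: {T:.2f} K "
      f"(equilibrium {F / lam:.2f} K); deep-ocean warming: {Td:.2f} K")
```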
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Sudipta; Nelson, Austin; Hoke, Anderson
2016-12-12
Traditional testing methods fall short in evaluating interactions between multiple smart inverters providing advanced grid support functions, because such interactions largely depend on their placements on the electric distribution system and the impedances between them. Even though utilities have raised significant concerns about the effects of such interactions, little effort has been made to evaluate them. In this paper, power hardware-in-the-loop (PHIL) testing was used to evaluate autonomous volt-var operations of multiple smart photovoltaic (PV) inverters connected to a simple distribution feeder model. The results provided in this paper show that, depending on volt-var control (VVC) parameters and grid parameters, interaction between inverters and between the inverter and the grid is possible in some extreme cases with very high VVC slopes, fast response times, and large VVC response delays.
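For readers unfamiliar with the function under test, a volt-var characteristic maps measured voltage to a reactive-power command through a deadband and a droop slope. A minimal Python sketch follows; the breakpoints, slope, and limits are illustrative, not the settings used in the tests. The steep slopes and large response delays flagged above matter because they make this feedback loop, closed through the feeder impedance, prone to oscillation.

```python
import numpy as np

def volt_var(v, deadband=0.01, slope=20.0, q_max=0.44):
    """Reactive power command (p.u. of rating) for a measured voltage v (p.u.)."""
    # Voltage excursion beyond the deadband around 1.0 p.u.
    dv = np.where(np.abs(v - 1.0) <= deadband,
                  0.0,
                  v - np.sign(v - 1.0) * deadband - 1.0)
    # Droop: absorb vars when voltage is high, inject when low, within limits.
    return np.clip(-slope * dv, -q_max, q_max)

for v in (0.95, 0.99, 1.00, 1.02, 1.05):
    q = float(volt_var(np.array(v)))
    print(f"V = {v:.2f} pu -> Q = {q:+.3f} pu")
```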
Current Fragmentation and Particle Acceleration in Solar Flares
NASA Astrophysics Data System (ADS)
Cargill, P. J.; Vlahos, L.; Baumann, G.; Drake, J. F.; Nordlund, Å.
2012-11-01
Particle acceleration in solar flares remains an outstanding problem in plasma physics and space science. While the observed particle energies and timescales can perhaps be understood in terms of acceleration at a simple current sheet or turbulence site, the vast number of accelerated particles, and the fraction of flare energy in them, defies any simple explanation. The nature of energy storage and dissipation in the global coronal magnetic field is essential for understanding flare acceleration. Scenarios where the coronal field is stressed by complex photospheric motions lead to the formation of multiple current sheets, rather than the single monolithic current sheet proposed by some. The current sheets in turn can fragment into multiple, smaller dissipation sites. MHD, kinetic, and cellular automata models are used to demonstrate this feature. Particle acceleration in this environment thus involves interaction with many distributed accelerators. A series of examples demonstrates how acceleration works in such an environment. As required, acceleration is fast, and relativistic energies are readily attained. It is also shown that accelerated particles do indeed interact with multiple acceleration sites. Test particle models also demonstrate that a large number of particles can be accelerated, with a significant fraction of the flare energy associated with them. However, in the absence of feedback, and with limited numerical resolution, these results need to be viewed with caution. Particle-in-cell models can incorporate feedback and in one scenario suggest that acceleration can be limited by the energetic particles reaching the condition for firehose marginal stability. Contemporary issues such as footpoint particle acceleration are also discussed. It is also noted that the idea of a "standard flare model" is ill-conceived when the entire distribution of flare energies is considered.
Kanematsu, Nobuyuki
2009-03-07
Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving a scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between RMS end-point displacement and range of incident particles in water, empirically found in measurements and detailed calculations. This fact was translated into a simple linear formula, from which the scattering power that is only inversely proportional to the residual range was derived. The simplicity enabled the analytical formulation for ions stopping in water, which was designed to be equivalent with the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
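The connection between the two statements in the abstract can be checked directly: if the scattering power is T(z) = k / (R0 - z), the variance of the end-point displacement is the path integral of (R0 - z)^2 T(z) from 0 to R0, which equals k R0^2 / 2, so the RMS displacement is linear in the range R0. A short Python check, where k is an illustrative constant rather than a value fitted in the paper:

```python
import numpy as np

# Scattering power inversely proportional to residual range implies
# sigma_y^2 = Int_0^R0 (R0 - z)^2 * k/(R0 - z) dz = k * R0^2 / 2.
k = 1e-3   # illustrative constant (rad^2, absorbs material dependence)

def rms_displacement(R0, n=100_000):
    dz = R0 / n
    z = (np.arange(n) + 0.5) * dz            # midpoints avoid the endpoint singularity
    T = k / (R0 - z)                         # scattering power
    return np.sqrt(np.sum((R0 - z) ** 2 * T) * dz)   # numerical path integral

for R0 in (5.0, 10.0, 20.0):                 # ranges in water, cm
    print(f"R0 = {R0:5.1f} cm -> RMS displacement = {rms_displacement(R0):.4f} cm "
          f"(analytic {np.sqrt(k / 2) * R0:.4f})")
```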
Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen
Grutzik, Scott Joseph; Reedy, Jr., E. D.
2017-08-04
Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and the differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]") and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
Nature of collective decision-making by simple yes/no decision units.
Hasegawa, Eisuke; Mizumoto, Nobuaki; Kobayashi, Kazuya; Dobata, Shigeto; Yoshimura, Jin; Watanabe, Saori; Murakami, Yuuka; Matsuura, Kenji
2017-10-31
The study of collective decision-making spans various fields such as brain and behavioural sciences, economics, management sciences, and artificial intelligence. Despite these interdisciplinary applications, little is known regarding how a group of simple 'yes/no' units, such as neurons in the brain, can select the best option among multiple options. One prerequisite for achieving such correct choices by the brain is correct evaluation of relative option quality, which enables a collective decision maker to efficiently choose the best option. Here, we applied a sensory discrimination mechanism using yes/no units with differential thresholds to a model for making a collective choice among multiple options. The performance corresponding to the correct choice was shown to be affected by various parameters. High performance can be achieved by tuning the threshold distribution to the options' quality distribution. The number of yes/no units allocated to each option and its variability profoundly affect performance. When this variability is large, a quorum decision becomes superior to a majority decision under some conditions. The general features of this collective decision-making by a group of simple yes/no units revealed in this study suggest that this mechanism may be useful in applications across various fields.
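A minimal simulation makes the mechanism concrete. Everything below is an assumption for illustration: thresholds uniform on the quality range, Gaussian evaluation noise, and a fixed number of units per option; the original model's parameterization may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def collective_choice(qualities, n_units=100, noise=0.3, quorum=None):
    """Each option is assessed by n_units yes/no units; a unit votes 'yes'
    when its noisy perception of the option's quality exceeds its threshold."""
    votes = np.array([
        np.sum(q + rng.normal(0.0, noise, n_units) > rng.uniform(0.0, 1.0, n_units))
        for q in qualities
    ])
    if quorum is not None:                 # quorum rule: first option to reach quorum
        passed = np.flatnonzero(votes >= quorum)
        return int(passed[0]) if passed.size else None
    return int(np.argmax(votes))           # majority rule: most 'yes' votes wins

qualities = [0.3, 0.5, 0.8]                # hypothetical option qualities
hits = sum(collective_choice(qualities) == 2 for _ in range(1000))
print(hits / 1000)                         # fraction of trials choosing the best option
```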
Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price increments distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to non-universal exponents. Recent progress in the empirical study of the volatility suggests that the volatility results from some sort of multiplicative cascade. A convincing 'microscopic' (i.e., trader-based) model that explains this observation is, however, not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.
Bayesian function-on-function regression for multilevel functional data.
Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S
2015-09-01
Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data-where the unit of observation is a curve or set of curves that are finely sampled over a grid-is frequently obtained. Moreover, researchers often sample multiple curves per person resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images. © 2015, The International Biometric Society.
Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons
Wilson, Hugh R.; Cowan, Jack D.
1972-01-01
Coupled nonlinear differential equations are derived for the dynamics of spatially localized populations containing both excitatory and inhibitory model neurons. Phase plane methods and numerical solutions are then used to investigate population responses to various types of stimuli. The results obtained show simple and multiple hysteresis phenomena and limit cycle activity. The latter is particularly interesting since the frequency of the limit cycle oscillation is found to be a monotonic function of stimulus intensity. Finally, it is proved that the existence of limit cycle dynamics in response to one class of stimuli implies the existence of multiple stable states and hysteresis in response to a different class of stimuli. The relation between these findings and a number of experiments is discussed. PMID:4332108
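The coupled equations referred to are the now-classic Wilson-Cowan model. The sketch below integrates a standard textbook form with forward Euler; the parameter values are ones commonly quoted for the oscillatory regime and are illustrative rather than taken from the paper's own tables.

```python
import numpy as np

def S(x, a, theta):
    """Sigmoid response function, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def simulate(P=1.25, Q=0.0, dt=0.01, T=100.0):
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0     # coupling strengths
    ae, te, ai, ti = 1.3, 4.0, 2.0, 3.7        # sigmoid slopes and thresholds
    E, I, traj = 0.1, 0.05, []
    for _ in range(int(T / dt)):
        dE = -E + (1.0 - E) * S(c1 * E - c2 * I + P, ae, te)
        dI = -I + (1.0 - I) * S(c3 * E - c4 * I + Q, ai, ti)
        E, I = E + dt * dE, I + dt * dI
        traj.append((E, I))
    return np.array(traj)

tail = simulate()[-2000:, 0]
print(tail.min(), tail.max())   # a persistent min/max gap indicates limit cycle activity
```

Sweeping the external input P up from zero and watching the oscillation appear reproduces, in miniature, the abstract's point that limit cycle frequency and existence depend monotonically on stimulus intensity.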
NASA Astrophysics Data System (ADS)
Wang, Xiu-lin; Wei, Zheng; Wang, Rui; Huang, Wen-cai
2018-05-01
A self-mixing interferometer (SMI) with a resolution twenty times higher than that of a conventional interferometer is developed using multiple reflections. The multiple-pass optical configuration can be constructed simply by employing an external reflecting mirror. The advantages of this configuration are its simplicity and the ease with which the light is re-injected back into the laser cavity. Theoretical analysis shows that the measurement resolution is scalable by adjusting the number of reflections. Experiments show that the proposed method achieves an optical resolution of approximately λ/40. The influence of the displacement sensitivity gain (G) is further analyzed and discussed in practical experiments.
Moderation analysis with missing data in the predictors.
Zhang, Qian; Wang, Lijuan
2017-12-01
The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect in the presence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) normal-distribution-based maximum likelihood estimation (NML); (b) normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under the MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, BE methods that are more robust to mis-specification of the focal predictor's distribution are needed. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
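To make the setup concrete, the sketch below simulates the simple MMR model described (Y regressed on X, U, and their product) with MAR missingness in X that depends on the moderator U, then fits the model to the complete cases. It is a data-generating illustration only; the NML, NMI, and BE estimators compared in the study are not implemented here, and all coefficients are arbitrary choices.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
U = rng.normal(size=n)                          # moderator
X = 0.5 * U + rng.normal(size=n)                # focal predictor
Y = 1.0 + 0.3 * X + 0.2 * U + 0.4 * X * U + rng.normal(size=n)

# MAR: the probability that X is missing depends on the observed moderator U
missing = rng.random(n) < 1.0 / (1.0 + np.exp(-U))
keep = ~missing

design = sm.add_constant(np.column_stack([X[keep], U[keep], X[keep] * U[keep]]))
fit = sm.OLS(Y[keep], design).fit()
print(fit.params)    # compare the interaction estimate with the true value 0.4
```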
On the interrelation of multiplication and division in secondary school children.
Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph
2013-01-01
Each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill levels due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division, with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school types, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division, which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to differences between school types.
NASA Technical Reports Server (NTRS)
Mckenzie, R. L.
1975-01-01
A semiclassical model of the inelastic collision between a vibrationally excited anharmonic oscillator and a structureless atom was used to predict the variation of thermally averaged vibration-translation rate coefficients with temperature and initial-state quantum number. Multiple oscillator states were included in a numerical solution for collinear encounters. The results are compared with CO-He experimental values for both ground and excited initial states using several simplified forms of the interaction potential. The numerical model was also used as a basis for evaluating several less complete but analytic models. Two computationally simple analytic approximations were found that successfully reproduced the numerical rate coefficients for a wide range of molecular properties and collision partners. Their limitations were also identified. The relative rates of multiple-quantum transitions from excited states were evaluated for several molecular types.
Nuclear Stability and Nucleon-Nucleon Interactions in Introductory and General Chemistry Textbooks
ERIC Educational Resources Information Center
Millevolte, Anthony
2010-01-01
The nucleus is a highly dense and highly charged substructure of atoms. In the nuclei of all atoms beyond hydrogen, multiple protons are in close proximity to each other in spite of strong electrostatic repulsions between them. The attractive internucleon strong force is described and its origin explained by using a simple quark model for the…
Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique
NASA Astrophysics Data System (ADS)
Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.
2017-12-01
Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
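The "multiple features plus simple thresholding" recipe can be sketched generically. The features below (a band-power ratio and an amplitude-histogram entropy) and the threshold values are stand-ins for the paper's feature set, which includes an adaptive correlation integral and phase coherence.

```python
import numpy as np
from scipy.signal import welch

def window_features(sig, fs):
    f, pxx = welch(sig, fs=fs, nperseg=min(len(sig), 256))
    band = lambda lo, hi: pxx[(f >= lo) & (f < hi)].sum()
    power_ratio = band(20, 40) / (band(1, 40) + 1e-12)   # high-band share of power
    hist, _ = np.histogram(sig, bins=32, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    entropy = -(p * np.log(p)).sum()                     # amplitude entropy
    return power_ratio, entropy

def detect(eeg, fs, win_s=2.0, thresholds=(0.3, 2.5)):
    """Flag a window as a seizure when every feature crosses its threshold."""
    w = int(win_s * fs)
    return np.array([
        all(x > t for x, t in zip(window_features(eeg[s:s + w], fs), thresholds))
        for s in range(0, len(eeg) - w, w)
    ])
```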
On the interrelation of multiplication and division in secondary school children
Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph
2013-01-01
Multiplication and division are conceptually inversely related: Each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill levels due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division, with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school types, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division, which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to differences between school types. PMID:24133476
Tsunami Simulators in Physical Modelling - Concept to Practical Solutions
NASA Astrophysics Data System (ADS)
Chandler, Ian; Allsop, William; Robinson, David; Rossetto, Tiziana; McGovern, David; Todd, David
2017-04-01
Whilst many researchers have conducted simple 'tsunami impact' studies, few engineering tools are available to assess the onshore impacts of tsunami, with no agreed methods available to predict loadings on coastal defences, buildings or related infrastructure. Most previous impact studies have relied upon unrealistic waveforms (solitary or dam-break waves and bores) rather than full-duration tsunami waves, or have used simplified models of nearshore and over-land flows. Over the last 10+ years, pneumatic Tsunami Simulators for the hydraulic laboratory have been developed into an exciting and versatile technology, allowing the forces of real-world tsunami to be reproduced and measured in a laboratory environment for the first time. These devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example coastal defences and infrastructure. They have also reproduced full-duration tsunamis including Mercator 2004 and Tohoku 2011, both at 1:50 scale. Engineering scale models of these tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, pressures / forces on buildings, and scour at idealised buildings. This presentation will describe how these Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facilities within which they operate, and will present research results from three generations of Tsunami Simulators. Highlights of direct importance to natural hazard modellers and coastal engineers include measurements of wave run-up levels, forces on single and multiple buildings and comparison with previous theoretical predictions. Multiple buildings have two malign effects. The density of buildings to flow area (blockage ratio) increases water depths and flow velocities in the 'streets'. But the increased building densities themselves also increase the cost of flow per unit area (both personal and monetary). The most recent study with the Tsunami Simulators therefore focussed on the influence of multiple buildings (up to 4 rows) which showed (for instance) that the greatest forces can act on the landward (not seaward) rows of buildings. Studies in the 70m long, 4m wide main channel of the Fast Flow Facility on tsunami defence structures have also measured forces on buildings in the lee of a failed defence wall and tsunami induced scour. Supporting presentations at this conference: McGovern et al on tsunami induced scour at coastal structures and Foster et al on building loads.
Chen, Weisheng; Sun, Cheng; Wei, Ru; Zhang, Yanlin; Ye, Heng; Chi, Ruibin; Zhang, Yichen; Hu, Bei; Lv, Bo; Chen, Lifang; Zhang, Xiunong; Lan, Huilan; Chen, Chunbo
2016-08-31
Despite the use of prokinetic agents, the overall success rate for postpyloric placement via a self-propelled spiral nasoenteric tube is quite low. This retrospective study was conducted in the intensive care units of 11 university hospitals from 2006 to 2016 among adult patients who underwent self-propelled spiral nasoenteric tube insertion. Success was defined as postpyloric nasoenteric tube placement confirmed by abdominal x-ray scan 24 hours after tube insertion. Chi-square automatic interaction detection (CHAID), simple classification and regression trees (SimpleCart), and J48 methodologies were used to develop decision tree models, and multiple logistic regression (LR) methodology was used to develop an LR model for predicting successful postpyloric nasoenteric tube placement. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of these models. Successful postpyloric nasoenteric tube placement was confirmed in 427 of 939 patients enrolled. For predicting successful postpyloric nasoenteric tube placement, the performance of the 3 decision trees was similar in terms of the AUCs: 0.715 for the CHAID model, 0.682 for the SimpleCart model, and 0.671 for the J48 model. The AUC of the LR model was 0.729, which outperformed the J48 model. Both the CHAID and LR models achieved an acceptable discrimination for predicting successful postpyloric nasoenteric tube placement and were useful for intensivists in the setting of self-propelled spiral nasoenteric tube insertion. © 2016 American Society for Parenteral and Enteral Nutrition.
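The modelling comparison can be reproduced schematically with scikit-learn. CHAID and SimpleCart are not available there, so a CART-style tree stands in for the decision-tree family; the data, predictors, and effect sizes below are synthetic placeholders rather than the study's variables.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(939, 5))                     # 939 patients, 5 candidate predictors
beta = np.array([0.8, -0.5, 0.3, 0.0, 0.0])
y = (X @ beta + rng.normal(size=939)) > 0         # successful placement (synthetic)

models = {"decision tree": DecisionTreeClassifier(max_depth=3),
          "logistic regression": LogisticRegression()}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")             # cross-validated discrimination
```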
NASA Astrophysics Data System (ADS)
Ghosh, Dipak; Sarkar, Sharmila; Sen, Sanjib; Roy, Jaya
1995-06-01
In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency," has been interpreted through simple quantum statistical properties of the emitting system using the concept of a "modified two-source model" as recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data of 16Ag/Br and 24Ag/Br interactions in the few-tens-of-GeV energy regime.
A Comparison between Multiple Regression Models and CUN-BAE Equation to Predict Body Fat in Adults
Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A.; Aguiló, Antoni
2015-01-01
Background: Because the accurate measure of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Methods: Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF will be compared. These models will also be compared with the CUN-BAE equation. For all the analyses, a sample including all the participants and another one including only the overweight and obese subjects will be considered. The BF reference measure was made using Bioelectrical Impedance Analysis. Results: The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than the BAI. For both the whole group of participants and the group of overweight and obese participants, using simple models (BMI, age and sex as variables) allowed obtaining similar correlations with BF as when the more complex CUN-BAE was used (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair being the one for CUN-BAE). Conclusions: There are simpler models than the CUN-BAE equation that fit BF as well as CUN-BAE does. Therefore, it could be considered that CUN-BAE overfits. Using a simple linear regression model, the BAI, as the only variable, predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF. PMID:25821960
A comparison between multiple regression models and CUN-BAE equation to predict body fat in adults.
Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A; Aguiló, Antoni
2015-01-01
Because the accurate measure of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF will be compared. These models will also be compared with the CUN-BAE equation. For all the analyses, a sample including all the participants and another one including only the overweight and obese subjects will be considered. The BF reference measure was made using Bioelectrical Impedance Analysis. The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than the BAI. For both the whole group of participants and the group of overweight and obese participants, using simple models (BMI, age and sex as variables) allowed obtaining similar correlations with BF as when the more complex CUN-BAE was used (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair being the one for CUN-BAE). There are simpler models than the CUN-BAE equation that fit BF as well as CUN-BAE does. Therefore, it could be considered that CUN-BAE overfits. Using a simple linear regression model, the BAI, as the only variable, predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF.
Motion and force control of multiple robotic manipulators
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz-Delgado, Kenneth
1992-01-01
This paper addresses the motion and force control problem of multiple robot arms manipulating a cooperatively held object. A general control paradigm is introduced which decouples the motion and force control problems. For motion control, different control strategies are constructed based on the variables used as the control input in the controller design. There are three natural choices: acceleration of a generalized coordinate, arm tip force vectors, and the joint torques. The first two choices require full model information but produce simple models for the control design problem. The last choice results in a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open loop system. The motion control only determines the joint torque to within a manifold, due to the multiple-arm kinematic constraint. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, an optimization can be performed to best allocate the desired end-effector control force to the joint actuators. The other possibility is to control the internal force about some set point. It is shown that effective force regulation can be achieved even if little model information is available.
Two methods for parameter estimation using multiple-trait models and beef cattle field data.
Bertrand, J K; Kriese, L A
1990-08-01
Two methods are presented for estimating variances and covariances from beef cattle field data using multiple-trait sire models. Both methods require that the first trait have no missing records and that the contemporary groups for the second trait be subsets of the contemporary groups for the first trait; however, the second trait may have missing records. One method uses pseudo expectations involving quadratics composed of the solutions and the right-hand sides of the mixed model equations. The other method is an extension of Henderson's Simple Method to the multiple trait case. Neither of these methods requires any inversions of large matrices in the computation of the parameters; therefore, both methods can handle very large sets of data. Four simulated data sets were generated to evaluate the methods. In general, both methods estimated genetic correlations and heritabilities that were close to the Restricted Maximum Likelihood estimates and the true data set values, even when selection within contemporary groups was practiced. The estimates of residual correlations by both methods, however, were biased by selection. These two methods can be useful in estimating variances and covariances from multiple-trait models in large populations that have undergone a minimal amount of selection within contemporary groups.
MEG evidence that the central auditory system simultaneously encodes multiple temporal cues.
Simpson, Michael I G; Barnes, Gareth R; Johnson, Sam R; Hillebrand, Arjan; Singh, Krish D; Green, Gary G R
2009-09-01
Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of simple sinusoidal amplitude modulations. In this study we used magnetoencephalography (MEG) to generate source space current estimates of the steady-state responses to simple one-component amplitude modulations and to a two-component amplitude modulation. A two-component modulation introduces the simplest form of modulation complexity into the waveform; the summation of the two-modulation rates introduces a beat-like modulation at the difference frequency between the two modulation rates. We compared the cortical representations of responses to the one-component and two-component modulations. In particular, we show that the temporal complexity in the two-component amplitude modulation stimuli was preserved at the cortical level. The method of stimulus normalization that we used also allows us to interpret these results as evidence that the important feature in sound modulations is the relative depth of one modulation rate with respect to another, rather than the absolute carrier-to-sideband modulation depth. More generally, this may be interpreted as evidence that modulation detection accurately preserves a representation of the modulation envelope. This is an important observation with respect to models of modulation processing, as it suggests that models may need a dynamic processing step to effectively model non-stationary stimuli. We suggest that the classic modulation filterbank model needs to be modified to take these findings into account.
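The stimulus construction is easy to make concrete. The sketch below builds a one-component AM tone and a two-component AM tone whose summed envelope beats at the difference frequency; the carrier, rates, and depths are illustrative choices, not the study's stimulus parameters.

```python
import numpy as np

fs, dur, fc = 44100, 1.0, 1000.0                 # sample rate, duration, carrier (Hz)
t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * fc * t)

# one-component AM at 4 Hz
one = (1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * t)) * carrier

# two-component AM at 4 and 10 Hz: the summed envelope beats at 10 - 4 = 6 Hz
env = 1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * t) + 0.5 * np.sin(2 * np.pi * 10.0 * t)
two = env * carrier
```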
Equivalent circuit models for interpreting impedance perturbation spectroscopy data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell
2004-07-01
As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
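A concrete example of such an equivalent circuit is the Butterworth-Van Dyke form often used for piezoelectric elements: a motional R-L-C branch in parallel with a static capacitance. This particular circuit and its component values are assumptions for illustration, not the paper's model; in practice the components would be fitted to measured impedance spectra by nonlinear regression, and their drift tracked over time.

```python
import numpy as np

def bvd_impedance(f, R=50.0, L=10e-3, C=1e-9, C0=5e-9):
    """Butterworth-Van Dyke impedance: a series R-L-C (motional branch)
    in parallel with the static capacitance C0."""
    w = 2.0 * np.pi * f
    z_motional = R + 1j * w * L + 1.0 / (1j * w * C)
    z_static = 1.0 / (1j * w * C0)
    return z_motional * z_static / (z_motional + z_static)

f = np.linspace(10e3, 100e3, 2000)     # sweep through the ~50 kHz series resonance
z = bvd_impedance(f)
print(f[np.argmin(np.abs(z))])         # estimated series-resonance frequency
```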
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second reason is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not reproduced enough at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the necessity for improving the pseudo point-source model, by introducing azimuth-dependent corner frequency for example, so that it can incorporate the effect of rupture directivity.
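The omega-square source spectrum that the method combines with path and site terms has a simple closed form. In the sketch below, the seismic moment and corner frequency are placeholders, not values fitted to the Kumamoto subevents.

```python
import numpy as np

def omega_square(f, m0=1.0e18, fc=0.5):
    """Moment-rate amplitude spectrum: flat below the corner frequency fc,
    falling off as f^-2 above it."""
    return m0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-2, 1, 200)     # 0.01-10 Hz
spectrum = omega_square(f)
```

An azimuth-dependent corner frequency, as suggested in the closing sentence, would amount to making fc a function of the station-to-rupture geometry rather than a constant per subevent.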
NASA Astrophysics Data System (ADS)
Ong, J. S. L.; Charin, C.; Leong, J. H.
2017-12-01
Avalanche photodiodes (APDs) with steep electric field gradients generally have low excess noise that arises from carrier multiplication within the internal gain of the devices, and the Monte Carlo (MC) method is among popular device simulation tools for such devices. However, there are few articles relating to carrier trajectory modeling in MC models for such devices. In this work, a set of electric-field-gradient-dependent carrier trajectory tracking equations are developed and used to update the positions of carriers along the path during Simple-band Monte Carlo (SMC) simulations of APDs with non-uniform electric fields. The mean gain and excess noise results obtained from the SMC model employing these equations show good agreement with the results reported for a series of silicon diodes, including a p+n diode with steep electric field gradients. These results confirm the validity and demonstrate the feasibility of the trajectory tracking equations applied in SMC models for simulating mean gain and excess noise in APDs with non-uniform electric fields. Also, the simulation results of mean gain, excess noise, and carrier ionization positions obtained from the SMC model of this work agree well with those of the conventional SMC model employing the concept of a uniform electric field within a carrier free-flight. These results demonstrate that the electric field variation within a carrier free-flight has an insignificant effect on the predicted mean gain and excess noise results. Therefore, both the SMC model of this work and the conventional SMC model can be used to predict the mean gain and excess noise in APDs with highly non-uniform electric fields.
Studies of aerothermal loads generated in regions of shock/shock interaction in hypersonic flow
NASA Technical Reports Server (NTRS)
Holden, Michael S.; Moselle, John R.; Lee, Jinho
1991-01-01
Experimental studies were conducted to examine the aerothermal characteristics of shock/shock/boundary-layer interaction regions generated by single and multiple incident shocks. The studies were conducted over a Mach number range from 6 to 19 for a range of Reynolds numbers to obtain both laminar and turbulent interaction regions. Detailed heat transfer and pressure measurements were made for a range of interaction types and incident shock strengths over a transverse cylinder, with emphasis on the type III and type IV interaction regions. The measurements were compared with the simple Edney, Keyes, and Hains models for a range of interaction configurations and freestream conditions. The complex flowfields and aerothermal loads generated by multiple-shock impingement, while not generating as large peak loads, provide important test cases for code prediction. The detailed heat transfer and pressure measurements provided a good basis for evaluating the accuracy of simple prediction methods and detailed numerical solutions for laminar and transitional regions of shock/shock interactions.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
NASA Astrophysics Data System (ADS)
Yuanyuan, Zhang
The stochastic branching model of multi-particle production in high-energy collisions has a theoretical basis in perturbative QCD and successfully describes experimental data over a wide energy range. However, over the years, little attention has been paid to branching models for supersymmetric (SUSY) particles. In this thesis, a stochastic branching model is built to describe the evolution of pure supersymmetric-particle jets. The model is a modified two-phase stochastic branching process, or more precisely a two-phase Simple Birth Process plus Poisson Process. The general case in which the jets contain both ordinary-particle jets and supersymmetric-particle jets has also been investigated. We derive the multiplicity distribution for the general case, whose expression contains a hypergeometric function. We apply this new multiplicity distribution to the current experimental data for pp collisions at center-of-mass energies √s = 0.9, 2.36, and 7 TeV. The fits show that supersymmetric particles have not participated in branching at current collision energies.
HIA, the next step: Defining models and roles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putters, Kim
If HIA is to be an effective instrument for optimising health interests in the policy-making process, it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.
Whole tree xylem sap flow responses to multiple environmental variables in a wet tropical forest
J.J. O' Brien; S.F. Oberbauer; D.B. Clark
2004-01-01
In order to quantify and characterize the variance in rain-forest tree physiology, whole tree sap flow responses to local environmental conditions were investigated in 10 species of trees with diverse traits at La Selva Biological Station, Costa Rica. A simple model was developed to predict tree sap flow responses to a synthetic environmental variable generated by a...
J.M. Warren; F.C. Meinzer; J.R. Brooks; J.-C. Domec; R. Coulombe
2006-01-01
We incorporated soil/plant biophysical properties into a simple model to predict seasonal trajectories of hydraulic redistribution (HR). We measured soil water content, water potential root conductivity, and climate across multiple years in two old-growth coniferous forests. The HR variability within sites (0 to 0.5 mm/d) was linked to spatial patterns of roots, soil...
New simple radiological criteria proposed for multiple primary lung cancers.
Matsunaga, Takeshi; Suzuki, Kenji; Takamochi, Kazuya; Oh, Shiaki
2017-11-01
Controversies remain as to the differential diagnosis between multiple primary lung cancer (MPLC) and intrapulmonary metastasis (IM) in lung cancers. We have investigated the clinical criteria for MPLC and here propose a set of new and simple criteria from the standpoint of prognosis. A retrospective study was conducted on 588 consecutive patients with resected lung cancer of clinical Stage IA between 2009 and 2012. Multiple lung cancers (MLCs) were observed in 103 (17.5%) of the 588 patients. All main and other tumors were divided into solid tumor (ST) and non-solid tumor (non-ST). We defined Group A as MLCs having at least one non-ST and Group B as MLCs in which all tumors were ST. Cox's proportional hazards model was used for the multivariate analyses to investigate the preoperative prognostic factors. We divided the MLCs into MPLC and IM based on the preoperative prognostic factors, and survival was estimated by the Kaplan-Meier method. A multivariate analysis with Cox's proportional hazards model revealed that Group A independently predicted good overall survival (HR = 0.165, 95% CI: 0.041-0.672). Differences in the 3- and 5-year overall survivals between Groups A and B were statistically significant (96.3%/92.2% vs. 70.0%/60.0%, P = 0.0002). We suggest that Group A, defined as the presence of at least one tumor with a ground glass opacity component and clinical N0, should be excluded from the conventional concept of multiple lung cancers based on the criteria of Martini and Melamed, as it has a very good prognosis. This group would be considered to be radiological MPLC. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Hamers, Adrian S.
2018-05-01
We extend the formalism of a previous paper to include the effects of flybys and instantaneous perturbations such as supernovae on the long-term secular evolution of hierarchical multiple systems with an arbitrary number of bodies and hierarchy, provided that the system is composed of nested binary orbits. To model secular encounters, we expand the Hamiltonian in terms of the ratio of the separation of the perturber with respect to the barycentre of the multiple system, to the separation of the widest orbit. Subsequently, we integrate over the perturber orbit numerically or analytically. We verify our method for secular encounters and illustrate it with an example. Furthermore, we describe a method to compute instantaneous orbital changes to multiple systems, such as asymmetric supernovae and impulsive encounters. The secular code, with implementation of the extensions described in this paper, is publicly available within AMUSE, and we provide a number of simple example scripts to illustrate its usage for secular and impulsive encounters and asymmetric supernovae. The extensions presented in this paper are a next step towards efficiently modelling the evolution of complex multiple systems embedded in star clusters.
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
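The model's core prediction follows from a one-line threshold calculation. The sketch below assumes Normally distributed colony performance whose standard deviation shrinks as 1/sqrt(k) with k matings (one simple way genetic diversity could reduce variance); the threshold and scale are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def p_success(mu, sigma, k, threshold=0.0):
    """Probability a colony exceeds the task threshold, with variance
    in performance shrinking as the number of matings k grows."""
    return 1.0 - norm.cdf(threshold, loc=mu, scale=sigma / np.sqrt(k))

for mu in (+0.5, -0.5):     # average colony succeeds vs. fails at the task
    print(mu, [round(p_success(mu, 1.0, k), 3) for k in (1, 2, 10)])
# mu above threshold: success probability rises with k -> multiple mating favored
# mu below threshold: success probability falls with k -> single mating favored
```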
Performance analysis of cross-seeding WDM-PON system using transfer matrix method
NASA Astrophysics Data System (ADS)
Simatupang, Joni Welman; Pukhrambam, Puspa Devi; Huang, Yen-Ru
2016-12-01
In this paper, a model based on the transfer matrix method is adopted to analyze the effects of Rayleigh backscattering and Fresnel multiple reflections on a cross-seeding WDM-PON system. As an analytical approximation method, this time-independent model is quite simple but very efficient when applied to various WDM-PON transmission systems, including the cross-seeding scheme. The cross-seeding scheme is most beneficial for systems with low loop-back ONU gain or low reflection loss at the drop fiber for upstream data in bidirectional transmission. However, for downstream data transmission, the power from multiple reflections can destroy the usefulness of the cross-seeding scheme when the reflectivity is high enough and the RN is positioned near the OLT or close to the ONU.
Martins, Raquel R; McCracken, Andrew W; Simons, Mirre J P; Henriques, Catarina M; Rera, Michael
2018-02-05
The Smurf Assay (SA) was initially developed in the model organism Drosophila melanogaster, where a dramatic increase of intestinal permeability has been shown to occur during aging (Rera et al., 2011). We have since validated the protocol in multiple other model organisms (Dambroise et al., 2016) and have utilized the assay to further our understanding of aging (Tricoire and Rera, 2015; Rera et al., 2018). The SA has now also been used by other labs to assess intestinal barrier permeability (Clark et al., 2015; Katzenberger et al., 2015; Barekat et al., 2016; Chakrabarti et al., 2016; Gelino et al., 2016). The SA in itself is simple; however, numerous small details can have a considerable impact on its experimental validity and subsequent interpretation. Here, we provide a detailed update on the SA technique and explain how to catch a Smurf while avoiding the most common experimental fallacies.
Series solution for two-frequency Bragg interaction using the Korpel-Poon multiple-scattering model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, R.K.; Somekh, M.G.
1993-03-01
The two-frequency acousto-optic interaction is analytically solved in the Bragg regime by use of a multiple-scattering model that was previously described by Korpel and Poon [J. Opt. Soc. Am. 70, 817-820 (1980)]. The method uses Feynman diagrams to conceptualize the problem and demonstrates the applicability of such a method to model a relatively complex system. The solution presented is compared with that derived by Hecht [IEEE Trans. Sonics Ultrason. SU-24, 7-18 (1977)], who used a coupled-mode approach. The derivation of the authors' solution is relatively simple and leads to a formulation that appears to be more compact. Numerical evaluations have demonstrated their equivalence. The authors present results that illustrate the dependence of the diffracted beam intensities on the amplitudes of the two acoustic waves.
Learning to represent spatial transformations with factored higher-order Boltzmann machines.
Memisevic, Roland; Hinton, Geoffrey E
2010-06-01
To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
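The factored three-way interaction has a compact vectorized form. The sketch below assumes random weights and logistic hidden units purely to show the computation: the input to hidden unit k is a sum over factors of the product of the two matched-filter responses.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nh, nf = 64, 64, 32, 16          # input, output, hidden, factor sizes
Wx = rng.normal(0, 0.1, (nx, nf))        # filters on the first image
Wy = rng.normal(0, 0.1, (ny, nf))        # filters on the second image
Wh = rng.normal(0, 0.1, (nh, nf))        # hidden-unit loadings on the factors

def hidden_probs(x, y):
    """Hidden activations via the low-rank factorization: the input to
    hidden unit k is sum_f Wh[k, f] * (x . Wx[:, f]) * (y . Wy[:, f])."""
    factor = (x @ Wx) * (y @ Wy)          # per-factor filter responses, multiplied
    return 1.0 / (1.0 + np.exp(-(factor @ Wh.T)))

x, y = rng.normal(size=nx), rng.normal(size=ny)
print(hidden_probs(x, y).shape)           # (32,)
```

Compared with the full three-way tensor (nx * ny * nh parameters), the factored form needs only (nx + ny + nh) * nf, which is what makes learning on larger image patches tractable.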
Caccavale, Justin; Fiumara, David; Stapf, Michael; Sweitzer, Liedeke; Anderson, Hannah J; Gorky, Jonathan; Dhurjati, Prasad; Galileo, Deni S
2017-12-11
Glioblastoma multiforme (GBM) is a devastating brain cancer for which there is no known cure. Its malignancy is due to rapid cell division along with high motility and invasiveness of cells into the brain tissue. Simple 2-dimensional laboratory assays (e.g., a scratch assay) commonly are used to measure the effects of various experimental perturbations, such as treatment with chemical inhibitors. Several mathematical models have been developed to aid the understanding of the motile behavior and proliferation of GBM cells. However, many are mathematically complicated, look at multiple interdependent phenomena, and/or use modeling software not freely available to the research community. These attributes make the adoption of models and simulations of even simple 2-dimensional cell behavior an uncommon practice by cancer cell biologists. Herein, we developed an accurate, yet simple, rule-based modeling framework to describe the in vitro behavior of GBM cells that are stimulated by the L1CAM protein using freely available NetLogo software. In our model L1CAM is released by cells to act through two cell surface receptors and a point of signaling convergence to increase cell motility and proliferation. A simple graphical interface is provided so that changes can be made easily to several parameters controlling cell behavior, and behavior of the cells is viewed both pictorially and with dedicated graphs. We fully describe the hierarchical rule-based modeling framework, show simulation results under several settings, describe the accuracy compared to experimental data, and discuss the potential usefulness for predicting future experimental outcomes and for use as a teaching tool for cell biology students. It is concluded that this simple modeling framework and its simulations accurately reflect much of the GBM cell motility behavior observed experimentally in vitro in the laboratory. Our framework can be modified easily to suit the needs of investigators interested in other similar intrinsic or extrinsic stimuli that influence cancer or other cell behavior. This modeling framework of a commonly used experimental motility assay (scratch assay) should be useful to both researchers of cell motility and students in a cell biology teaching laboratory.
Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J
2011-11-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
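The lead/lag regression can be sketched as a scan over candidate shifts τ, keeping the shift that maximizes the variance explained. The synthetic series below stand in for simple spike rate and hand velocity; the true lead of 25 bins is an arbitrary choice for demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_lag = 2000, 25                     # samples; firing leads velocity by 25 bins
vel = np.cumsum(rng.normal(size=n))
vel -= vel.mean()
rate = np.roll(vel, -true_lag) + rng.normal(0, 0.5, n)   # noisy, leading copy

def r2_at_lag(lag):
    """R^2 of a linear fit of rate[t] against vel[t + lag]."""
    r = rate[:n - lag] if lag >= 0 else rate[-lag:]
    v = vel[lag:] if lag >= 0 else vel[:n + lag]
    m = min(len(r), len(v))
    r, v = r[:m], v[:m]
    resid = r - np.polyval(np.polyfit(v, r, 1), v)
    return 1.0 - resid.var() / r.var()

best = max(range(-50, 51), key=r2_at_lag)
print(best)   # recovers approximately +25, i.e., firing leads the kinematics
```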
Di Donato, Violante; Kontopantelis, Evangelos; Aletti, Giovanni; Casorelli, Assunta; Piacenti, Ilaria; Bogani, Giorgio; Lecce, Francesca; Benedetti Panici, Pierluigi
2017-06-01
Primary cytoreductive surgery (PDS) followed by platinum-based chemotherapy is the cornerstone of treatment and the absence of residual tumor after PDS is universally considered the most important prognostic factor. The aim of the present analysis was to evaluate trend and predictors of 30-day mortality in patients undergoing primary cytoreduction for ovarian cancer. Literature was searched for records reporting 30-day mortality after PDS. All cohorts were rated for quality. Simple and multiple Poisson regression models were used to quantify the association between 30-day mortality and the following: overall or severe complications, proportion of patients with stage IV disease, median age, year of publication, and weighted surgical complexity index. Using the multiple regression model, we calculated the risk of perioperative mortality at different levels for statistically significant covariates of interest. Simple regression identified median age and proportion of patients with stage IV disease as statistically significant predictors of 30-day mortality. When included in the multiple Poisson regression model, both remained statistically significant, with an incidence rate ratio of 1.087 for median age and 1.017 for stage IV disease. Disease stage was a strong predictor, with the risk estimated to increase from 2.8% (95% confidence interval 2.02-3.66) for stage III to 16.1% (95% confidence interval 6.18-25.93) for stage IV, for a cohort with a median age of 65 years. Metaregression demonstrated that increased age and advanced clinical stage were independently associated with an increased risk of mortality, and the combined effects of both factors greatly increased the risk.
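The metaregression can be sketched as a Poisson model of study-level death counts with cohort size as exposure, where exponentiated coefficients are the incidence rate ratios quoted above. The cohort data below are made up for illustration; only the model form follows the analysis described.

```python
import numpy as np
import statsmodels.api as sm

age    = np.array([58, 62, 65, 68, 71])       # cohort median ages (years)
stage4 = np.array([5, 10, 20, 35, 50])        # % of patients with stage IV disease
n      = np.array([120, 300, 250, 180, 90])   # cohort sizes
deaths = np.array([2, 8, 10, 12, 11])         # 30-day deaths per cohort

X = sm.add_constant(np.column_stack([age, stage4]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson(), exposure=n).fit()
print(np.exp(fit.params))   # exp(coef) = incidence rate ratios for age and stage IV
```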
Causal Loop Analysis of coastal geomorphological systems
NASA Astrophysics Data System (ADS)
Payo, Andres; Hall, Jim W.; French, Jon; Sutherland, James; van Maanen, Barend; Nicholls, Robert J.; Reeve, Dominic E.
2016-03-01
As geomorphologists embrace ever more sophisticated theoretical frameworks that shift from simple notions of evolution towards single steady equilibria to recognise the possibility of multiple response pathways and outcomes, morphodynamic modellers are facing the problem of how to keep track of an ever-greater number of system feedbacks. Within coastal geomorphology, capturing these feedbacks is critically important, especially as the focus of activity shifts from reductionist models founded on sediment transport fundamentals to more synthesist ones intended to resolve emergent behaviours at decadal to centennial scales. This paper addresses the challenge of mapping the feedback structure of processes controlling geomorphic system behaviour with reference to illustrative applications of Causal Loop Analysis in two case studies: (1) the erosion-accretion behaviour of graded (mixed) sediment beds, and (2) the local alongshore sediment fluxes of sand-rich shorelines. These case studies are chosen on account of their central role in the quantitative modelling of geomorphological futures and because they illustrate different types of causation. Causal loop diagrams, a form of directed graph, are used to distil the feedback structure to reveal, in advance of more quantitative modelling, multi-response pathways and multiple outcomes. In the graded sediment bed case, up to three different outcomes (no response, and two disequilibrium states) can be derived from a simple qualitative stability analysis. For the sand-rich local shoreline behaviour case, two fundamentally different responses of the shoreline (diffusive and anti-diffusive), triggered by small changes of the shoreline cross-shore position, can be inferred purely through analysis of the causal pathways. Explicit depiction of feedback-structure diagrams is beneficial when developing numerical models to explore coastal morphological futures. By explicitly mapping the feedbacks included and neglected within a model, the modeller can readily assess whether critical feedback loops are included.
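A minimal sketch of the loop-polarity bookkeeping behind causal loop analysis, with a hypothetical three-variable coastal loop (the link signs below are invented for illustration, not taken from the paper's diagrams): a loop whose edge signs multiply to +1 is reinforcing, to -1 balancing.

```python
import numpy as np
import networkx as nx

G = nx.DiGraph()
G.add_edge("wave energy", "erosion", sign=+1)      # hypothetical link signs
G.add_edge("erosion", "beach width", sign=-1)
G.add_edge("beach width", "wave energy", sign=-1)  # wide beach dissipates waves

for cycle in nx.simple_cycles(G):
    edges = zip(cycle, cycle[1:] + cycle[:1])
    polarity = np.prod([G[u][v]["sign"] for u, v in edges])
    kind = "reinforcing" if polarity > 0 else "balancing"
    print(" -> ".join(cycle), ":", kind)
```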
Products of multiple Fourier series with application to the multiblade transformation
NASA Technical Reports Server (NTRS)
Kunz, D. L.
1981-01-01
A relatively simple and systematic method for forming the products of multiple Fourier series using tensor-like operations is demonstrated. This symbolic multiplication can be performed for any arbitrary number of series, and its use in transforming a set of linear differential equations with periodic coefficients from a rotating coordinate system to a nonrotating system is also demonstrated. It is shown that using Fourier operations to perform this transformation makes it easily understood, simple to apply, and generally applicable.
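The core operation can be illustrated in a few lines: writing each truncated series over complex exponentials, the coefficients of a product are the discrete convolution of the factors' coefficient arrays (a generic sketch, not the paper's tensor notation):

```python
import numpy as np

def product_coeffs(a, b):
    """Fourier coefficients of (sum_k a_k e^{ikx}) * (sum_k b_k e^{ikx})."""
    return np.convolve(a, b)        # result indices run -(Na+Nb)..(Na+Nb)

# cos(x) = (e^{-ix} + e^{ix})/2 -> coefficients [1/2, 0, 1/2] for k = -1, 0, 1
cosx = np.array([0.5, 0.0, 0.5])
c = product_coeffs(cosx, cosx)      # k = -2..2
print(c)                            # [0.25 0 0.5 0 0.25], i.e. cos^2 x = 1/2 + cos(2x)/2
```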
NASA Astrophysics Data System (ADS)
Baer, P.; Mastrandrea, M.
2006-12-01
Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question, what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly favor one range of probabilistic projections over another, the choice of results on which to base policy must necessarily involve ethical considerations, as they have inevitable consequences for the distribution of risk. In particular, the choice to use a more "optimistic" PDF for climate sensitivity (or other components of the causal chain) leads to the allowance of higher emissions consistent with any specified goal for risk reduction, and thus leads to higher climate impacts, in exchange for lower mitigation costs.
Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ
2012-01-01
In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
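The key closed form is easy to verify numerically: averaging exp(-bD) over a Gamma distribution of diffusivities gives a power-law decay that sits above a mono-exponential with the same mean diffusivity at high b, which is the slower apparent decay described above (parameter values below are illustrative, not fitted):

```python
import numpy as np

k, theta = 4.0, 0.25e-3               # Gamma shape and scale; mean D = 1.0e-3 mm^2/s
b = np.array([0, 1000, 2000, 3000])   # b-values, s/mm^2 (multiple shells)

gamma_decay = (1 + b * theta) ** (-k)   # E[exp(-b D)] with D ~ Gamma(k, theta)
mono_decay = np.exp(-b * k * theta)     # mono-exponential, same mean diffusivity
for bi, g, m in zip(b, gamma_decay, mono_decay):
    print(f"b={bi:5d}  gamma={g:.3f}  mono={m:.3f}")   # gamma decays more slowly
```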
Locomotion of C. elegans: A Piecewise-Harmonic Curvature Representation of Nematode Behavior
Padmanabhan, Venkat; Khan, Zeina S.; Solomon, Deepak E.; Armstrong, Andrew; Rumbaugh, Kendra P.; Vanapalli, Siva A.; Blawzdziewicz, Jerzy
2012-01-01
Caenorhabditis elegans, a free-living soil nematode, displays a rich variety of body shapes and trajectories during its undulatory locomotion in complex environments. Here we show that the individual body postures and entire trails of C. elegans have a simple analytical description in curvature representation. Our model is based on the assumption that the curvature wave is generated in the head segment of the worm body and propagates backwards. We have found that a simple harmonic function for the curvature can capture multiple worm shapes during the undulatory movement. The worm body trajectories can be well represented in terms of piecewise sinusoidal curvature with abrupt changes in amplitude, wavevector, and phase. PMID:22792224
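A minimal sketch of the curvature representation: integrate a harmonic curvature wave twice, first to a tangent angle and then to coordinates, to recover one body posture (amplitude, wavevector, and phase are illustrative, not fitted values from the paper):

```python
import numpy as np

L = 1.0                                # body length (normalized)
s = np.linspace(0, L, 200)             # arclength along the body
A, q, phi = 8.0, 2 * np.pi * 1.5, 0.0  # amplitude, wavevector, phase (assumed)
kappa = A * np.sin(q * s + phi)        # harmonic curvature wave

theta = np.concatenate([[0.0], np.cumsum(kappa[:-1] * np.diff(s))])  # tangent angle
x = np.concatenate([[0.0], np.cumsum(np.cos(theta[:-1]) * np.diff(s))])
y = np.concatenate([[0.0], np.cumsum(np.sin(theta[:-1]) * np.diff(s))])
# (x, y) traces one undulatory posture; changing (A, q, phi) piecewise along
# the trail mimics the paper's piecewise-harmonic trajectory description.
```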
Context Switching with Multiple Register Windows: A RISC Performance Study
NASA Technical Reports Server (NTRS)
Konsek, Marion B.; Reed, Daniel A.; Watcharawittayakul, Wittaya
1987-01-01
Although previous studies have shown that a large file of overlapping register windows can greatly reduce procedure call/return overhead, the effects of register windows in a multiprogramming environment are poorly understood. This paper investigates the performance of multiprogrammed, reduced instruction set computers (RISCs) as a function of window management strategy. Using an analytic model that reflects context switch and procedure call overheads, we analyze the performance of simple, linearly self-recursive programs. For more complex programs, we present the results of a simulation study. These studies show that a simple strategy that saves all windows prior to a context switch, but restores only a single window following a context switch, performs near optimally.
Våge, Selina; Thingstad, T Frede
2015-01-01
Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals where pattern generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could be underlying the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales.
Capillarity Guided Patterning of Microliquids.
Kang, Myeongwoo; Park, Woohyun; Na, Sangcheol; Paik, Sang-Min; Lee, Hyunjae; Park, Jae Woo; Kim, Ho-Young; Jeon, Noo Li
2015-06-01
Soft lithography and other techniques have been developed to investigate biological and chemical phenomena as an alternative to photolithography-based patterning methods that have compatibility problems. Here, a simple approach for nonlithographic patterning of liquids and gels inside microchannels is described. Using a design that incorporates strategically placed microstructures inside the channel, microliquids or gels can be spontaneously trapped and patterned when the channel is drained. The ability to form microscale patterns inside microfluidic channels using simple fluid drain motion offers many advantages. This method is geometrically analyzed based on hydrodynamics and verified with simulation and experiments. Various materials (i.e., water, hydrogels, and other liquids) are successfully patterned with complex shapes that are isolated from each other. Multiple cell types are patterned within the gels. Capillarity guided patterning (CGP) is fast, simple, and robust. It is not limited by pattern shape, size, cell type, and material. In a simple three-step process, a 3D cancer model that mimics cell-cell and cell-extracellular matrix interactions is engineered. The simplicity and robustness of the CGP will be attractive for developing novel in vitro models of organ-on-a-chip and other biological experimental platforms amenable to long-term observation of dynamic events using advanced imaging and analytical techniques. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Neural-Fuzzy model Based Steel Pipeline Multiple Cracks Classification
NASA Astrophysics Data System (ADS)
Elwalwal, Hatem Mostafa; Mahzan, Shahruddin Bin Hj.; Abdalla, Ahmed N.
2017-10-01
While pipes are cheaper than other means of transportation, this cost saving comes at a major price: pipes are subject to cracks, corrosion, and similar damage, which in turn can cause leakage and environmental harm. In this paper, a neural-fuzzy model for classifying multiple cracks based on guided Lamb waves is presented. Simulation results for 42 samples were collected using ANSYS software. The research combines numerical simulation and experimental study, aiming to find an effective way to detect and localize crack and hole defects in the main body of a pipeline and to determine their respective positions in the steel pipe. The technique used is a guided-Lamb-wave-based structural health monitoring method in which piezoelectric transducers serve as exciting and receiving sensors in a pitch-catch configuration. In addition, a simple learning mechanism was developed specifically for the neural network underlying the fuzzy system.
Surface roughness effects on the solar reflectance of cool asphalt shingles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akbari, Hashem; Berdahl, Paul
2008-02-17
We analyze the solar reflectance of asphalt roofing shingles that are covered with pigmented mineral roofing granules. The reflecting surface is rough, with a total area approximately twice the nominal area. We introduce a simple analytical model that relates the 'micro-reflectance' of a small surface region to the 'macro-reflectance' of the shingle. This model uses a mean field approximation to account for multiple scattering effects. The model is then used to compute the reflectance of shingles with a mixture of different colored granules, when the reflectances of the corresponding mono-color shingles are known. Simple linear averaging works well, with small corrections to linear averaging derived for highly reflective materials. Reflective base granules and reflective surface coatings aid achievement of high solar reflectance. Other factors that influence the solar reflectance are the size distribution of the granules, coverage of the asphalt substrate, and orientation of the granules as affected by rollers during fabrication.
Pulsed Rabi oscillations in quantum two-level systems: beyond the area theorem
NASA Astrophysics Data System (ADS)
Fischer, Kevin A.; Hanschke, Lukas; Kremser, Malte; Finley, Jonathan J.; Müller, Kai; Vučković, Jelena
2018-01-01
The area theorem states that when a short optical pulse drives a quantum two-level system, it undergoes Rabi oscillations in the probability of scattering a single photon. In this work, we investigate the breakdown of the area theorem both as the pulse length becomes non-negligible and for certain pulse areas. Using simple quantum trajectories, we provide an analytic approximation to the photon emission dynamics of a two-level system. Our model provides an intuitive way to understand re-excitation, which elucidates the mechanism behind the two-photon emission events that can spoil single-photon emission. We experimentally measure the emission statistics from a semiconductor quantum dot, acting as a two-level system, and show good agreement with our simple model for short pulses. Additionally, the model clearly explains our recent results (Fischer et al 2017 Nat. Phys.) showing dominant two-photon emission from a two-level system for pulses with interaction areas equal to an even multiple of π.
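A minimal numerical check of the area-theorem limit the paper starts from: integrating a resonant two-level system through a Gaussian pulse recovers P_excited = sin²(area/2). The decay during the pulse, which produces the re-excitation physics analyzed above, is deliberately omitted here:

```python
import numpy as np

def excited_pop(area, tau=1.0, dt=1e-3):
    """Excited-state population after a resonant Gaussian pulse of given area."""
    t = np.arange(-5 * tau, 5 * tau, dt)
    rabi = area / (tau * np.sqrt(2 * np.pi)) * np.exp(-t**2 / (2 * tau**2))
    c = np.array([1.0 + 0j, 0.0 + 0j])            # (ground, excited) amplitudes
    for w in rabi:
        h = -1j * 0.5 * w * dt                    # Euler step of i dc/dt = H c
        cg, ce = c
        c = np.array([cg + h * ce, ce + h * cg])
        c /= np.linalg.norm(c)                    # keep the step norm-preserving
    return abs(c[1]) ** 2

for a in (np.pi / 2, np.pi, 2 * np.pi):
    print(f"area={a:.2f}  P_e={excited_pop(a):.3f}  sin^2(a/2)={np.sin(a/2)**2:.3f}")
```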
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
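A sketch contrasting the two sampling philosophies on a smooth two-dimensional integral; a rank-1 lattice stands in here for Conroy's systematic point pattern (an assumption, since Conroy's original generator differs in detail):

```python
import math
import numpy as np

f = lambda x, y: np.exp(-(x**2 + y**2))            # smooth integrand on [0,1]^2
exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 2

n = 1009                                           # prime number of points
rng = np.random.default_rng(1)
xm, ym = rng.random(n), rng.random(n)              # Monte Carlo: random points

k = np.arange(n)
a = 397                                            # lattice generator (assumed)
xl, yl = k / n, (a * k % n) / n                    # systematic lattice points

print("exact        %.6f" % exact)
print("Monte Carlo  %.6f" % f(xm, ym).mean())
print("lattice      %.6f" % f(xl, yl).mean())
```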
Operator priming and generalization of practice in adults' simple arithmetic.
Chen, Yalin; Campbell, Jamie I D
2016-04-01
There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training (engineering and computer science students) would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again showed generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, D.; Sarkar, S.; Sen, S.
1995-06-01
In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency," has been interpreted by simple quantum statistical properties of the emitting system using the concept of the "modified two-source model" as recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data of ¹⁶O-Ag/Br and ²⁴Mg-Ag/Br interactions at a few tens of GeV energy regime.
A minimal model for multiple epidemics and immunity spreading.
Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan
2010-10-18
Pathogens and parasites are ubiquitous in the living world, being limited only by availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interactions are reduced to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The resulting immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
Theoretical design study of the MSFC wind-wheel turbine
NASA Technical Reports Server (NTRS)
Frost, W.; Kessel, P. A.
1982-01-01
A wind wheel turbine (WWT) is studied. Evaluation of the probable performance, possible practical applications, and economic viability as compared to other conventional wind energy systems is discussed. The WWT apparatus is essentially a bladed wheel which is directly exposed to the wind on the upper half and exposed to wind through multiple ducting on the lower half. The multiple ducts consist of a forward duct (front concentrator) and two side ducts (side concentrators). The forced rotation of the wheel is then converted to power through appropriate subsystems. Test results on two simple models, a paper model and a stainless steel model, are reported. Measured values of power coefficients over wind speeds ranging from 4 to 16 m/s are given. An analytical model of a four bladed wheel is also developed. Overall design features of the wind turbine are evaluated and discussed. Turbine sizing is specified for a 5 and 25 kW machine. Suggested improvements to the original design to increase performance and performance predictions for an improved WWT design are given.
Audible sonar images generated with proprioception for target analysis.
Kuc, Roman B
2017-05-01
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
Solar Thermal Propulsion for Microsatellite Manoeuvring
2004-09-01
of 14-cm and 56-cm diameter solar concentrating mirrors has clearly validated initial optical ray trace modelling and suggests that there is...concentrating mirror’s focus, permitting multiple mirror inputs to heat a single receiver and allowing the receiver to be placed anywhere on the host...The STE is conceptually simple, relying on a mirror or lens assembly to collect and concentrate incident solar radiation. This energy is focused, by
Angular intensity and polarization dependence of diffuse transmission through random media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliyahu, D.; Rosenbluh, M.; Freund, I.
1993-03-01
A simple theoretical model involving only a single sample parameter, the depolarization ratio ρ for linearly polarized normally incident and normally scattered light, is developed to describe the angular intensity and all other polarization-dependent properties of diffuse transmission through multiple-scattering media. Initial experimental results that tend to support the theory are presented. Results for diffuse reflection are also described. 63 refs., 15 figs.
Rohani, Nazanin; Parmeggiani, Andrea; Winklbauer, Rudolf; Fagotto, François
2014-01-01
Ephrins and Eph receptors are involved in the establishment of vertebrate tissue boundaries. The complexity of the system is puzzling; however, in many instances, tissues express multiple ephrins and Ephs on both sides of the boundary, a situation that should in principle cause repulsion between cells within each tissue. Although co-expression of ephrins and Eph receptors is widespread in embryonic tissues, neurons, and cancer cells, it is still unresolved how the respective signals are integrated into a coherent output. We present a simple explanation for the confinement of repulsion to the tissue interface: Using the dorsal ectoderm–mesoderm boundary of the Xenopus embryo as a model, we identify selective functional interactions between ephrin–Eph pairs that are expressed in partial complementary patterns. The combined repulsive signals add up to be strongest across the boundary, where they reach sufficient intensity to trigger cell detachments. The process can be largely explained using a simple model based exclusively on relative ephrin and Eph concentrations and binding affinities. We generalize these findings for the ventral ectoderm–mesoderm boundary and the notochord boundary, both of which appear to function on the same principles. These results provide a paradigm for how developmental systems may integrate multiple cues to generate discrete local outcomes. PMID:25247423
Web-based segmentation and display of three-dimensional radiologic image data.
Silverstein, J; Rubenstein, J; Millman, A; Panko, W
1998-01-01
In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.
Rasch analysis of SF-Qualiveen in multiple sclerosis.
Milinis, Kristijonas; Tennant, Alan; A Young, Carolyn
2017-04-01
A 30-item Qualiveen questionnaire was developed to measure the impact of urinary problems on everyday living in spinal cord injury, and subsequently an 8-item SF-Qualiveen was developed for those with multiple sclerosis (MS). The validity of this short form has not been previously examined using modern psychometric techniques, such as the Rasch measurement model. The aim of this study is to test if the short form meets the requirements of the Rasch model. A total of 401 patients with clinically definite MS were given the questionnaire at three neuroscience centres in the UK. A total of 258 patients (64.3% response) completed the questionnaire. The original scale failed to meet the expectations of the Rasch model. A two-testlet solution was sought to account for local dependence, differential item functioning and disordered thresholds. After the modifications were made the scale fitted the model (χ² = 5.93, P = 0.4305), had high internal consistency (α = 0.88) and was unidimensional. SF-Qualiveen is a simple and valid measure of the impact of urinary problems in multiple sclerosis, which meets the requirements of the Rasch measurement model. Summed ordinal scores can be converted to interval-level using the transformation table provided. © 2016 Wiley Periodicals, Inc.
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject specific variations of metabolic activities under altered experimental conditions utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task and discuss here the application of ICA in multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
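A minimal sketch of the decomposition idea using scikit-learn's FastICA on a synthetic subjects-by-voxels matrix: treating voxels as samples yields spatially independent component maps plus per-image usage weights (all dimensions and data below are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_voxels, n_images = 2000, 24            # 24 = subjects x tasks (illustrative)
maps = rng.laplace(size=(3, n_voxels))   # three "functional components"
weights = rng.random((n_images, 3))      # subject/task-specific usage weights
images = weights @ maps + 0.05 * rng.normal(size=(n_images, n_voxels))

ica = FastICA(n_components=3, random_state=0)
est_maps = ica.fit_transform(images.T)   # (n_voxels, 3): independent spatial maps
est_weights = ica.mixing_                # (n_images, 3): usage per subject/task
print(est_maps.shape, est_weights.shape)
```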
Mutation of SIMPLE in Charcot–Marie–Tooth 1C alters production of exosomes
Zhu, Hong; Guariglia, Sara; Yu, Raymond Y. L.; Li, Wenjing; Brancho, Deborah; Peinado, Hector; Lyden, David; Salzer, James; Bennett, Craig; Chow, Chi-Wing
2013-01-01
Charcot–Marie–Tooth (CMT) disease is an inherited neurological disorder. Mutations in the small integral membrane protein of the lysosome/late endosome (SIMPLE) account for the rare autosomal-dominant demyelination in CMT1C patients. Understanding the molecular basis of CMT1C pathogenesis is impeded, in part, by perplexity about the role of SIMPLE, which is expressed in multiple cell types. Here we show that SIMPLE resides within the intraluminal vesicles of multivesicular bodies (MVBs) and inside exosomes, which are nanovesicles secreted extracellularly. Targeting of SIMPLE to exosomes is modulated by positive and negative regulatory motifs. We also find that expression of SIMPLE increases the number of exosomes and secretion of exosome proteins. We engineer a point mutation on the SIMPLE allele and generate a physiological mouse model that expresses CMT1C-mutated SIMPLE at the endogenous level. We find that CMT1C mouse primary embryonic fibroblasts show decreased number of exosomes and reduced secretion of exosome proteins, in part due to improper formation of MVBs. CMT1C patient B cells and CMT1C mouse primary Schwann cells show similar defects. Together the data indicate that SIMPLE regulates the production of exosomes by modulating the formation of MVBs. Dysregulated endosomal trafficking and changes in the landscape of exosome-mediated intercellular communications may place an overwhelming burden on the nervous system and account for CMT1C molecular pathogenesis. PMID:23576546
Simple model for multiple-choice collective decision making
NASA Astrophysics Data System (ADS)
Lee, Ching Hua; Lucas, Andrew
2014-11-01
We describe a simple model of heterogeneous, interacting agents making decisions between n ≥ 2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.
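A toy simulation of this model class, with Gaussian private fields, a uniform social coupling J, and repeated best responses (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, J = 5000, 3, 5.0                        # agents, choices, social coupling
h = rng.normal(0, 1, size=(N, n))             # heterogeneous private fields

choice = h.argmax(axis=1)                     # the J = 0 (noninteracting) optimum
for _ in range(300):                          # asynchronous best-response sweeps
    shares = np.bincount(choice, minlength=n) / N
    idx = rng.choice(N, N // 10, replace=False)
    choice[idx] = (h[idx] + J * shares).argmax(axis=1)

print(np.bincount(choice, minlength=n) / N)
# For large J one choice captures most agents even though all n choices are
# statistically identical: spontaneous symmetry breaking from the interactions.
```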
Integration of multiple theories for the simulation of laser interference lithography processes
NASA Astrophysics Data System (ADS)
Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung
2017-11-01
The periodic structure of laser interference lithography (LIL) fabrication is superior to other lithography technologies. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optics and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time and thus reduce the trial and error that prolongs the LIL process.
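A minimal sketch combining the three ingredients named above on a grid: a two-beam lateral fringe, a vertical standing wave from substrate reflection, and a threshold-dose resist response (the wavelength, angle, and optical constants are assumed values, not the paper's):

```python
import numpy as np

lam = 0.325                        # laser wavelength, um (assumed HeCd line)
theta = np.deg2rad(30)             # half-angle between the two beams
n_res, R = 1.7, 0.3                # resist index, substrate reflectance (assumed)

period = lam / (2 * np.sin(theta))           # lateral interference period
x = np.linspace(0, 2 * period, 200)
z = np.linspace(0, 0.5, 100)[:, None]        # depth into resist, um

lateral = np.cos(np.pi * x / period) ** 2               # two-beam fringe
standing = 1 + R * np.cos(4 * np.pi * n_res * z / lam)  # substrate reflection
dose = lateral * standing                               # exposure field

resist = dose > 0.5 * dose.max()             # threshold model of development
print("period = %.3f um, exposed fraction = %.2f" % (period, resist.mean()))
```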
Alignment and integration of complex networks by hypergraph-based spectral clustering
NASA Astrophysics Data System (ADS)
Michoel, Tom; Nachtergaele, Bruno
2012-11-01
Complex networks possess a rich, multiscale structure reflecting the dynamical and functional organization of the systems they model. Often there is a need to analyze multiple networks simultaneously, to model a system by more than one type of interaction, or to go beyond simple pairwise interactions, but currently there is a lack of theoretical and computational methods to address these problems. Here we introduce a framework for clustering and community detection in such systems using hypergraph representations. Our main result is a generalization of the Perron-Frobenius theorem from which we derive spectral clustering algorithms for directed and undirected hypergraphs. We illustrate our approach with applications for local and global alignment of protein-protein interaction networks between multiple species, for tripartite community detection in folksonomies, and for detecting clusters of overlapping regulatory pathways in directed networks.
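A sketch of hypergraph spectral clustering using a commonly used normalized hypergraph Laplacian, which may differ in detail from the Perron-Frobenius construction derived in the paper; the toy incidence matrix encodes two vertex groups bridged by one hyperedge:

```python
import numpy as np
from sklearn.cluster import KMeans

# incidence matrix: rows = 6 vertices, columns = 4 hyperedges
H = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 0],
              [0, 0, 1, 0]], float)
w = np.ones(H.shape[1])                      # hyperedge weights

Dv = H @ w                                   # vertex degrees
De = H.sum(axis=0)                           # hyperedge sizes
L = np.eye(len(Dv)) - (H * w / De) @ H.T / np.sqrt(np.outer(Dv, Dv))

vals, vecs = np.linalg.eigh(L)               # ascending eigenvalues
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs[:, :2])
print(labels)                                # splits vertices 0-2 from 3-5
```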
Zhou, Xian; Zhong, Kangping; Gao, Yuliang; Sui, Qi; Dong, Zhenghua; Yuan, Jinhui; Wang, Liang; Long, Keping; Lau, Alan Pak Tao; Lu, Chao
2015-04-06
Discrete multi-tone (DMT) modulation is an attractive modulation format for short-reach applications to achieve the best use of available channel bandwidth and signal-to-noise ratio (SNR). In order to realize polarization-multiplexed DMT modulation with direct detection, we derive an analytical transmission model for dual polarizations with intensity modulation and direct detection (IM-DD) in this paper. Based on the model, we propose a novel polarization-interleave-multiplexed DMT modulation with direct detection (PIM-DMT-DD) transmission system, where the polarization de-multiplexing can be achieved by using a simple multiple-input-multiple-output (MIMO) equalizer and the transmission performance is optimized over two distinct received polarization states to eliminate the singularity issue of MIMO demultiplexing algorithms. The feasibility and effectiveness of the proposed PIM-DMT-DD system are investigated via theoretical analyses and simulation studies.
Rapid contemporary evolution and clonal food web dynamics
Jones, Laura E.; Becks, Lutz; Ellner, Stephen P.; Hairston, Nelson G.; Yoshida, Takehito; Fussmann, Gregor F.
2009-01-01
Character evolution that affects ecological community interactions often occurs contemporaneously with temporal changes in population size, potentially altering the very nature of those dynamics. Such eco-evolutionary processes may be most readily explored in systems with short generations and simple genetics. Asexual and cyclically parthenogenetic organisms such as microalgae, cladocerans and rotifers, which frequently dominate freshwater plankton communities, meet these requirements. Multiple clonal lines can coexist within each species over extended periods, until either fixation occurs or a sexual phase reshuffles the genetic material. When clones differ in traits affecting interspecific interactions, within-species clonal dynamics can have major effects on the population dynamics. We first consider a simple predator–prey system with two prey genotypes, parametrized with data from a well-studied experimental system, and explore how the extent of differences in defence against predation within the prey population determine dynamic stability versus instability of the system. We then explore how increased potential for evolution affects the community dynamics in a more general community model with multiple predator and multiple prey genotypes. These examples illustrate how microevolutionary ‘details’ that enhance or limit the potential for heritable phenotypic change can have significant effects on contemporaneous community-level dynamics and the persistence and coexistence of species. PMID:19414472
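A minimal sketch of the two-clone idea with generic parameter values (not the calibrated rotifer-algae chemostat model from this line of work): one prey clone trades growth rate for defense against a shared predator, and the clonal composition feeds back on the predator-prey dynamics:

```python
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([1.0, 0.7])        # undefended clone grows faster...
a = np.array([0.02, 0.002])     # ...but is far more vulnerable to predation
K, c, m = 1000.0, 0.3, 0.4      # prey capacity, conversion, predator mortality

def rhs(t, y):
    p, z = y[:2], y[2]                         # prey clones, predator
    dp = r * p * (1 - p.sum() / K) - a * p * z
    dz = c * (a * p).sum() * z - m * z
    return [*dp, dz]

sol = solve_ivp(rhs, (0, 200), [500.0, 10.0, 5.0], max_step=0.5)
p1, p2, z = sol.y
print("final clone fractions: undefended %.2f, defended %.2f"
      % (p1[-1] / (p1[-1] + p2[-1]), p2[-1] / (p1[-1] + p2[-1])))
```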
Rising tides, cumulative impacts and cascading changes to estuarine ecosystem functions.
O'Meara, Theresa A; Hillman, Jenny R; Thrush, Simon F
2017-08-31
In coastal ecosystems, climate change affects multiple environmental factors, yet most predictive models are based on simple cause-and-effect relationships. Multiple stressor scenarios are difficult to predict because they can create a ripple effect through networked ecosystem functions. Estuarine ecosystem function relies on an interconnected network of physical and biological processes. Estuarine habitats play critical roles in service provision and represent global hotspots for organic matter processing, nutrient cycling and primary production. Within these systems, we predicted functional changes in the impacts of land-based stressors, mediated by changing light climate and sediment permeability. Our in-situ field experiment manipulated sea level, nutrient supply, and mud content. We used these stressors to determine how interacting environmental stressors influence ecosystem function and compared results with data collected along elevation gradients to substitute space for time. We show non-linear, multi-stressor effects deconstruct networks governing ecosystem function. Sea level rise altered nutrient processing and impacted broader estuarine services ameliorating nutrient and sediment pollution. Our experiment demonstrates how the relationships between nutrient processing and biological/physical controls degrade with environmental stress. Our results emphasise the importance of moving beyond simple physically-forced relationships to assess consequences of climate change in the context of ecosystem interactions and multiple stressors.
Normal uniform mixture differential gene expression detection for cDNA microarrays
Dean, Nema; Raftery, Adrian E
2005-01-01
Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807
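A from-scratch EM sketch of the normal-uniform mixture at the heart of the method (synthetic effect sizes; the NUDGE package itself adds normalization steps not shown here): null genes follow a normal around zero, differentially expressed genes a uniform over the observed range, and the output is a per-gene probability of differential expression.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d = np.concatenate([rng.normal(0, 0.3, 4500),      # null genes
                    rng.uniform(-4, 4, 500)])      # DE genes (synthetic)

lo, hi = d.min(), d.max()
u = 1.0 / (hi - lo)                                # uniform density over the range
pi, sigma = 0.1, d.std()                           # initial guesses
for _ in range(100):                               # EM iterations
    p_de = pi * u
    p_null = (1 - pi) * norm.pdf(d, 0, sigma)
    z = p_de / (p_de + p_null)                     # E-step: P(DE | d_i)
    pi = z.mean()                                  # M-step: mixing weight
    sigma = np.sqrt(((1 - z) * d**2).sum() / (1 - z).sum())

print("estimated DE fraction: %.3f" % pi)          # ~0.1
```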
Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E
2016-06-01
Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
Mapping tree density in forests of the southwestern USA using Landsat 8 data
Humagain, Kamal; Portillo-Quintero, Carlos; Cox, Robert D.; Cain, James W.
2017-01-01
The increase of tree density in forests of the American Southwest promotes extreme fire events, understory biodiversity losses, and degraded habitat conditions for many wildlife species. To ameliorate these changes, managers and scientists have begun planning treatments aimed at reducing fuels and increasing understory biodiversity. However, spatial variability in tree density across the landscape is not well-characterized, and if better known, could greatly influence planning efforts. We used reflectance values from individual Landsat 8 bands (bands 2, 3, 4, 5, 6, and 7) and calculated vegetation indices (difference vegetation index, simple ratios, and normalized vegetation indices) to estimate tree density in an area planned for treatment in the Jemez Mountains, New Mexico, characterized by multiple vegetation types and a complex topography. Because different vegetation types have different spectral signatures, we derived models with multiple predictor variables for each vegetation type, rather than using a single model for the entire project area, and compared the model-derived values to values collected from on-the-ground transects. Among conifer-dominated areas (73% of the project area), the best models (as determined by corrected Akaike Information Criteria (AICc)) included Landsat bands 2, 3, 4, and 7 along with simple ratios, normalized vegetation indices, and the difference vegetation index (R² values for ponderosa: 0.47, piñon-juniper: 0.52, and spruce-fir: 0.66). On the other hand, in aspen-dominated areas (9% of the project area), the best model included individual bands 4 and 2, simple ratio, and normalized vegetation index (R² value: 0.97). Most areas dominated by ponderosa, piñon-juniper, or spruce-fir had more than 100 trees per hectare. About 54% of the study area has medium to high density of trees (100–1000 trees/hectare), and a small fraction (4.5%) of the area has very high density (>1000 trees/hectare). Our results provide a better understanding of tree density for identifying areas in need of treatment and planning for more effective treatment. Our analysis also provides an integrated method of estimating tree density across complex landscapes that could be useful for further restoration planning.
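A sketch of the per-vegetation-type workflow: compute the indices from the red and near-infrared bands and fit a separate multiple linear regression for each class (the plot-level arrays below are placeholders for the transect data, and the density formula is synthetic):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.25, 80)       # Landsat 8 band 4 reflectance (placeholder)
nir = rng.uniform(0.10, 0.50, 80)       # Landsat 8 band 5 reflectance (placeholder)
density = 900 * (nir - red) / (nir + red) + rng.normal(0, 40, 80)  # trees/ha

ndvi = (nir - red) / (nir + red)        # normalized difference vegetation index
sr = nir / red                          # simple ratio
dvi = nir - red                         # difference vegetation index

veg_type = rng.choice(["ponderosa", "pinon-juniper"], 80)   # per-plot class
for vt in np.unique(veg_type):
    m = veg_type == vt
    X = np.column_stack([ndvi[m], sr[m], dvi[m], red[m]])
    r2 = LinearRegression().fit(X, density[m]).score(X, density[m])
    print(vt, "R^2 = %.2f" % r2)
```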
Multiple Contact Dates and SARS Incubation Periods
2004-01-01
Many severe acute respiratory syndrome (SARS) patients have multiple possible incubation periods due to multiple contact dates. Multiple contact dates cannot be used in standard statistical analytic techniques, however. I present a simple spreadsheet-based method that uses multiple contact dates to calculate the possible incubation periods of SARS. PMID:15030684
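The calculation itself is simple enough to sketch in plain Python in place of a spreadsheet (the dates below are invented): each possible contact date yields one candidate incubation period, and together they bound the true value.

```python
from datetime import date

contacts = [date(2003, 3, 10), date(2003, 3, 14), date(2003, 3, 17)]  # exposures
onset = date(2003, 3, 21)                                             # symptom onset

periods = sorted((onset - c).days for c in contacts)
print("possible incubation periods (days):", periods)   # [4, 7, 11]
print("bounds: %d-%d days" % (periods[0], periods[-1]))
```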
NASA Astrophysics Data System (ADS)
Froggatt, C. D.
2003-01-01
The quark-lepton mass problem and the ideas of mass protection are reviewed. The hierarchy problem and suggestions for its resolution, including Little Higgs models, are discussed. The Multiple Point Principle (MPP) is introduced and used within the Standard Model (SM) to predict the top quark and Higgs particle masses. Mass matrix ansätze are considered; in particular we discuss the lightest family mass generation model, in which all the quark mixing angles are successfully expressed in terms of simple expressions involving quark mass ratios. It is argued that an underlying chiral flavour symmetry is responsible for the hierarchical texture of the fermion mass matrices. The phenomenology of neutrino mass matrices is briefly discussed.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
New generation of elastic network models.
López-Blanco, José Ramón; Chacón, Pablo
2016-04-01
The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
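A minimal sketch of one common ENM flavor, the anisotropic network model: Hookean springs between residues within a cutoff define a Hessian whose low-frequency eigenvectors approximate the large-scale motions (the coordinates here are a synthetic helix and the cutoff is arbitrary, not a recommended parameterization):

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 40)
xyz = np.column_stack([np.cos(t), np.sin(t), 0.15 * t])   # fake C-alpha trace
n, cutoff, gamma = len(xyz), 1.2, 1.0

H = np.zeros((3 * n, 3 * n))                 # ANM Hessian
for i in range(n):
    for j in range(i + 1, n):
        d = xyz[j] - xyz[i]
        r2 = d @ d
        if r2 < cutoff**2:                   # spring between contacting residues
            block = -gamma * np.outer(d, d) / r2
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block
            H[3*j:3*j+3, 3*j:3*j+3] -= block

vals, vecs = np.linalg.eigh(H)
print("six ~zero modes:", np.round(vals[:6], 8))  # rigid-body translations/rotations
print("first internal mode eigenvalue:", vals[6])
```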
Observability of discretized partial differential equations
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1988-01-01
It is shown that complete observability of the discrete model used to assimilate data from a linear partial differential equation (PDE) system is necessary and sufficient for asymptotic stability of the data assimilation process. The observability theory for discrete systems is reviewed and applied to obtain simple observability tests for discretized constant-coefficient PDEs. Examples are used to show how numerical dispersion can result in discrete dynamics with multiple eigenvalues, thereby detracting from observability.
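A small sketch of the standard rank test on a discretized constant-coefficient PDE (upwind advection on a periodic grid, observing one grid point; the grid size and Courant number are arbitrary choices):

    import numpy as np

    n, c = 8, 0.5                                  # grid points, Courant number
    # Upwind discretization of u_t + a*u_x = 0 on a periodic grid:
    # u_i(k+1) = (1-c)*u_i(k) + c*u_{i-1}(k).
    A = (1 - c) * np.eye(n) + c * np.roll(np.eye(n), -1, axis=1)
    C = np.zeros((1, n)); C[0, 0] = 1.0            # observe a single grid point

    # Observability matrix [C; CA; ...; CA^(n-1)]: full rank <=> observable.
    obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    print("observability matrix rank:", np.linalg.matrix_rank(obs), "of", n)
    # Repeated eigenvalues of A (e.g., produced by numerical dispersion) would
    # make a single-point observation insufficient and drop this rank below n.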
A Solution to the Cosmic Conundrum including Cosmological Constant and Dark Energy Problems
NASA Astrophysics Data System (ADS)
Singh, A.
2009-12-01
A comprehensive solution to the cosmic conundrum is presented that also resolves key paradoxes of quantum mechanics and relativity. A simple mathematical model, the Gravity Nullification model (GNM), is proposed that integrates the missing physics of the spontaneous relativistic conversion of mass to energy into the existing physics theories, specifically a simplified general theory of relativity. Mechanistic mathematical expressions are derived for a relativistic universe expansion, which predict both the observed linear Hubble expansion in the nearby universe and the accelerating expansion exhibited by the supernova observations. The integrated model addresses the key questions haunting physics and Big Bang cosmology. It also provides a fresh perspective on the misconceived birth and evolution of the universe, especially the creation and dissolution of matter. The proposed model eliminates singularities from existing models and the need for the incredible and unverifiable assumptions including the superluminous inflation scenario, multiple universes, multiple dimensions, Anthropic principle, and quantum gravity. GNM predicts the observed features of the universe without any explicit consideration of time as a governing parameter.
Using Nucleon Multiplicities to Analyze Anti-Neutrino Interactions with Nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elkins, Miranda J.
The most commonly used, simple interaction models have not accurately described the nuclear effects on either neutrino-nucleus or anti-neutrino-nucleus interactions. Comparison of data collected by the MINERvA experiment and these models shows a discrepancy in the reconstructed hadronic energy distribution at momentum transfers below 0.8 GeV. Two nuclear model effects that were previously not modeled are possible culprits of this discrepancy. The first is known as the random-phase approximation (RPA) and the second is the addition of a meson exchange current process, also known as two-particle two-hole (2p2h) because it results in two particles leaving the nucleus with two holes left in their place. For the first time, a neutron counting software algorithm has been created and used to compare the multiplicity and spatial distributions of neutrons between the simulation and data. There is localized sensitivity to the RPA and 2p2h effects, and both help the simulation better describe the data. Additional systematic or model effects are present which cause the simulation to overproduce neutrons, and potential causes are discussed.
Detection of epistatic effects with logic regression and a classical linear regression model.
Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata
2014-02-01
To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes that cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, the Cockerham approach often fails to identify them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though the logic regression approach requires a larger number of models to be considered (and hence more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase of power to detect such interactions compared to the Cockerham approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.
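A toy simulation of the point (synthetic genotypes; scikit-learn assumed available): an XOR-type risk pattern defeats a main-effects model but is captured once the Boolean feature is supplied, which is the kind of term logic regression searches for automatically.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 2000
    g1, g2 = rng.integers(0, 2, n), rng.integers(0, 2, n)   # binary genotypes
    # Risk rises only when the two loci differ: an XOR pattern that no
    # sum of additive main effects can represent.
    y = rng.binomial(1, np.where(g1 != g2, 0.6, 0.2))

    additive = np.column_stack([g1, g2])             # Cockerham-style main effects
    logic = np.column_stack([g1, g2, g1 ^ g2])       # plus the Boolean XOR term
    for name, X in [("additive", additive), ("logic", logic)]:
        acc = LogisticRegression().fit(X, y).score(X, y)
        print(f"{name} model accuracy: {acc:.3f}")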
Long-term forecasting of internet backbone traffic.
Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe
2005-09-01
We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long-term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long-term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
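A compact sketch of the two-component scheme on synthetic traffic (PyWavelets and statsmodels assumed; the wavelet, decomposition level, and ARIMA order are illustrative choices, not the paper's tuning):

    import numpy as np
    import pywt
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    t = np.arange(1024)
    # Synthetic "traffic": growing trend + 12-h periodicity + noise
    # (8 samples per cycle, i.e., 90-minute sampling).
    y = 0.01 * t + np.sin(2 * np.pi * t / 8) + 0.3 * rng.standard_normal(t.size)

    # Wavelet MRA: keep only the coarse approximation -> long-term trend.
    coeffs = pywt.wavedec(y, "db4", level=6)
    coeffs_trend = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(coeffs_trend, "db4")[: y.size]

    # Low-order ARIMA on the detrended fluctuations, then forecast.
    fluct = y - trend
    fit = ARIMA(fluct, order=(2, 0, 1)).fit()
    print("next 8 samples of fluctuation forecast:", np.round(fit.forecast(8), 3))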
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
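The Gauss-Hermite idea in miniature, for one cluster of a random-intercept logistic model (all numbers hypothetical): the random effect is integrated out with Hermite nodes after the usual change of variables.

    import numpy as np
    from numpy.polynomial.hermite import hermgauss

    def cluster_marginal_lik(y, x, beta, sigma, n_nodes=20):
        """Marginal likelihood of one cluster's binary outcomes under a
        random-intercept logistic model, via Gauss-Hermite quadrature."""
        nodes, weights = hermgauss(n_nodes)
        lik = 0.0
        for z, w in zip(nodes, weights):
            b = np.sqrt(2.0) * sigma * z       # change of variables for N(0, sigma^2)
            eta = beta[0] + beta[1] * x + b
            p = 1.0 / (1.0 + np.exp(-eta))
            lik += w * np.prod(p**y * (1 - p)**(1 - y))
        return lik / np.sqrt(np.pi)

    y = np.array([1, 0, 1, 1]); x = np.array([0.2, -1.0, 0.5, 1.3])
    print(cluster_marginal_lik(y, x, beta=(0.1, 0.8), sigma=1.2))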
Azevedo-Silva, J; Queirós, O; Baltazar, F; Ułaszewski, S; Goffeau, A; Ko, Y H; Pedersen, P L; Preto, A; Casal, M
2016-08-01
At the beginning of the twenty-first century, 3-bromopyruvate (3BP), a simple alkylating chemical compound, was presented to the scientific community as a potent anticancer agent, able to cause rapid toxicity to cancer cells without bystander effects on normal tissues. The altered metabolism of cancers, an essential hallmark of their progression, also became their Achilles heel by facilitating 3BP's selective entry and specific targeting. Treatment with 3BP has been administered in several types of cancer models, both in vitro and in vivo, either alone or in combination with other anticancer therapeutic approaches. These studies clearly demonstrate 3BP's broad action against multiple cancer types. Clinical trials using 3BP are needed to further support its anticancer efficacy against multiple cancer types, thus making it available to the more than 30 million patients living with cancer worldwide. This review discusses current knowledge about 3BP in cancer and the possibility of its use in future clinical applications with respect to safety and treatment issues.
Beauchaine, Theodore P.; Gatzke-Kopp, Lisa M.
2014-01-01
During the last quarter century, developmental psychopathology has become increasingly inclusive and now spans disciplines ranging from psychiatric genetics to primary prevention. As a result, developmental psychopathologists have extended traditional diathesis–stress and transactional models to include causal processes at and across all relevant levels of analysis. Such research is embodied in what is known as the multiple levels of analysis perspective. We describe how multiple levels of analysis research has informed our current thinking about antisocial and borderline personality development among trait impulsive and therefore vulnerable individuals. Our approach extends the multiple levels of analysis perspective beyond simple Biology × Environment interactions by evaluating impulsivity across physiological systems (genetic, autonomic, hormonal, neural), psychological constructs (social, affective, motivational), developmental epochs (preschool, middle childhood, adolescence, adulthood), sexes (male, female), and methods of inquiry (self-report, informant report, treatment outcome, cardiovascular, electrophysiological, neuroimaging). By conducting our research using any and all available methods across these levels of analysis, we have arrived at a developmental model of trait impulsivity that we believe confers a greater understanding of this highly heritable trait and captures at least some heterogeneity in key behavioral outcomes, including delinquency and suicide. PMID:22781868
Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications.
Cabezas, Javier; Gelado, Isaac; Stone, John E; Navarro, Nacho; Kirk, David B; Hwu, Wen-Mei
2015-05-01
Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models forces programmers to resort to multiple code versions, complex data copy steps and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems as well as adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that supports key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in an exemplar heterogeneous system, peer DMA and double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware. The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of different capabilities. We further argue that simple interfaces such as HPE are needed for most applications to benefit from advanced hardware features in practice.
The propagation of sound in narrow street canyons
NASA Astrophysics Data System (ADS)
Iu, K. K.; Li, K. M.
2002-08-01
This paper addresses an important problem of predicting sound propagation in narrow street canyons with width less than 10 m, which are commonly found in a built-up urban district. Major noise sources are, for example, air conditioners installed on building facades and powered mechanical equipment for repair and construction work. Interference effects due to multiple reflections from building facades and ground surfaces are important contributions in these complex environments. Although the studies of sound transmission in urban areas can be traced back to as early as the 1960s, the resulting mathematical and numerical models are still unable to predict sound fields accurately in city streets. This is understandable because sound propagation in city streets involves many intriguing phenomena such as reflections and scattering at the building facades, diffusion effects due to recessions and protrusions of building surfaces, geometric spreading, and atmospheric absorption. This paper describes the development of a numerical model for the prediction of sound fields in city streets. To simplify the problem, a typical city street is represented by two parallel reflecting walls and a flat impedance ground. The numerical model is based on a simple ray theory that takes account of multiple reflections from the building facades. The sound fields due to the point source and its images are summed coherently such that mutual interference effects between contributing rays can be included in the analysis. Indoor experiments are conducted in an anechoic chamber. Experimental data are compared with theoretical predictions to establish the validity and usefulness of this simple model. Outdoor experimental measurements have also been conducted to further validate the model. copyright 2002 Acoustical Society of America.
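A bare-bones version of such an image-source sum (perfectly reflecting facades, a single frequency, and hypothetical geometry; the paper's impedance ground and measured facade properties are omitted): spherical waves from the source and its mirror images are summed coherently so their interference survives.

    import numpy as np

    f, c0 = 500.0, 343.0                 # frequency (Hz), speed of sound (m/s)
    k = 2 * np.pi * f / c0
    W = 8.0                              # canyon width in metres
    src = np.array([2.0, 0.0, 1.5])      # source: x across street, y along, z height
    rcv = np.array([5.0, 20.0, 1.5])

    # Image sources from repeated reflections in the facades at x=0 and x=W.
    total = 0.0 + 0.0j
    for m in range(-30, 31):
        # Mirror positions: images at 2mW + x_s and 2mW - x_s.
        for xs in (2 * m * W + src[0], 2 * m * W - src[0]):
            r = np.linalg.norm([xs - rcv[0], src[1] - rcv[1], src[2] - rcv[2]])
            total += np.exp(1j * k * r) / r   # coherent sum: contributions interfere
    print("level relative to free field (dB):",
          round(20 * np.log10(abs(total) * np.linalg.norm(src - rcv)), 2))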
Waubert de Puiseau, Berenike; Greving, Sven; Aßfalg, André; Musch, Jochen
2017-09-01
Aggregating information across multiple testimonies may improve crime reconstructions. However, different aggregation methods are available, and research on which method is best suited for aggregating multiple observations is lacking. Furthermore, little is known about how variance in the accuracy of individual testimonies impacts the performance of competing aggregation procedures. We investigated the superiority of aggregation-based crime reconstructions involving multiple individual testimonies and whether this superiority varied as a function of the number of witnesses and the degree of heterogeneity in witnesses' ability to accurately report their observations. Moreover, we examined whether heterogeneity in competence levels differentially affected the relative accuracy of two aggregation procedures: a simple majority rule, which ignores individual differences, and the more complex general Condorcet model (Romney et al., Am Anthropol 88(2):313-338, 1986; Batchelder and Romney, Psychometrika 53(1):71-92, 1988), which takes into account differences in competence between individuals. In total, 121 participants viewed a simulated crime and subsequently answered 128 true/false questions about the crime. We experimentally generated groups of witnesses with homogeneous or heterogeneous competences. Both the majority rule and the general Condorcet model provided more accurate reconstructions of the observed crime than individual testimonies. The superiority of aggregated crime reconstructions involving multiple individual testimonies increased with an increasing number of witnesses. Crime reconstructions were most accurate when competences were heterogeneous and aggregation was based on the general Condorcet model. We argue that a formal aggregation should be considered more often when eyewitness testimonies have to be assessed and that the general Condorcet model provides a good framework for such aggregations.
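A toy contrast between the two kinds of rule (simulated witnesses; the log-odds weighting below is a crude stand-in for the general Condorcet model, which estimates competences from the data rather than assuming them known):

    import numpy as np

    rng = np.random.default_rng(4)
    truth = rng.integers(0, 2, 128)                      # true answers to 128 items
    competences = np.array([0.55, 0.6, 0.7, 0.9, 0.95])  # heterogeneous witnesses

    # Each witness answers correctly with probability equal to their competence.
    correct = rng.random((len(competences), truth.size)) < competences[:, None]
    answers = np.where(correct, truth, 1 - truth)

    majority = (answers.mean(axis=0) > 0.5).astype(int)
    # Competence-weighted vote with log-odds weights (naive-Bayes aggregation).
    w = np.log(competences / (1 - competences))
    weighted = (w @ (2 * answers - 1) > 0).astype(int)

    print("majority accuracy:", (majority == truth).mean())
    print("weighted accuracy:", (weighted == truth).mean())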
A data-led comparison of simple canopy radiative transfer models for the boreal forest
NASA Astrophysics Data System (ADS)
Reid, T.; Essery, R.; Rutter, N.; King, M.
2012-12-01
Given the computational complexity of numerical weather and climate models, it is worthwhile developing very simple parameterizations for processes such as the transmission of radiation through forest canopies. For this reason, the land surface schemes in global models, and most snow hydrological models, tend to use simple one-dimensional approaches based on Beer's Law or two-stream approximations. Such approaches assume a continuous canopy structure that may not be suitable for the varied, heterogeneous forest cover in boreal regions, especially in winter when snow in the canopy and on the ground may either block radiation or produce multiple reflections between the ground and the trees. There is great benefit in comparing models to real transmissivity values calculated from radiation measurements below and above Arctic canopies. In particular, there is a lack of data for leafless boreal deciduous forests, where canopy gaps are prevalent even at low solar elevation angles near the horizon. In this study, models are compared to radiation data collected in an area of boreal birch forest near Abisko, Sweden in March/April 2011 and mixed conifer forest at Sodankylä, Finland in March/April 2012. Arrays comprising ten shortwave pyranometers were deployed for periods of up to 50 days, under forest plots of varying canopy structures and densities. In addition, global and diffuse shortwave irradiances were recorded at nearby open sites representing the top-of-canopy conditions. A model is developed that explicitly accounts for both diffuse radiation and direct beam transmission on a 5-minute timestep, by using upward-looking hemispherical photographs taken from every pyranometer site. This model reproduces measured transmissivity, although with a slight underestimation, especially at low solar elevations - this could be attributed to multiple reflections that are not accounted for in the model. On the other hand, models based on Beer's Law tend to underestimate the canopy transmissivity significantly, especially for leafless birch canopies where the required assumption of a continuous canopy breaks down. These findings are important for the often sparse, heterogeneous forest cover in boreal regions, where forest edges and canopy gaps are plentiful. They could also have an impact on estimations of overall land surface albedo. Moreover, all models are sensitive to the partitioning of top-of-canopy radiation into its direct and diffuse components, which is complicated by the low solar elevations in the Arctic. More research is required to decide the best way of quantifying the diffuse fraction, using data alongside both physical and empirical models.
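For reference, a sketch of the Beer's-Law-type parameterization the study tests, with a simple direct/diffuse split (the extinction coefficient, LAI, and diffuse transmissivity below are placeholder values, not fitted ones):

    import numpy as np

    def beer_transmissivity(lai, elev_deg, k=0.5):
        """Direct-beam canopy transmissivity from a Beer's Law parameterization:
        tau = exp(-k * LAI / sin(elevation)). Assumes a continuous canopy."""
        mu = np.sin(np.radians(elev_deg))
        return np.exp(-k * lai / mu)

    def total_transmissivity(lai, elev_deg, diffuse_fraction, tau_diffuse=0.45):
        # Weight direct and diffuse components; tau_diffuse would come from
        # hemispherical photographs or a sky-view factor in practice.
        tau_dir = beer_transmissivity(lai, elev_deg)
        return (1 - diffuse_fraction) * tau_dir + diffuse_fraction * tau_diffuse

    for elev in (5, 15, 35):   # low solar elevations are the hard Arctic case
        print(elev, "deg:", round(total_transmissivity(1.8, elev, 0.4), 3))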
Multiple rings around Wolf-Rayet evolution
NASA Technical Reports Server (NTRS)
Marston, A. P.
1995-01-01
We present optical narrow-band imaging of multiple rings existing around galactic Wolf-Rayet (WR) stars. The existence of multiple rings of material around Wolf-Rayet stars clearly illustrates the various phases of evolution that massive stars go through. The objects presented here show evidence of a three stage evolution. O stars produce an outer ring with the cavity being partially filled by ejecta from a red supergiant or luminous blue variable phase. A wind from the Wolf-Rayet star then passes into the ejecta materials. A simple model is presented for this three stage evolution. Using observations of the size and dynamics of the rings allows estimates of time scales for each stage of the massive star evolution. These are consistent with recent theoretical evolutionary models. Mass estimates for the ejecta, from the model presented, are consistent with previous ring nebula mass estimates from IRAS data, showing a number of ring nebulae to have large masses, most of which must be in the form of neutral material. Finally, we illustrate how further observations will allow the determination of many of the parameters of the evolution of massive stars such as total mass loss, average mass loss rates, stellar abundances, and total time spent in each evolutionary phase.
NASA Astrophysics Data System (ADS)
Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.
2012-12-01
Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with large numbers of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on the hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selections and maximize the independency from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g., snow, water cycle, and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and the use of multiple satellite observations with this framework is an effective way to improve terrestrial biosphere models.
Computational models of the Posner simple and choice reaction time tasks
Feher da Silva, Carolina; Baldo, Marcus V. C.
2015-01-01
The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, inputs from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997
Kleiman, Evan M; Riskind, John H
2013-01-01
While perceived social support has received considerable research attention as a protective factor against suicide ideation, little attention has been given to the mechanisms that mediate its effects. We integrated two theoretical models, Joiner's (2005) interpersonal theory of suicide and Leary's (Leary, Tambor, Terdal, & Downs, 1995) sociometer theory of self-esteem, to investigate two hypothesized mechanisms: utilization of social support and self-esteem. Specifically, we hypothesized that individuals must utilize the social support they perceive for it to increase self-esteem, which in turn buffers them from suicide ideation. Participants were 172 college students who completed measures of social support, self-esteem, and suicide ideation. Tests of simple mediation indicated that utilization of social support and self-esteem may each individually help to mediate the perceived social support/suicide ideation relationship. Additionally, a test of multiple mediators using bootstrapping supported the hypothesized multiple-mediator model. The use of a cross-sectional design limited our ability to establish true cause-and-effect relationships. Results suggested that utilized social support and self-esteem both operate as individual mediators of the perceived social support/suicide ideation relationship. Results further suggested, in a comprehensive model, that perceived social support buffers suicide ideation through utilization of social support and increases in self-esteem.
Animal models to study microRNA function
Pal, Arpita S.; Kasinski, Andrea L.
2018-01-01
The discovery of the microRNAs, lin-4 and let-7, as critical mediators of normal development in Caenorhabditis elegans and their conservation throughout evolution has spearheaded research towards identifying novel roles of microRNAs in other cellular processes. To accurately elucidate these fundamental functions, especially in the context of an intact organism, various microRNA transgenic models have been generated and evaluated. Transgenic C. elegans (worms), Drosophila melanogaster (flies), Danio rerio (zebrafish), and Mus musculus (mouse) have contributed immensely towards uncovering the roles of multiple microRNAs in cellular processes such as proliferation, differentiation, and apoptosis, pathways that are severely altered in human diseases such as cancer. The simple model organisms, C. elegans, D. melanogaster, and D. rerio, do not develop cancers, but have proved to be convenient systems in microRNA research, especially in characterizing the microRNA biogenesis machinery, which is often dysregulated during human tumorigenesis. The microRNA-dependent events delineated via these simple in vivo systems have been further verified in vitro, and in more complex models of cancers, such as M. musculus. The focus of this review is to provide an overview of the important contributions made in the microRNA field using model organisms. The simple model systems provided the basis for the importance of microRNAs in normal cellular physiology, while the more complex animal systems provided evidence for the role of microRNA dysregulation in cancers. Highlights include an overview of the various strategies used to generate transgenic organisms and a review of the use of transgenic mice for evaluating pre-clinical efficacy of microRNA-based cancer therapeutics. PMID:28882225
Adjusted variable plots for Cox's proportional hazards regression model.
Hall, C B; Zeger, S L; Bandeen-Roche, K J
1996-01-01
Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fitted to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
An efficient approach to ARMA modeling of biological systems with multiple inputs and delays
NASA Technical Reports Server (NTRS)
Perrott, M. H.; Cohen, R. J.
1996-01-01
This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest number of parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
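A minimal multi-input ARX fit by least squares (synthetic system with a known 3-step input delay; the paper's automatic order and delay search is not reproduced here, the correct lags are simply assumed):

    import numpy as np

    rng = np.random.default_rng(5)
    N = 500
    u1, u2 = rng.standard_normal(N), rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(3, N):
        # True system: AR(1) output plus two inputs, u2 acting with a 3-step delay.
        y[t] = 0.6 * y[t-1] + 0.8 * u1[t-1] - 0.4 * u2[t-3] + 0.1 * rng.standard_normal()

    # Regressor matrix [y(t-1), u1(t-1), u2(t-3)] for t = 3..N-1, solved by least squares.
    X = np.column_stack([y[2:-1], u1[2:-1], u2[:-3]])
    theta, *_ = np.linalg.lstsq(X, y[3:], rcond=None)
    print("estimated [a1, b1, b2]:", np.round(theta, 3))   # ~ [0.6, 0.8, -0.4]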
Multiple memory stores and operant conditioning: a rationale for memory's complexity.
Meeter, Martijn; Veldkamp, Rob; Jin, Yaochu
2009-02-01
Why does the brain contain more than one memory system? Genetic algorithms can play a role in elucidating this question. Here, model animals were constructed containing a dorsal striatal layer that controlled actions, and a ventral striatal layer that controlled a dopaminergic learning signal. Both layers could gain access to three modeled memory stores, but such access was penalized as energy expenditure. Model animals were then selected on their fitness in simulated operant conditioning tasks. Results suggest that having access to multiple memory stores and their representations is important in learning to regulate dopamine release, as well as in contextual discrimination. For simple operant conditioning, as well as stimulus discrimination, hippocampal compound representations turned out to suffice, a counterintuitive result given findings that hippocampal lesions tend not to affect performance in such tasks. We argue that there is in fact evidence to support a role for compound representations and the hippocampus in even the simplest conditioning tasks.
The effect of data structures on INGRES performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creighton, J.R.
1987-01-01
Computer experiments were conducted to determine the effect of using Heap, ISAM, Hash and B-tree data structures for INGRES relations. Average times for retrieve, append and update were determined for searches by unique key and non-key data. The experiments were conducted on relations of approximately 1000 tuples of 332 byte width. Multiple operations were performed, where appropriate, to obtain average times. Simple models of the data structures are presented and shown to be consistent with experimental results. The models can be used to predict performance, and to select the appropriate data structure for various applications.
Interesting examples of supervised continuous variable systems
NASA Technical Reports Server (NTRS)
Chase, Christopher; Serrano, Joe; Ramadge, Peter
1990-01-01
The authors analyze two simple deterministic flow models for multiple buffer servers which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of the closed loop behavior: one is eventually periodic, the other is chaotic. The first example exhibits chaotic behavior that could be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete time systems where the controller can choose from a set of transition maps to implement.
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
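A toy version of performance-based ensemble weighting in the spirit of the upgraded REA method (the biases and projections below are invented; the actual method combines multiple variables and statistics, and drops the original convergence criterion as described above):

    import numpy as np

    # Hypothetical biases of 5 models against observations for two metrics
    # (e.g., mean temperature and precipitation), plus their projected changes.
    bias = np.array([[0.3, 1.2], [0.8, 0.4], [1.5, 1.1], [0.2, 0.3], [0.9, 2.0]])
    projected_change = np.array([1.8, 2.4, 3.1, 2.0, 2.9])

    # Performance weight per model: inverse of its mean absolute bias across metrics.
    w = 1.0 / bias.mean(axis=1)
    w /= w.sum()

    print("simple average:  ", projected_change.mean())
    print("weighted average:", w @ projected_change)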
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
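A quick numerical check of the approximation n_t ≈ E[n_t] (in coalescent time units; the deterministic curve here is Euler integration of dn/dt = -n(n-1)/2, one of the simple approximating functions alluded to above):

    import numpy as np

    rng = np.random.default_rng(6)
    n0, t_end = 50, 1.0

    def simulate_nt(n0, t_end):
        # Pure-death coalescent: k lineages merge at rate k(k-1)/2.
        k, t = n0, 0.0
        while k > 1:
            t += rng.exponential(2.0 / (k * (k - 1)))
            if t > t_end:
                break
            k -= 1
        return k

    sims = [simulate_nt(n0, t_end) for _ in range(2000)]

    # Deterministic approximation: dn/dt = -n(n-1)/2, small Euler steps.
    n, dt = float(n0), 1e-4
    for _ in range(int(t_end / dt)):
        n -= n * (n - 1) / 2 * dt

    print("mean simulated n_t:", np.mean(sims))
    print("deterministic n_t: ", round(n, 2))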
Nielsen, Morten; Andreatta, Massimo
2016-03-30
Binding of peptides to MHC class I molecules (MHC-I) is essential for antigen presentation to cytotoxic T-cells. Here, we demonstrate how a simple alignment step allowing insertions and deletions in a pan-specific MHC-I binding machine-learning model enables combining information across both multiple MHC molecules and peptide lengths. This pan-allele/pan-length algorithm significantly outperforms state-of-the-art methods, and captures differences in the length profile of binders to different MHC molecules leading to increased accuracy for ligand identification. Using this model, we demonstrate that percentile ranks in contrast to affinity-based thresholds are optimal for ligand identification due to uniform sampling of the MHC space. We have developed a neural network-based machine-learning algorithm leveraging information across multiple receptor specificities and ligand length scales, and demonstrated how this approach significantly improves the accuracy for prediction of peptide binding and identification of MHC ligands. The method is available at www.cbs.dtu.dk/services/NetMHCpan-3.0.
New data model with better functionality for VLab
NASA Astrophysics Data System (ADS)
da Silveira, P. R.; Wentzcovitch, R. M.; Karki, B. B.
2009-12-01
The VLab infrastructure and architecture was further developed to allow for several new features. First, workflows for first principles calculations of thermodynamics properties and static elasticity programmed in Java as Web Services can now be executed by multiple users. Second, jobs generated by these workflows can now be executed in batch in multiple servers. A simple internal schedule was implemented to handle hundreds of execution packages generated by multiple users and avoid the overload on servers. Third, a new data model was implemented to guarantee integrity of a project (workflow execution) in case of failure. The latter can happen in an execution package or in a workflow phase. By recording all executed steps of a project, its execution can be resumed after dynamic alteration of parameters through the VLab Portal. Fourth, batch jobs can also be monitored through the portal. Now, better and faster interaction with servers is achieved using Ajax technology. Finally, plots are now created on the VLab server using Gnuplot 4.2.2. Research supported by NSF grant ATM 0428774 (VLab). VLab is hosted by the Minnesota Supercomputing Institute.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
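A toy LP in the spirit of these models (SciPy assumed; the decision matrix and weight intervals are invented): for each alternative, find its most favourable aggregate score over all attribute-weight vectors consistent with the interval information. SciPy's linprog minimizes, so the objective is negated.

    import numpy as np
    from scipy.optimize import linprog

    # Decision matrix: 3 alternatives (rows) scored on 4 attributes (columns).
    D = np.array([[0.7, 0.5, 0.9, 0.6],
                  [0.8, 0.6, 0.4, 0.7],
                  [0.5, 0.9, 0.6, 0.8]])
    # Incomplete weight information given as intervals, plus a sum-to-one constraint.
    bounds = [(0.1, 0.4), (0.1, 0.3), (0.2, 0.5), (0.1, 0.3)]
    A_eq, b_eq = np.ones((1, 4)), [1.0]

    for i, row in enumerate(D):
        # Best achievable weighted score for alternative i under the constraints.
        res = linprog(-row, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print(f"alternative {i}: best score {-res.fun:.3f}")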
Thermal-Hydraulic Transient Analysis of a Packed Particle Bed Reactor Fuel Element
1990-06-01
... long fuel elements, arranged to form a core, were analyzed for an up-power transient from 0 MWt to approximately 18 MWt. The simple model significantly ... [table-of-contents residue: Variations in Fuel Element Geometry; 4.4 Variations in the Manner of Transient Control; 4.5 Core Representation by Multiple Fuel Elements] ... the HTGR, however, the PBR packs small fuel particles between inner and outer retention elements, designated as frits. The PBR is appropriate for a ...
2014-07-25
... composition of simple temporal structures to a speaker diarization task with the goal of segmenting conference audio in the presence of an unknown number of ... application domains including neuroimaging, diverse document selection, speaker diarization, stock modeling, and target tracking. We detail each of ... recall performance than competing methods in a task of discovering articles preferred by the user; a gold-standard speaker diarization method, as ...
Nonlinear dynamics in cardiac conduction
NASA Technical Reports Server (NTRS)
Kaplan, D. T.; Smith, J. M.; Saxberg, B. E.; Cohen, R. J.
1988-01-01
Electrical conduction in the heart shows many phenomena familiar from nonlinear dynamics. Among these phenomena are multiple basins of attraction, phase locking, and perhaps period-doubling bifurcations and chaos. We describe a simple cellular-automaton model of electrical conduction which simulates normal conduction patterns in the heart as well as a wide range of disturbances of heart rhythm. In addition, we review the application of percolation theory to the analysis of the development of complex, self-sustaining conduction patterns.
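A minimal excitable-medium automaton in the same spirit (a generic Greenberg-Hastings-style rule, not the authors' model): resting cells fire when a neighbour is excited, then pass through a refractory state before recovering.

    import numpy as np

    REST, EXCITED, REFRACTORY = 0, 1, 2
    grid = np.full((40, 40), REST)
    grid[20, 20] = EXCITED                    # single stimulus at the centre

    def step(g):
        new = g.copy()
        excited_nbr = np.zeros_like(g, dtype=bool)
        # 4-neighbourhood via np.roll (periodic boundaries for simplicity).
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            excited_nbr |= np.roll(g, shift, axis=axis) == EXCITED
        new[(g == REST) & excited_nbr] = EXCITED   # excitation spreads to rest cells
        new[g == EXCITED] = REFRACTORY             # excited cells become refractory
        new[g == REFRACTORY] = REST                # refractory cells recover
        return new

    for _ in range(15):
        grid = step(grid)                          # an expanding ring-shaped wavefront
    print("excited cells after 15 steps:", int((grid == EXCITED).sum()))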
2012-03-22
... 2003). This is particularly true at shallow depths where the shorter periods, which are primarily sensitive to upper crustal structures, are difficult ... to measure, and especially true in tectonically and geologically complex areas. On the other hand, regional gravity inversions have the greatest ... the slower deep crustal speeds into the Caspian region does not make sense geologically. These effects are driven by the simple Laplacian smoothness ...
Design Of Feedforward Controllers For Multivariable Plants
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
Controllers based on simple low-order transfer functions. Mathematical criteria derived for design of feedforward controllers for class of multiple-input/multiple-output linear plants. Represented by simple low-order transfer functions, obtained without reconstruction of states of commands and disturbances. Enables plant to track command while remaining unresponsive to disturbance in steady state. Feedback controller added independently to stabilize plant or to make control system less susceptible to variations in parameters of plant.
Tewarie, P.; Bright, M.G.; Hillebrand, A.; Robson, S.E.; Gascoyne, L.E.; Morris, P.G.; Meier, J.; Van Mieghem, P.; Brookes, M.J.
2016-01-01
Understanding the electrophysiological basis of resting state networks (RSNs) in the human brain is a critical step towards elucidating how inter-areal connectivity supports healthy brain function. In recent years, the relationship between RSNs (typically measured using haemodynamic signals) and electrophysiology has been explored using functional Magnetic Resonance Imaging (fMRI) and magnetoencephalography (MEG). Significant progress has been made, with similar spatial structure observable in both modalities. However, there is a pressing need to understand this relationship beyond simple visual similarity of RSN patterns. Here, we introduce a mathematical model to predict fMRI-based RSNs using MEG. Our unique model, based upon a multivariate Taylor series, incorporates both phase and amplitude based MEG connectivity metrics, as well as linear and non-linear interactions within and between neural oscillations measured in multiple frequency bands. We show that including non-linear interactions, multiple frequency bands and cross-frequency terms significantly improves fMRI network prediction. This shows that fMRI connectivity is not only the result of direct electrophysiological connections, but is also driven by the overlap of connectivity profiles between separate regions. Our results indicate that a complete understanding of the electrophysiological basis of RSNs goes beyond simple frequency-specific analysis, and further exploration of non-linear and cross-frequency interactions will shed new light on distributed network connectivity, and its perturbation in pathology. PMID:26827811
Pistonesi, Marcelo F; Di Nezio, María S; Centurión, María E; Lista, Adriana G; Fragoso, Wallace D; Pontes, Márcio J C; Araújo, Mário C U; Band, Beatriz S Fernández
2010-12-15
In this study, a novel, simple, and efficient spectrofluorimetric method to determine directly and simultaneously five phenolic compounds (hydroquinone, resorcinol, phenol, m-cresol and p-cresol) in air samples is presented. For this purpose, variable selection by the successive projections algorithm (SPA) is used in order to obtain simple multiple linear regression (MLR) models based on a small subset of wavelengths. For comparison, partial least squares (PLS) regression is also employed on the full spectrum. The concentrations of the calibration matrix ranged from 0.02 to 0.2 mg L(-1) for hydroquinone, from 0.05 to 0.6 mg L(-1) for resorcinol, and from 0.05 to 0.4 mg L(-1) for phenol, m-cresol and p-cresol; incidentally, such ranges are in accordance with the Argentinean environmental legislation. To verify the accuracy of the proposed method, a recovery study on real air samples from a smoking environment was carried out, with satisfactory results (94-104%). The advantage of the proposed method is that it requires only spectrofluorimetric measurements of samples and chemometric modeling for the simultaneous determination of five phenols. With it, air is simply sampled and no sample pre-treatment is needed (i.e., separation steps and derivatization reagents are avoided), which means a great saving of time. Copyright © 2010 Elsevier B.V. All rights reserved.
Statistical self-similarity of width function maxima with implications to floods
Veitzer, S.A.; Gupta, V.K.
2001-01-01
Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
NASA Astrophysics Data System (ADS)
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-01
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
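The random-walk Metropolis step proposed for drawing energy values from a multi-dimensional distribution can be sketched generically; the target density, step size, and number of modes below are placeholders rather than the DSMC code's actual vibrational distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(E):
    """Placeholder log-density over the energies of several vibrational
    modes (must be -inf outside the physical domain E >= 0)."""
    if np.any(E < 0):
        return -np.inf
    return -np.sum(E)          # stand-in for the true Boltzmann-like weight

def metropolis(n_steps, n_modes, step=0.5):
    E = np.ones(n_modes)       # current energies of all modes
    samples = []
    for _ in range(n_steps):
        prop = E + step * rng.normal(size=n_modes)   # random-walk proposal
        if np.log(rng.random()) < log_target(prop) - log_target(E):
            E = prop                                 # accept the move
        samples.append(E.copy())
    return np.array(samples)

chain = metropolis(10_000, n_modes=3)
print(chain.mean(axis=0))      # per-mode mean energy from the chain
```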
Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology
NASA Astrophysics Data System (ADS)
Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang
2018-03-01
In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
Modeling influenza-like illnesses through composite compartmental models
NASA Astrophysics Data System (ADS)
Levy, Nir; Iv, Michael; Yom-Tov, Elad
2018-03-01
Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the populations, and we demonstrate that these match independently collected epidemiological data. We further show that the basic reproductive numbers (R₀) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
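The dictionary-based partition can be sketched as follows: simulate infected-fraction curves of the SIR model over a grid of (beta, gamma) hypotheses to form an overcomplete dictionary, then express an observed ILI series as a non-negative combination of its columns. The parameter grid, weekly sampling, and use of non-negative least squares are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import nnls

def sir_infected(beta, gamma, t, i0=1e-3):
    """Infected fraction I(t) of a standard SIR model."""
    def rhs(_, y):
        s, i = y
        return [-beta * s * i, beta * s * i - gamma * i]
    sol = solve_ivp(rhs, (t[0], t[-1]), [1 - i0, i0], t_eval=t)
    return sol.y[1]

t = np.arange(260.0)                        # five seasons, weekly samples
# overcomplete dictionary: one column per (beta, gamma) hypothesis
atoms = [sir_infected(b, g, t)
         for b in np.linspace(0.2, 0.6, 8)
         for g in np.linspace(0.1, 0.3, 6)]
D = np.column_stack(atoms)

# toy observed series built from the dictionary itself
ili = D @ np.abs(np.random.default_rng(2).normal(size=D.shape[1]))
weights, _ = nnls(D, ili)                   # blind non-negative partition
print("active components:", np.sum(weights > 1e-6))
```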
Multispectral processing without spectra.
Drew, Mark S; Finlayson, Graham D
2003-07-01
It is often the case that multiplications of whole spectra, component by component, must be carried out, for example, when light reflects from or is transmitted through materials. This leads to particularly taxing calculations, especially in spectrally based ray tracing or radiosity in graphics, making a full-spectrum method prohibitively expensive. Nevertheless, using full spectra is attractive because of the many important phenomena that can be modeled only by using all the physics at hand. We apply to the task of spectral multiplication a method previously used in modeling RGB-based light propagation. We show that we can often multiply spectra without carrying out spectral multiplication. In previous work [J. Opt. Soc. Am. A 11, 1553 (1994)] we developed a method called spectral sharpening, which took camera RGBs to a special sharp basis that was designed to render illuminant change simple to model. Specifically, in the new basis, one can effectively model illuminant change by using a diagonal matrix rather than the 3 x 3 linear transform that results from a three-component finite-dimensional model [G. Healey and D. Slater, J. Opt. Soc. Am. A 11, 3003 (1994)]. We apply this idea of sharpening to the set of principal components vectors derived from a representative set of spectra that might reasonably be encountered in a given application. With respect to the sharp spectral basis, we show that spectral multiplications can be modeled as the multiplication of the basis coefficients. These new product coefficients applied to the sharp basis serve to accurately reconstruct the spectral product. Although the method is quite general, we show how to use spectral modeling by taking advantage of metameric surfaces, ones that match under one light but not another, for tasks such as volume rendering. The use of metamers allows a user to pick out or merge different volume structures in real time simply by changing the lighting.
Multispectral processing without spectra
NASA Astrophysics Data System (ADS)
Drew, Mark S.; Finlayson, Graham D.
2003-07-01
It is often the case that multiplications of whole spectra, component by component, must be carried out, for example when light reflects from or is transmitted through materials. This leads to particularly taxing calculations, especially in spectrally based ray tracing or radiosity in graphics, making a full-spectrum method prohibitively expensive. Nevertheless, using full spectra is attractive because of the many important phenomena that can be modeled only by using all the physics at hand. We apply to the task of spectral multiplication a method previously used in modeling RGB-based light propagation. We show that we can often multiply spectra without carrying out spectral multiplication. In previous work [J. Opt. Soc. Am. A 11, 1553 (1994)] we developed a method called spectral sharpening, which took camera RGBs to a special sharp basis that was designed to render illuminant change simple to model. Specifically, in the new basis, one can effectively model illuminant change by using a diagonal matrix rather than the 3 x 3 linear transform that results from a three-component finite-dimensional model [G. Healey and D. Slater, J. Opt. Soc. Am. A 11, 3003 (1994)]. We apply this idea of sharpening to the set of principal components vectors derived from a representative set of spectra that might reasonably be encountered in a given application. With respect to the sharp spectral basis, we show that spectral multiplications can be modeled as the multiplication of the basis coefficients. These new product coefficients applied to the sharp basis serve to accurately reconstruct the spectral product. Although the method is quite general, we show how to use spectral modeling by taking advantage of metameric surfaces, ones that match under one light but not another, for tasks such as volume rendering. The use of metamers allows a user to pick out or merge different volume structures in real time simply by changing the lighting. © 2003 Optical Society of America
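The core trick, representing spectra in a low-dimensional basis and modeling a full-spectrum product as a component-wise product of basis coefficients, can be sketched with a plain SVD basis. The sharpening transform itself is omitted here, so the approximation is rougher than with a properly sharpened basis; the synthetic spectra and basis size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# representative set of smooth synthetic spectra (rows), 101 wavelengths
S = np.cumsum(rng.random((50, 101)), axis=1)
S /= S.max(axis=1, keepdims=True)

# low-dimensional basis from an (uncentered) SVD of the representative set
_, _, Vt = np.linalg.svd(S, full_matrices=False)
B = Vt[:3].T                        # wavelengths x 3 (sharpening omitted)

def coeffs(spec):
    return np.linalg.pinv(B) @ spec  # least-squares basis coefficients

light, refl = S[0], S[1]
exact = light * refl                          # true per-wavelength product
approx = B @ (coeffs(light) * coeffs(refl))   # product of basis coefficients
err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative reconstruction error: {err:.2%}")
```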
NASA Astrophysics Data System (ADS)
Biteau, J.; Giebels, B.
2012-12-01
Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability from multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
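The kinematic argument can be reproduced numerically: draw isotropic orientations, compute the Doppler factor delta = 1/[Gamma(1 - beta cos theta)], boost each region's flux, and inspect the distribution of the summed flux. The bulk Lorentz factor and the boosting index of 4 used below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(4)
gamma = 10.0                                   # assumed bulk Lorentz factor
beta = np.sqrt(1 - 1 / gamma**2)

def doppler_flux(shape):
    """Flux boost delta**4 for randomly oriented emitting regions."""
    cos_theta = rng.uniform(-1.0, 1.0, shape)  # isotropic orientations
    delta = 1.0 / (gamma * (1.0 - beta * cos_theta))
    return delta**4

single = doppler_flux(200_000)                  # one zone: Pareto-like tail
total = doppler_flux((2_000, 100)).sum(axis=1)  # 100 zones summed per sample

# skewness of log(total) near 0 would indicate an approximate log-normal
log_total = np.log(total)
z = (log_total - log_total.mean()) / log_total.std()
print("skewness of log(total flux):", np.mean(z**3))
```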
NASA Astrophysics Data System (ADS)
Zelený, J.; Pérez-Fontán, F.; Pechac, P.; Mariño-Espiñeira, P.
2017-05-01
In civil surveillance applications, unmanned aerial vehicles (UAV) are being increasingly used in floods, fires, and law enforcement scenarios. In order to transfer large amounts of information from UAV-mounted cameras, relays, or sensors, large bandwidths are needed in comparison to those required for remotely commanding the UAV. This demands the use of higher-frequency bands, in all probability in the vicinity of 2 or 5 GHz. Novel hardware developments need propagation channel models for the ample range of operational scenarios envisaged, including multiple-input, multiple-output (MIMO) deployments. These configurations may enable a more robust transmission by increasing either the carrier-to-noise ratio statistics or the achievable capacity. In this paper, a 2 × 2 MIMO propagation channel model for an open-field environment capable of synthesizing a narrowband time series at 2 GHz is described. Maximal ratio combining diversity and capacity improvements are also evaluated through synthetic series and compared with measurement results. A simple flat, open scenario was evaluated based on which other, more complex environments can be modeled.
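The diversity gain evaluated here can be illustrated with the textbook maximal-ratio-combining relation for a narrowband channel, where the combined SNR is the sum of the per-branch SNRs. Treating the four paths of a 2 x 2 link as flat Rayleigh diversity branches is a simplification relative to the measured open-field channel model.

```python
import numpy as np

rng = np.random.default_rng(5)
n, snr0 = 100_000, 10.0              # time samples, mean per-branch linear SNR

# 2x2 MIMO -> four flat Rayleigh-fading branches per time sample
h = (rng.normal(size=(n, 4)) + 1j * rng.normal(size=(n, 4))) / np.sqrt(2)
branch_snr = snr0 * np.abs(h)**2

single = branch_snr[:, 0]            # no diversity
mrc = branch_snr.sum(axis=1)         # maximal ratio combining over branches

# the low-percentile (outage) SNR improves markedly under MRC
print("1% outage, single branch:", np.percentile(single, 1))
print("1% outage, MRC combined :", np.percentile(mrc, 1))
```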
Initial Kernel Timing Using a Simple PIM Performance Model
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David
2005-01-01
This presentation will describe some initial results of paper-and-pencil studies of five application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: linked-list traversal, sum of leaf nodes on a tree, bitonic sort, vector sum, and Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point-of-view of both expressiveness and performance.
Modeling and control of flexible space platforms with articulated payloads
NASA Technical Reports Server (NTRS)
Graves, Philip C.; Joshi, Suresh M.
1989-01-01
The first steps in developing a methodology for spacecraft control-structure interaction (CSI) optimization are identification and classification of anticipated missions, and the development of tractable mathematical models in each mission class. A mathematical model of a generic large flexible space platform (LFSP) with multiple independently pointed rigid payloads is considered. The objective is not to develop a general purpose numerical simulation, but rather to develop an analytically tractable mathematical model of such composite systems. The equations of motion for a single payload case are derived, and are linearized about zero steady-state. The resulting model is then extended to include multiple rigid payloads, yielding the desired analytical form. The mathematical models developed clearly show the internal inertial/elastic couplings, and are therefore suitable for analytical and numerical studies. A simple decentralized control law is proposed for fine pointing the payloads and LFSP attitude control, and simulation results are presented for an example problem. The decentralized controller is shown to be adequate for the example problem chosen, but does not, in general, guarantee stability. A centralized dissipative controller is then proposed, requiring a symmetric form of the composite system equations. Such a controller guarantees robust closed loop stability despite unmodeled elastic dynamics and parameter uncertainties.
Foundations of the Bandera Abstraction Tools
NASA Technical Reports Server (NTRS)
Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby
2003-01-01
Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.
PyFolding: Open-Source Graphing, Simulation, and Analysis of the Biophysical Properties of Proteins.
Lowe, Alan R; Perez-Riba, Albert; Itzhaki, Laura S; Main, Ewan R G
2018-02-06
For many years, curve-fitting software has been heavily utilized to fit simple models to various types of biophysical data. Although such software packages are easy to use for simple functions, they are often expensive and present substantial impediments to applying more complex models or for the analysis of large data sets. One field that is reliant on such data analysis is the thermodynamics and kinetics of protein folding. Over the past decade, increasingly sophisticated analytical models have been generated, but without simple tools to enable routine analysis. Consequently, users have needed to generate their own tools or otherwise find willing collaborators. Here we present PyFolding, a free, open-source, and extensible Python framework for graphing, analysis, and simulation of the biophysical properties of proteins. To demonstrate the utility of PyFolding, we have used it to analyze and model experimental protein folding and thermodynamic data. Examples include: 1) multiphase kinetic folding fitted to linked equations, 2) global fitting of multiple data sets, and 3) analysis of repeat protein thermodynamics with Ising model variants. Moreover, we demonstrate how PyFolding is easily extensible to novel functionality beyond applications in protein folding via the addition of new models. Example scripts to perform these and other operations are supplied with the software, and we encourage users to contribute notebooks and models to create a community resource. Finally, we show that PyFolding can be used in conjunction with Jupyter notebooks as an easy way to share methods and analysis for publication and among research teams. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
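As a flavor of the kind of analysis PyFolding automates, here is a generic two-state equilibrium unfolding fit with scipy (not PyFolding's actual API), using the standard relation in which the fraction unfolded is exp[m(D - D50)/RT] / (1 + exp[m(D - D50)/RT]); the synthetic data and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

RT = 0.593  # kcal/mol at 25 C

def frac_unfolded(D, m, d50):
    """Two-state equilibrium unfolding vs denaturant concentration D."""
    k = np.exp(m * (D - d50) / RT)
    return k / (1.0 + k)

# hypothetical denaturation curve with measurement noise
rng = np.random.default_rng(6)
D = np.linspace(0, 8, 30)
y = frac_unfolded(D, m=1.8, d50=4.2) + 0.02 * rng.normal(size=D.size)

(m_fit, d50_fit), _ = curve_fit(frac_unfolded, D, y, p0=(1.0, 4.0))
print(f"m = {m_fit:.2f} kcal/mol/M, D50 = {d50_fit:.2f} M, "
      f"dG(H2O) = {m_fit * d50_fit:.2f} kcal/mol")
```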
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea
2000-01-01
The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers with simple instruction from the user interacting with the client.
Memory Applications Using Resonant Tunneling Diodes
NASA Astrophysics Data System (ADS)
Shieh, Ming-Huei
Resonant tunneling diodes (RTDs) producing unique folding current-voltage (I-V) characteristics have attracted considerable research attention due to their promising application in signal processing and multi-valued logic. The negative differential resistance of RTDs renders the operating points self-latching and stable. We have proposed a multiple-dimensional multiple-state RTD-based static random-access memory (SRAM) cell in which the number of stable states can significantly be increased to (N + 1)^m or more for m N-peak RTDs connected in series. The proposed cells take advantage of the hysteresis and folding I-V characteristics of RTDs. Several cell designs are presented and evaluated. A two-dimensional nine-state memory cell has been implemented and demonstrated by a breadboard circuit using two 2-peak RTDs. The hysteresis phenomenon in a series of RTDs is also further analyzed. The switch model provided in SPICE 3 can be utilized to simulate the hysteretic I-V characteristics of RTDs. A simple macro-circuit is described to model the hysteretic I-V characteristic of RTDs for circuit simulation. A new scheme for storing word-wide multiple-bit information very efficiently in a single memory cell using RTDs is proposed. An efficient and inexpensive periphery circuit to read from and write into the cell is also described. Simulation results on the design of a 3-bit memory cell scheme using one-peak RTDs are also presented. Finally, a binary transistor-less memory cell which is composed of only a pair of RTDs and an ordinary rectifier diode is presented and investigated. A simple means for reading and writing information from or into the memory cell is also discussed.
Early Prediction of Reading Comprehension within the Simple View Framework
ERIC Educational Resources Information Center
Catts, Hugh W.; Herrera, Sarah; Nielsen, Diane Corcoran; Bridges, Mindy Sittner
2015-01-01
The simple view of reading proposes that reading comprehension is the product of word reading and language comprehension. In this study, we used the simple view framework to examine the early prediction of reading comprehension abilities. Using multiple measures for all constructs, we assessed word reading precursors (i.e., letter knowledge,…
Laser Speckle Photography: Some Simple Experiments for the Undergraduate Laboratory.
ERIC Educational Resources Information Center
Bates, B.; And Others
1986-01-01
Describes simple speckle photography experiments which are easy to set up and require only low cost standard laboratory equipment. Included are procedures for taking single, double, and multiple exposures. (JN)
Development and application of air quality models at the US ...
Overview of the development and application of air quality models at the U.S. EPA, particularly focused on the development and application of the Community Multiscale Air Quality (CMAQ) model developed within the Computational Exposure Division (CED) of the National Exposure Research Laboratory (NERL). This presentation will provide a simple overview of air quality model development and application geared toward a non-technical student audience. The NERL Computational Exposure Division develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
The big challenges in modeling human and environmental well-being.
Tuljapurkar, Shripad
2016-01-01
This article is a selective review of quantitative research, historical and prospective, that is needed to inform sustainable development policy. I start with a simple framework to highlight how demography and productivity shape human well-being. I use that to discuss three sets of issues and corresponding challenges to modeling: first, population prehistory and early human development and their implications for the future; second, the multiple distinct dimensions of human and environmental well-being and the meaning of sustainability; and, third, inequality as a phenomenon triggered by development and models to examine changing inequality and its consequences. I conclude with a few words about other important factors: political, institutional, and cultural.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heartbeat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
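For binned counts, the generalized likelihood ratio statistic for a given template reduces to comparing the maximized Poisson log-likelihood under a piecewise-constant rate template against a single constant rate. A minimal sketch, with a hypothetical two-segment template and counts:

```python
import numpy as np
from scipy.stats import chi2

def poisson_loglik(counts, rate):
    rate = np.maximum(rate, 1e-12)
    return np.sum(counts * np.log(rate) - rate)   # log(n!) terms cancel

def glr_statistic(counts, template):
    """GLR of a piecewise-constant rate template vs a constant rate.
    `template` assigns each bin to a segment, e.g. [0,0,0,1,1,1]."""
    null = poisson_loglik(counts, counts.mean())
    alt = sum(poisson_loglik(counts[template == s],
                             counts[template == s].mean())
              for s in np.unique(template))
    return 2.0 * (alt - null)

counts = np.array([4, 5, 3, 6, 11, 12, 10, 13])   # hypothetical beat counts
template = np.array([0, 0, 0, 0, 1, 1, 1, 1])
stat = glr_statistic(counts, template)
# asymptotic p-value: one extra free rate parameter under the template
print(stat, chi2.sf(stat, df=1))
```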
Wagner, Peter J
2012-02-23
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.
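The model comparison at the heart of the study (single-rate vs gamma vs lognormal rate distributions) can be sketched with maximum-likelihood fits and AIC; the synthetic per-character rates below stand in for the compatibility-based inverse modelling the author actually uses.

```python
import numpy as np
from scipy import stats

# hypothetical per-character rates of change
rates = stats.lognorm.rvs(s=0.8, scale=0.1, size=200, random_state=7)

def aic(logpdf_sum, k):
    return 2 * k - 2 * logpdf_sum

# gamma and lognormal fitted with their location pinned at zero
ga = stats.gamma.fit(rates, floc=0)
ln = stats.lognorm.fit(rates, floc=0)
ex = stats.expon.fit(rates, floc=0)           # single-rate-style null

print("gamma AIC      :", aic(stats.gamma.logpdf(rates, *ga).sum(), 2))
print("lognormal AIC  :", aic(stats.lognorm.logpdf(rates, *ln).sum(), 2))
print("exponential AIC:", aic(stats.expon.logpdf(rates, *ex).sum(), 1))
```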
An object-based approach for detecting small brain lesions: application to Virchow-Robin spaces.
Descombes, Xavier; Kruggel, Frithjof; Wollny, Gert; Gertz, Hermann Josef
2004-02-01
This paper is concerned with the detection of multiple small brain lesions from magnetic resonance imaging (MRI) data. A model based on the marked point process framework is designed to detect Virchow-Robin spaces (VRSs). These tubular shaped spaces are due to retraction of the brain parenchyma from its supplying arteries. VRS are described by simple geometrical objects that are introduced as small tubular structures. Their radiometric properties are embedded in a data term. A prior model includes interactions describing the clustering property of VRS. A Reversible Jump Markov Chain Monte Carlo algorithm (RJMCMC) optimizes the proposed model, obtained by multiplying the prior and the data model. Example results are shown on T1-weighted MRI datasets of elderly subjects.
Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement
NASA Astrophysics Data System (ADS)
Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano
2018-03-01
The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average tridimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that a RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible simple extension to more complex and extended systems including metal-organic complexes, biomolecules, or nanocrystalline systems.
Tenax extraction as a simple approach to improve environmental risk assessments.
Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J
2015-07-01
It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.
Mak, D O; Webb, W W
1997-03-01
A Green's function approach is developed from first principles to evaluate the power spectral density of conductance fluctuations caused by ion concentration fluctuations via diffusion in an electrolyte system. This is applied to simple geometric models of transmembrane ion channels to obtain an estimate of the magnitude of ion concentration fluctuation noise in the channel current. The pure polypeptide alamethicin forms stable ion channels with multiple conductance states in artificial phospholipid bilayers isolated onto tips of micropipettes with gigaohm seals. In the single-channel current recorded by voltage-clamp techniques, excess noise was found after the background instrumental noise and the intrinsic Johnson and shot noises were removed. The noise due to ion concentration fluctuations via diffusion was isolated by the dependence of the excess current noise on buffer ion concentration. The magnitude of the concentration fluctuation noise derived from experimental data lies within limits estimated using our simple geometric channel models. Variation of the noise magnitude for alamethicin channels in various conductance states agrees with theoretical prediction.
Herman, Agnieszka
2010-06-01
Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as floe-area distribution in agreement with observations.
Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems
NASA Astrophysics Data System (ADS)
Herman, Agnieszka
2010-06-01
Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as floe-area distribution in agreement with observations.
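Fitting the proposed density P(x) = x^(-1-α) exp[(1-α)/x] to floe-size data can be done by normalizing it numerically over the observed range and maximizing the likelihood in α; the synthetic data and observation window below are assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def neg_loglik(alpha, x, lo, hi):
    pdf = lambda t: t**(-1 - alpha) * np.exp((1 - alpha) / t)
    Z, _ = quad(pdf, lo, hi)                  # numerical normalization
    return -(np.sum(np.log(pdf(x))) - x.size * np.log(Z))

# hypothetical floe sizes (normalized diameters) observed within [0.1, 50]
rng = np.random.default_rng(8)
x = np.exp(rng.normal(0.5, 1.0, 300))
x = x[(x > 0.1) & (x < 50)]

res = minimize_scalar(neg_loglik, bounds=(1.01, 5.0),
                      args=(x, 0.1, 50.0), method="bounded")
print("fitted alpha:", res.x)
```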
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases like AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects on spatial contrast sensitivity and on a task of stimulus detection and aiming were evaluated. The modeled occluding scotomas, of different size, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of: size (in visual angle), contrast and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as R.T. measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search situation than in the simple one. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search situation than in the complex one. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired, as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes, guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
Versatile microrobotics using simple modular subunits
NASA Astrophysics Data System (ADS)
Cheang, U. Kei; Meshkati, Farshad; Kim, Hoyeon; Lee, Kyoungwoo; Fu, Henry Chien; Kim, Min Jun
2016-07-01
The realization of reconfigurable modular microrobots could aid drug delivery and microsurgery by allowing a single system to navigate diverse environments and perform multiple tasks. So far, microrobotic systems are limited by insufficient versatility; for instance, helical shapes commonly used for magnetic swimmers cannot effectively assemble and disassemble into different size and shapes. Here by using microswimmers with simple geometries constructed of spherical particles, we show how magnetohydrodynamics can be used to assemble and disassemble modular microrobots with different physical characteristics. We develop a mechanistic physical model that we use to improve assembly strategies. Furthermore, we experimentally demonstrate the feasibility of dynamically changing the physical properties of microswimmers through assembly and disassembly in a controlled fluidic environment. Finally, we show that different configurations have different swimming properties by examining swimming speed dependence on configuration size.
Versatile microrobotics using simple modular subunits
Cheang, U Kei; Meshkati, Farshad; Kim, Hoyeon; Lee, Kyoungwoo; Fu, Henry Chien; Kim, Min Jun
2016-01-01
The realization of reconfigurable modular microrobots could aid drug delivery and microsurgery by allowing a single system to navigate diverse environments and perform multiple tasks. So far, microrobotic systems are limited by insufficient versatility; for instance, helical shapes commonly used for magnetic swimmers cannot effectively assemble and disassemble into different size and shapes. Here by using microswimmers with simple geometries constructed of spherical particles, we show how magnetohydrodynamics can be used to assemble and disassemble modular microrobots with different physical characteristics. We develop a mechanistic physical model that we use to improve assembly strategies. Furthermore, we experimentally demonstrate the feasibility of dynamically changing the physical properties of microswimmers through assembly and disassembly in a controlled fluidic environment. Finally, we show that different configurations have different swimming properties by examining swimming speed dependence on configuration size. PMID:27464852
An on-board near-optimal climb-dash energy management
NASA Technical Reports Server (NTRS)
Weston, A. R.; Cliff, E. M.; Kelley, H. J.
1982-01-01
On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Feedback coefficients are found as functions of the remaining energy to go (dash energy less current energy) along the nominal path.
Building a Database for a Quantitative Model
NASA Technical Reports Server (NTRS)
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Not so obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for them. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian Updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
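The Bayesian updating mentioned for flight and testing experience has a standard conjugate form for failure rates: a gamma prior on the rate combined with Poisson-distributed failure counts. A minimal sketch with hypothetical numbers (the prior and the experience data are invented):

```python
# gamma-Poisson conjugate update of a Basic Event failure rate
# prior: rate ~ Gamma(a0, b0), i.e. a0 failures in b0 hours of generic data
a0, b0 = 0.5, 1.0e6          # hypothetical generic prior

failures, hours = 2, 3.0e5   # hypothetical flight/test experience
a1, b1 = a0 + failures, b0 + hours   # posterior parameters

print(f"prior mean rate    : {a0 / b0:.3e} per hour")
print(f"posterior mean rate: {a1 / b1:.3e} per hour")
```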
Towards the construction of high-quality mutagenesis libraries.
Li, Heng; Li, Jing; Jin, Ruinan; Chen, Wei; Liang, Chaoning; Wu, Jieyuan; Jin, Jian-Ming; Tang, Shuang-Yan
2018-07-01
To improve the quality of mutagenesis libraries used in directed evolution strategies. In the process of library transformation, transformants which have been shown to take up more than one plasmid might constitute more than 20% of the constructed library, thereby extensively impairing the quality of the library. We propose a practical transformation method to prevent the occurrence of multiple-plasmid transformants while maintaining high transformation efficiency. A visual library model containing plasmids expressing different fluorescent proteins was used. Multiple-plasmid transformants can be reduced by optimizing the amount of plasmid DNA used for transformation, based on the positive correlation between the occurrence frequency of multiple-plasmid transformants and the logarithmic ratio of plasmid molecules to competent cells. This method provides a simple solution for a seemingly common but often neglected problem, and should be valuable for improving the quality of mutagenesis libraries to enhance the efficiency of directed evolution strategies.
Automated biosurveillance data from England and Wales, 1991-2011.
Enki, Doyo G; Noufaily, Angela; Garthwaite, Paul H; Andrews, Nick J; Charlett, André; Lane, Chris; Farrington, C Paddy
2013-01-01
Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991-2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity.
Automated Biosurveillance Data from England and Wales, 1991–2011
Enki, Doyo G.; Noufaily, Angela; Garthwaite, Paul H.; Andrews, Nick J.; Charlett, André; Lane, Chris
2013-01-01
Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991–2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity. PMID:23260848
Epithelial Integrity Is Maintained by a Matriptase-Dependent Proteolytic Pathway
List, Karin; Kosa, Peter; Szabo, Roman; Bey, Alexandra L.; Wang, Chao Becky; Molinolo, Alfredo; Bugge, Thomas H.
2009-01-01
A pericellular proteolytic pathway initiated by the transmembrane serine protease matriptase plays a critical role in the terminal differentiation of epidermal tissues. Matriptase is constitutively expressed in multiple other epithelia, suggesting a putative role of this membrane serine protease in general epithelial homeostasis. Here we generated mice with conditional deletion of the St14 gene, encoding matriptase, and show that matriptase indeed is essential for the maintenance of multiple types of epithelia in the mouse. Thus, embryonic or postnatal ablation of St14 in epithelial tissues of diverse origin and function caused severe organ dysfunction, which was often associated with increased permeability, loss of tight junction function, mislocation of tight junction-associated proteins, and generalized epithelial demise. The study reveals that the homeostasis of multiple simple and stratified epithelia is matriptase-dependent, and provides an important animal model for the exploration of this membrane serine protease in a range of physiological and pathological processes. PMID:19717635
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received little attention in previous investigations. First, estimating the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, exploring the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, employing several numerical solvers for the integration of the governing ordinary differential equations to study the effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
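The solver comparison can be illustrated on a single nonlinear storage equation integrated with an explicit and an implicit scipy method; the reservoir equation and its parameters are placeholders for the catchment model's actual process equations.

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

def reservoir(t, S, k=0.05, a=1.5, inflow=2.0):
    """Nonlinear storage: dS/dt = inflow - k * S**a (placeholder process)."""
    return inflow - k * S**a

t_span, S0 = (0.0, 500.0), [10.0]
for method in ("RK45", "BDF"):              # explicit vs implicit scheme
    t0 = time.perf_counter()
    sol = solve_ivp(reservoir, t_span, S0, method=method, rtol=1e-6)
    dt = time.perf_counter() - t0
    print(f"{method}: S(500) = {sol.y[0, -1]:.4f}, "
          f"{sol.t.size} steps, {dt * 1e3:.1f} ms")
```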
Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.
2015-01-01
Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second one is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
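A bare-bones version of the Joint ICA idea, concatenating the features of two modalities per subject and decomposing them jointly so that each component spans both modalities, might look like the following; the synthetic data and the use of scikit-learn's FastICA are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
n_subj, f1, f2 = 60, 300, 200            # subjects, features per modality

# shared subject-level loadings of 4 latent sources drive both modalities
S = rng.laplace(size=(n_subj, 4))
X1 = S @ rng.normal(size=(4, f1))        # e.g. fMRI-derived features
X2 = S @ rng.normal(size=(4, f2))        # e.g. EEG-derived features

X = np.hstack([X1, X2])                  # joint ICA: fuse by concatenation
ica = FastICA(n_components=4, random_state=0)
loadings = ica.fit_transform(X)          # subject loadings, shared across modalities
components = ica.components_             # each component spans both modalities
print(loadings.shape, components.shape)  # (60, 4) (4, 500)
```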
Park, Byung-Jung; Lord, Dominique; Wu, Lingtao
2016-10-28
This study aimed to investigate the relative performance of two models (the negative binomial (NB) model and the two-component finite mixture of negative binomial models (FMNB-2)) in terms of developing crash modification factors (CMFs). Crash data on rural multilane divided highways in California and Texas were modeled with the two models, and crash modification functions (CMFunctions) were derived. The resultant CMFunction estimated from the FMNB-2 model showed several good properties over that from the NB model. First, the safety effect of a covariate was better reflected by the CMFunction developed using the FMNB-2 model, since the model takes into account the differential responsiveness of crash frequency to the covariate. Second, the CMFunction derived from the FMNB-2 model is able to capture nonlinear relationships between covariate and safety. Finally, following the same concept as those for NB models, the combined CMFs of multiple treatments were estimated using the FMNB-2 model. The results indicated that they are not simply the product of the single-treatment CMFs (i.e., their safety effects are not independent under FMNB-2 models). Adjustment Factors (AFs) were then developed. It is revealed that the current Highway Safety Manual method could over- or under-estimate the combined CMFs under particular combinations of covariates. Safety analysts are encouraged to consider using the FMNB-2 models for developing CMFs and AFs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Object detection in natural backgrounds predicted by discrimination performance and models
NASA Technical Reports Server (NTRS)
Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d′ values with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
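The Minkowski summation shared by all three models is simple to state in code: raise the absolute per-pixel differences to an exponent beta, sum, and take the beta-th root, with beta approaching infinity reducing to the maximum. In this sketch the contrast gain factor is modeled as a single multiplicative scalar, which is our reading rather than the paper's specification.

```python
import numpy as np

def minkowski_distance(a, b, beta, gain=1.0):
    """Generalized vector magnitude of the difference image a - b."""
    d = gain * np.abs(a - b)
    if np.isinf(beta):
        return d.max()                      # beta -> infinity: max rule
    return (d**beta).sum() ** (1.0 / beta)

rng = np.random.default_rng(10)
background = rng.random((64, 64))           # stand-in natural background
target = background.copy()
target[30:34, 30:34] += 0.2                 # embedded object

for beta in (2, 4, np.inf):                 # the paper's three exponents
    print(beta, minkowski_distance(background, target, beta))
```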
MAI statistics estimation and analysis in a DS-CDMA system
NASA Astrophysics Data System (ADS)
Alami Hassani, A.; Zouak, M.; Mrabti, M.; Abdi, F.
2018-05-01
A primary limitation of Direct Sequence Code Division Multiple Access DS-CDMA link performance and system capacity is multiple access interference (MAI). To examine the performance of CDMA systems in the presence of MAI, i.e., in a multiuser environment, several works assumed that the interference can be approximated by a Gaussian random variable. In this paper, we first develop a new and simple approach to characterize the MAI in a multiuser system. In addition to statistically quantifying the MAI power, the paper also proposes a statistical model for both variance and mean of the MAI for synchronous and asynchronous CDMA transmission. We show that the MAI probability density function (PDF) is Gaussian for the equal-received-energy case and validate it by computer simulations.
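The Gaussian approximation for the MAI can be probed directly by simulation: superpose K - 1 interfering users with random spreading chips and data bits, correlate against the desired user's code, and check the normality of the result. The spreading gain, user count, synchronous BPSK signaling, and kurtosis check are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(11)
N, K, trials = 31, 10, 20_000          # spreading gain, users, symbols

desired_code = rng.choice([-1, 1], size=N)
codes = rng.choice([-1, 1], size=(trials, K - 1, N))   # interferers' chips
bits = rng.choice([-1, 1], size=(trials, K - 1, 1))    # interferers' data
interference = (bits * codes).sum(axis=1)              # superposed users
mai = interference @ desired_code / N                  # correlator output

# excess kurtosis near 0 supports the Gaussian approximation of the MAI
print("mean:", mai.mean(), "var:", mai.var(),
      "excess kurtosis:", kurtosis(mai))
```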
A Meinardus Theorem with Multiple Singularities
NASA Astrophysics Data System (ADS)
Granovsky, Boris L.; Stark, Dudley
2012-09-01
Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.
Muthu, Pravin; Lutz, Stefan
2016-04-05
Fast, simple and cost-effective methods for detecting and quantifying pharmaceutical agents in patients are highly sought after to replace equipment and labor-intensive analytical procedures. The development of new diagnostic technology including portable detection devices also enables point-of-care by non-specialists in resource-limited environments. We have focused on the detection and dose monitoring of nucleoside analogues used in viral and cancer therapies. Using deoxyribonucleoside kinases (dNKs) as biosensors, our chemometric model compares observed time-resolved kinetics of unknown analytes to known substrate interactions across multiple enzymes. The resulting dataset can simultaneously identify and quantify multiple nucleosides and nucleoside analogues in complex sample mixtures. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Estimating radiofrequency power deposition in body NMR imaging.
Bottomley, P A; Redington, R W; Edelstein, W A; Schenck, J F
1985-08-01
Simple theoretical estimates of the average, maximum, and spatial variation of the radiofrequency power deposition (specific absorption rate) during hydrogen nuclear magnetic resonance imaging are deduced for homogeneous spheres and for cylinders of biological tissue with a uniformly penetrating linear rf field directed axially and transverse to the cylindrical axis. These are all simple scalar multiples of the expression for the cylinder in an axial field published earlier [Med. Phys. 8, 510 (1981)]. Exact solutions for the power deposition in the cylinder with axial [Phys. Med. Biol. 23, 630 (1978)] and transversely directed rf field are also presented, and the spatial variation of power deposition in head and body models is examined. In the exact models, the specific absorption rates decrease rapidly and monotonically with decreasing radius despite local increases in rf field amplitude. Conversion factors are provided for calculating the power deposited by Gaussian and sinc-modulated rf pulses used for slice selection in NMR imaging, relative to rectangular profiled pulses. Theoretical estimates are compared with direct measurements of the total power deposited in the bodies of nine adult males by a 63-MHz body-imaging system with transversely directed field, taking account of cable and NMR coil losses. The results for the average power deposition agree within about 20% for the exact model of the cylinder with axial field, when applied to the exposed torso volume enclosed by the rf coil. The average values predicted by the simple spherical and cylindrical models with axial fields, the exact cylindrical model with transverse field, and the simple truncated cylinder model with transverse field were about two to three times that measured, while the simple model consisting of an infinitely long cylinder with transverse field gave results about six times that measured. The surface power deposition measured by observing the incremental power as a function of external torso radius was comparable to the average value. This is consistent with the presence of a variable thickness peripheral adipose layer which does not substantially increase surface power deposition with increasing torso radius. The absence of highly localized intensity artifacts in 63-MHz body images does not suggest anomalously intense power deposition at localized internal sites, although peak power is difficult to measure.
1990-09-01
without the help from the DSXR staff. William Lyons, Charles Ramsey, and Martin Meeks went above and beyond to help complete this research. Special...develop a valid forecasting model that is significantly more accurate than the one presently used by DSXR and suggested the development and testing of a...method, Strom tested DSXR's iterative linear regression forecasting technique by examining P1 in the simple regression equation to determine whether
Fractional noise destroys or induces a stochastic bifurcation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qigui, E-mail: qgyang@scut.edu.cn; Zeng, Caibin, E-mail: zeng.cb@mail.scut.edu.cn; School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640
2013-12-15
Little seems to be known about stochastic bifurcation phenomena in non-Markovian systems. Our intention in this paper is to understand such complex dynamics through a simple system, namely, the Black-Scholes model driven by a mixed fractional Brownian motion. The most interesting finding is that the multiplicative fractional noise not only destroys but also induces a stochastic bifurcation under suitable conditions. This opens a possible way to explore the theory of stochastic bifurcation in the non-Markovian framework.
Sowande, O S; Oyewale, B F; Iyasere, O S
2010-06-01
The relationships between live weight and eight body measurements of West African Dwarf (WAD) goats were studied using 211 animals under farm conditions. The animals were categorized by age and sex. Data obtained on height at withers (HW), heart girth (HG), body length (BL), head length (HL), and length of hindquarter (LHQ) were fitted to simple linear, allometric, and multiple-regression models to predict live weight from the body measurements according to age group and sex. Results showed that live weight, HG, BL, LHQ, HL, and HW increased with the age of the animals. In the multiple-regression model, HG and HL best fit the model for goat kids; HG, HW, and HL for goats aged 13-24 months; while HG, LHQ, HW, and HL best fit the model for goats aged 25-36 months. Coefficients of determination (R(2)) for the linear and allometric models increased with age for all body measurements, with HG being the most satisfactory single measurement for predicting the live weight of WAD goats. Sex had a significant influence on the models, with R(2) values consistently higher in females except for the LHQ and HW models.
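To make the model comparison concrete, here is a minimal sketch of how the simple linear and allometric fits can be reproduced with ordinary least squares; the heart-girth and weight numbers are invented placeholders, not the paper's data.

```python
import numpy as np

# Made-up example values for heart girth HG (cm) and live weight (kg);
# the paper's WAD goat data are not reproduced here.
hg = np.array([45.0, 52.0, 58.0, 61.0, 66.0, 70.0])
wt = np.array([8.5, 11.2, 14.8, 16.9, 20.3, 23.1])

# Simple linear model: W = a + b*HG.
b_lin, a_lin = np.polyfit(hg, wt, 1)

# Allometric model: W = a * HG**b, fitted by OLS on the log-log scale.
b_allo, log_a = np.polyfit(np.log(hg), np.log(wt), 1)
a_allo = np.exp(log_a)

for name, pred in [("linear", a_lin + b_lin * hg),
                   ("allometric", a_allo * hg**b_allo)]:
    r2 = 1 - np.sum((wt - pred) ** 2) / np.sum((wt - wt.mean()) ** 2)
    print(f"{name:10s} R^2 = {r2:.3f}")
```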
Improving Predictions of Multiple Binary Models in ILP
2014-01-01
Despite the success of ILP systems in learning first-order rules from small numbers of examples and complexly structured data in various domains, they struggle with multiclass problems. In most cases they reduce a multiclass problem to multiple black-box binary problems following the one-versus-one or one-versus-rest binarisation techniques and learn a theory for each one. When evaluating the learned theories of multiclass problems, particularly in the one-versus-rest paradigm, there is a bias caused by the default rule toward the negative classes, leading to unrealistically high performance, besides the lack of prediction integrity between the theories. Here we discuss the problem of using the one-versus-rest binarisation technique when evaluating multiclass data and propose several methods to remedy this problem. We also illustrate the methods and highlight their link to binary trees and Formal Concept Analysis (FCA). Our methods allow learning of a simple, consistent, and reliable multiclass theory by combining the rules of the multiple one-versus-rest theories into one rule list or rule set theory. Empirical evaluation over a number of data sets shows that our proposed methods produce coherent and accurate rule models from the rules learned by the ILP system Aleph. PMID:24696657
Competitive advantage for multiple-memory strategies in an artificial market
NASA Astrophysics Data System (ADS)
Mitman, Kurt E.; Choe, Sehyo C.; Johnson, Neil F.
2005-05-01
We consider a simple binary market model containing N competitive agents. The novel feature of our model is that it incorporates the tendency shown by traders to look for patterns in past price movements over multiple time scales, i.e. multiple memory-lengths. In the regime where these memory-lengths are all small, the average winnings per agent exceed those obtained for either (1) a pure population where all agents have equal memory-length, or (2) a mixed population comprising sub-populations of equal-memory agents with each sub-population having a different memory-length. Agents who consistently play strategies of a given memory-length are found to win more on average -- switching between strategies with different memory-lengths incurs an effective penalty, while switching between strategies of equal memory does not. Agents employing short-memory strategies can outperform agents using long-memory strategies, even in the regime where an equal-memory system would have favored the use of long-memory strategies. Using the many-body 'Crowd-Anticrowd' theory, we obtain analytic expressions which are in good agreement with the observed numerical results. In the context of financial markets, our results suggest that multiple-memory agents have a better chance of identifying price patterns of unknown length and hence will typically have higher winnings.
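For readers who want to experiment, the following is a minimal single-memory minority-game simulation in the spirit of this model (the paper's multiple-memory variant would carry several such strategy tables per agent); the sizes and scoring rule are generic textbook choices, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s, T = 101, 3, 2, 2000           # agents, memory, strategies per agent, steps

# Each strategy maps one of the 2**m possible m-step histories to {-1, +1}.
strategies = rng.choice([-1, 1], size=(N, s, 2**m))
scores = np.zeros((N, s))              # virtual points of every strategy
history = int(rng.integers(0, 2**m))   # last m outcomes, bit-encoded
wins = np.zeros(N)

for _ in range(T):
    best = scores.argmax(axis=1)                       # each agent's best strategy
    actions = strategies[np.arange(N), best, history]
    minority = -np.sign(actions.sum())                 # minority side wins (N odd)
    wins += actions == minority
    scores += strategies[:, :, history] == minority   # score all strategies virtually
    history = ((history << 1) | int(minority == 1)) % 2**m

print(f"mean winnings per agent per step: {wins.mean() / T:.3f}")
```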
Predicting acute pain after cesarean delivery using three simple questions.
Pan, Peter H; Tonidandel, Ashley M; Aschenbrenner, Carol A; Houle, Timothy T; Harris, Lynne C; Eisenach, James C
2013-05-01
Interindividual variability in postoperative pain presents a clinical challenge. Preoperative quantitative sensory testing is useful but time consuming in predicting postoperative pain intensity. The current study was conducted to develop and validate a predictive model of acute postcesarean pain using a simple three-item preoperative questionnaire. A total of 200 women scheduled for elective cesarean delivery under subarachnoid anesthesia were enrolled (192 subjects analyzed). Patients were asked to rate the intensity of loudness of audio tones, their level of anxiety and anticipated pain, and analgesic need from surgery. Postoperatively, patients reported the intensity of evoked pain. Regression analysis was performed to generate a predictive model for pain from these measures. A validation cohort of 151 women was enrolled to test the reliability of the model (131 subjects analyzed). Responses to each of the three preoperative questions correlated moderately with 24-h evoked pain intensity (r = 0.24-0.33, P < 0.001). Audio tone rating added uniquely, but minimally, to the model and was not included in the predictive model. The multiple regression analysis yielded a statistically significant model (R = 0.20, P < 0.001), and the validation cohort reliably showed a very similar regression line (R = 0.18). In predicting the upper 20th percentile of evoked pain scores, the optimal cut point was 46.9 (z = 0.24), such that a sensitivity of 0.68 and a specificity of 0.67 were as balanced as possible. This simple three-item questionnaire is useful to help predict postcesarean evoked pain intensity, and could be applied in further research and clinical practice to tailor analgesic therapy to those who need it most.
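A small sketch of the final step described above, finding the cut point at which sensitivity and specificity are as balanced as possible; the questionnaire index and outcome below are simulated stand-ins for the paper's data, and scikit-learn's roc_curve does the threshold sweep.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)

# Hypothetical questionnaire index and "upper 20th percentile pain" outcome;
# placeholders only, not a reconstruction of the study data.
index = rng.normal(50.0, 15.0, size=300)
pain = index + rng.normal(0.0, 20.0, size=300)
severe = pain > np.percentile(pain, 80)

fpr, tpr, thresholds = roc_curve(severe, index)
k = np.argmin(np.abs(tpr - (1.0 - fpr)))   # where sensitivity ~= specificity
print(f"cut point = {thresholds[k]:.1f}, sensitivity = {tpr[k]:.2f}, "
      f"specificity = {1.0 - fpr[k]:.2f}")
```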
Visual Short-Term Memory for Complex Objects in 6- and 8-Month-Old Infants
ERIC Educational Resources Information Center
Kwon, Mee-Kyoung; Luck, Steven J.; Oakes, Lisa M.
2014-01-01
Infants' visual short-term memory (VSTM) for simple objects undergoes dramatic development: 6-month-old infants can store in VSTM information about only a single simple object presented in isolation, whereas 8-month-old infants can store information about simple objects presented in multiple-item arrays. This study extended this work to examine…
The Impact of Prophage on the Equilibria and Stability of Phage and Host
NASA Astrophysics Data System (ADS)
Yu, Pei; Nadeem, Alina; Wahl, Lindi M.
2017-06-01
In this paper, we present a bacteriophage model that includes prophage, that is, phage genomes that are incorporated into the host cell genome. The general model is described by an 18-dimensional system of ordinary differential equations. This study focuses on asymptotic behaviour of the model, and thus the system is reduced to a simple six-dimensional model, involving uninfected host cells, infected host cells and phage. We use dynamical system theory to explore the dynamic behaviour of the model, studying in particular the impact of prophage on the equilibria and stability of phage and host. We employ bifurcation and stability theory, centre manifold and normal form theory to show that the system has multiple equilibrium solutions which undergo a series of bifurcations, finally leading to oscillating motions. Numerical simulations are presented to illustrate and confirm the analytical predictions. The results of this study indicate that in some parameter regimes, the host cell population may drive the phage to extinction through diversification, that is, if multiple types of host emerge; this prediction holds even if the phage population is likewise diverse. This parameter regime is restricted, however, if infecting phage are able to recombine with prophage sequences in the host cell genome.
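The paper's reduced six-dimensional system is not given in the abstract, so the sketch below integrates a generic three-compartment host-phage model (susceptible hosts, infected hosts, free phage) to illustrate the kind of equilibrium behaviour being analysed; the equations and parameter values are a hedged stand-in, not the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters (all illustrative): host growth r, capacity K, adsorption a,
# lysis rate delta, burst size b, phage decay c.
r, K, a, delta, b, c = 1.0, 1e7, 1e-8, 1.0, 50.0, 0.5

def rhs(t, y):
    S, I, P = y                      # susceptible hosts, infected hosts, phage
    infection = a * S * P
    return [r * S * (1 - (S + I) / K) - infection,
            infection - delta * I,   # infected cells lyse at rate delta
            b * delta * I - infection - c * P]

sol = solve_ivp(rhs, (0.0, 200.0), [1e6, 0.0, 1e4], method="LSODA")
print("state at t = 200 (S, I, P):", sol.y[:, -1])
```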
Exploring the impact of multiple grain sizes in numerical landscape evolution model
NASA Astrophysics Data System (ADS)
Guerit, Laure; Braun, Jean; Yuan, Xiaoping; Rouby, Delphine
2017-04-01
Numerical landscape evolution models have been widely developed in order to understand the evolution of landscapes over different time-scales, but also the response of the topography to changes in external conditions, such as tectonics or climate, or to changes in bedrock characteristics, such as its density or erodibility. Few models have coupled the evolution of the relief in erosion to the evolution of the related area in deposition, and in addition, such models generally do not consider the role of the size of the sediments that reach the depositional domain. Here, we present preliminary work based on an enhanced version of FastScape, a very efficient model solving the stream power equation, which now integrates a sedimentary basin at the front of a relief, together with multiple grain sizes in the system. Several simulations were performed in order to explore the impact of multiple grain sizes on the stratigraphy of the marine basin. A simple setting is considered, with uniform uplift rate, precipitation rate, and rock properties onshore. The pros and cons of this approach are discussed with respect to similar simulations that track only a bulk sediment flux.
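A one-dimensional sketch of the stream-power core that FastScape-type models solve, dz/dt = U - K * A^m * S^n, is given below; the explicit scheme, Hack's-law area and all parameter values are illustrative simplifications, and the grain-size and basin coupling the abstract describes are not included.

```python
import numpy as np

# Explicit 1-D stream-power sketch, dz/dt = U - K * A**m * S**n, with a
# Hack's-law drainage area; all parameter values are illustrative.
nx, dx, dt, nsteps = 200, 100.0, 50.0, 5000
U, K, m, n = 1e-3, 2e-5, 0.5, 1.0

x = np.arange(nx) * dx                     # distance downstream from the divide
A = 6.7 * (x + dx) ** 1.67                 # drainage area grows downstream
z = np.linspace(100.0, 0.0, nx)            # initial ramp; outlet held at z = 0

for _ in range(nsteps):
    S = np.maximum(-np.diff(z) / dx, 0.0)  # downstream slope of each segment
    z[:-1] += dt * (U - K * A[:-1] ** m * S ** n)

print(f"mean channel slope after {nsteps} steps: {np.mean(-np.diff(z) / dx):.4f}")
```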
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
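As background to the estimators discussed above, here is a minimal self-normalized importance-sampling sketch in the iid case, with the routine delta-method standard error; the multi-chain, regenerative machinery that is the paper's contribution is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Estimate E_pi[X^2] for pi = N(0,1) using draws from pi1 = Student-t(5).
draws = rng.standard_t(df=5, size=50_000)
w = stats.norm.pdf(draws) / stats.t.pdf(draws, df=5)   # importance weights
h = draws ** 2                                         # E_pi[X^2] = 1 exactly

estimate = np.sum(w * h) / np.sum(w)                   # self-normalized estimator
se = np.sqrt(np.sum((w * (h - estimate)) ** 2)) / np.sum(w)  # iid delta-method SE
print(f"estimate = {estimate:.4f} (truth 1.0), approx. SE = {se:.4f}")
```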
Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.; Esmaeili, S.
2015-12-01
We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; the forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix and the derivatives of the observation data with respect to the model parameters are computed using a finite-differences method. Next, the iterative process of building new models by updating the initial values starts, in order to minimize the objective function. Another measure of the goodness of the final accepted model is the correlation coefficient, which is calculated based on the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
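The sketch below mimics the inversion loop on a stand-in forward model (a single reflection whose delay and amplitude depend on hypothetical permittivity and depth parameters), using SciPy's Levenberg-Marquardt least-squares in place of PEST driving GPRMax; the physics proxy is invented for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 200)

def forward(params):
    """Stand-in forward model: one reflection whose delay and amplitude
    depend on the target's permittivity and depth (hypothetical physics)."""
    eps, depth = params
    delay = 2.0 * depth * np.sqrt(eps) / 0.3          # two-way travel-time proxy
    amp = (np.sqrt(eps) - 1.0) / (np.sqrt(eps) + 1.0) # reflection-strength proxy
    return amp * np.exp(-500.0 * (t - delay) ** 2) * np.cos(60.0 * (t - delay))

data = forward([9.0, 0.02]) + np.random.default_rng(3).normal(0.0, 0.01, t.size)

fit = least_squares(lambda p: forward(p) - data,      # residuals to minimize
                    x0=[6.0, 0.025], method="lm")     # Levenberg-Marquardt
print("recovered (eps, depth):", fit.x)
```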
Modeling of the competition of stimulated Raman and Brillouin scatter in multiple beam experiments
NASA Astrophysics Data System (ADS)
Cohen, Bruce I.; Baldis, Hector A.; Berger, Richard L.; Estabrook, Kent G.; Williams, Edward A.; Labaune, Christine
2001-02-01
Multiple laser beam experiments with plastic target foils at the Laboratoire pour L'Utilisation des Lasers Intenses (LULI) facility [Baldis et al., Phys. Rev. Lett. 77, 2957 (1996)] demonstrated anticorrelation of stimulated Brillouin and Raman backscatter (SBS and SRS). Detailed Thomson scattering diagnostics showed that SBS always precedes SRS, that secondary electron plasma waves sometimes accompanied SRS appropriate to the Langmuir Decay Instability (LDI), and that, with multiple interaction laser beams, the SBS direct backscatter signal in the primary laser beam was reduced while the SRS backscatter signal was enhanced and occurred earlier in time. Analysis and numerical calculations are presented here that evaluate the influences on the competition of SBS and SRS, of local pump depletion in laser hot spots due to SBS, of mode coupling of SBS and LDI ion waves, and of optical mixing of secondary and primary laser beams. These influences can be significant. The calculations take into account simple models of the laser beam hot-spot intensity probability distributions and assess whether ponderomotive and thermal self-focusing are significant. Within the limits of the model, which omits several other potentially important nonlinearities, the calculations suggest the effectiveness of local pump depletion, ion wave mode coupling, and optical mixing in affecting the LULI observations.
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-01-01
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group-level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks, as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group-level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant analysis (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
Query-seeded iterative sequence similarity searching improves selectivity 5–20-fold
Li, Weizhong; Lopez, Rodrigo
2017-01-01
Abstract Iterative similarity search programs, like psiblast, jackhmmer, and psisearch, are much more sensitive than pairwise similarity search methods like blast and ssearch because they build a position specific scoring model (a PSSM or HMM) that captures the pattern of sequence conservation characteristic to a protein family. But models are subject to contamination; once an unrelated sequence has been added to the model, homologs of the unrelated sequence will also produce high scores, and the model can diverge from the original protein family. Examination of alignment errors during psiblast PSSM contamination suggested a simple strategy for dramatically reducing PSSM contamination. psiblast PSSMs are built from the query-based multiple sequence alignment (MSA) implied by the pairwise alignments between the query model (PSSM, HMM) and the subject sequences in the library. When the original query sequence residues are inserted into gapped positions in the aligned subject sequence, the resulting PSSM rarely produces alignment over-extensions or alignments to unrelated sequences. This simple step, which tends to anchor the PSSM to the original query sequence and slightly increase target percent identity, can reduce the frequency of false-positive alignments more than 20-fold compared with psiblast and jackhmmer, with little loss in search sensitivity. PMID:27923999
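A hypothetical helper illustrating the gap back-filling step described above (not the authors' code): wherever the aligned subject has a gap, the original query residue is inserted, anchoring the MSA row to the query.

```python
def seed_with_query(query, q_aln, s_aln):
    """Build the subject's MSA row in query coordinates, back-filling the
    original query residue wherever the subject alignment has a gap.
    q_aln and s_aln are the gapped strings of one pairwise alignment."""
    row, q_pos = [], 0
    for qc, sc in zip(q_aln, s_aln):
        if qc != '-':                                # a query-coordinate column
            row.append(query[q_pos] if sc == '-' else sc)
            q_pos += 1
        # columns that are insertions relative to the query are dropped
    return ''.join(row)

# The subject's internal gap is filled with the query's V and L residues.
print(seed_with_query("MKVLAT", "MKVLAT", "QR--AS"))  # -> QRVLAS
```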
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
Kendal, W S
2000-04-01
To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean alpha*D + beta*D^2, where D was dose and alpha and beta were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
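The resulting survival law is simple to state in code: the surviving fraction is the Poisson probability of zero lethal lesions. The constants below are generic illustrative values, not the paper's fitted ones.

```python
import numpy as np

def lq_survival(dose, alpha, beta):
    """Surviving fraction = P(zero lethal lesions) for a Poisson count with
    mean alpha*D + beta*D**2, i.e. the linear-quadratic model derived above."""
    return np.exp(-(alpha * dose + beta * dose ** 2))

# Illustrative constants only (alpha/beta = 10 Gy, a common tumor-like value).
doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
print(np.round(lq_survival(doses, alpha=0.3, beta=0.03), 4))
```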
Science Opportunity Analyzer (SOA): Science Planning Made Simple
NASA Technical Reports Server (NTRS)
Streiffert, Barbara A.; Polanskey, Carol A.
2004-01-01
For the first time at JPL, the Cassini mission to Saturn is using distributed science operations for developing its experiments. Remote scientists needed the ability to: a) identify observation opportunities; b) create accurate, detailed designs for their observations; c) verify that their designs meet their objectives; d) check their observations against project flight rules and constraints; e) communicate their observations to other scientists. Many existing tools provide one or more of these functions, but Science Opportunity Analyzer (SOA) has been built to unify these tasks into a single application. Accurate: SOA utilizes the JPL Navigation and Ancillary Information Facility (NAIF) SPICE software toolkit, which provides high-fidelity modeling and facilitates rapid adaptation to other flight projects. Portable: available on Unix, Windows and Linux. Adaptable: designed as a multi-mission tool so it can be readily adapted to other flight projects; implemented in Java, Java 3D and other innovative technologies. Conclusion: SOA is easy to use, requiring only six simple steps, and its ability to show the same accurate information in multiple ways (multiple visualization formats, data plots, listings and file output) is essential to meet the needs of a diverse, distributed science operations environment.
Managing hydrological measurements for small and intermediate projects: RObsDat
NASA Astrophysics Data System (ADS)
Reusser, Dominik E.
2014-05-01
Hydrological measurements need good management for the data not to be lost. Multiple, often overlapping files from various loggers with heterogeneous formats need to be merged. Data need to be validated and cleaned and subsequently converted to the format required by the hydrological target application. Preferably, all these steps should be easily traceable. RObsDat is an R package designed to support such data management. It comes with a command-line user interface to support hydrologists in entering and adjusting their data in a database following the Observations Data Model (ODM) standard by CUAHSI. RObsDat helps in the setup of the database within one of the free database engines MySQL, PostgreSQL or SQLite. It imports the controlled water vocabulary from the CUAHSI web service and provides a smart interface between the hydrologist and the database: already existing data entries are detected and duplicates avoided. The data import function converts different data table designs to make import simple. Cleaning and modification of data are handled with a simple version control system. Variable and location names are treated in a user-friendly way, accepting and processing multiple versions. A new development is the use of spacetime objects for subsequent processing.
Are the gyro-ages of field stars underestimated?
NASA Astrophysics Data System (ADS)
Kovács, Géza
2015-09-01
By using the current photometric rotational data on eight Galactic open clusters, we show that the evolutionary stellar model (isochrone) ages of these clusters are tightly correlated with the period shifts applied to the (B - V)0-Prot ridges that optimally align these ridges to the one defined by Praesepe and the Hyades. On the other hand, when the traditional Skumanich-type multiplicative transformation is used, the ridges become far less aligned, due to the age-dependent slope change introduced by the period multiplication. Therefore, we apply our simple additive gyro-age calibration to various datasets of Galactic field stars to test its applicability. We show that, in the overall sense, the gyro-ages are systematically greater than the isochrone ages. The difference can exceed several gigayears, depending on the stellar parameters. Although the age overlap between the open clusters used in the calibration and the field star samples is only partial, the systematic difference indicates the limitations of the currently available gyro-age methods and suggests that the rotation of field stars slows down considerably more gradually than we would expect from a simple extrapolation of the stellar rotation rates in open clusters.
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. For more complex models, such as the multi-strain dynamics used to describe the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, reach their computational limits. However, first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and coexistence of multiple attractors.
NASA Astrophysics Data System (ADS)
De Domenico, Manlio
2018-03-01
Biological systems, from a cell to the human brain, are inherently complex. A powerful representation of such systems, described by an intricate web of relationships across multiple scales, is provided by complex networks. Recently, several studies have highlighted how simple networks - obtained by aggregating or neglecting the temporal or categorical descriptions of biological data - are unable to account for the richness of information characterizing biological systems. More complex models, namely multilayer networks, are needed to account for interdependencies, often varying across time, of biological interacting units within a cell, a tissue or parts of an organism.
On Edge Exchangeable Random Graphs
NASA Astrophysics Data System (ADS)
Janson, Svante
2017-06-01
We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).
Bounding species distribution models
Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.
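One simple reading of the clamping idea, sketched below, is to clip each predictor of a new location to the range seen in model development before predicting; whether to clip or to mask out-of-range cells entirely is exactly the kind of bounding choice the paper compares. All numbers are toy values.

```python
import numpy as np

def clamp_to_training_bounds(x_new, x_train):
    """Clip each environmental predictor to the min/max seen during model
    development before predicting -- one simple reading of 'clamping'."""
    return np.clip(x_new, x_train.min(axis=0), x_train.max(axis=0))

# Toy predictors: columns might be mean temperature (C) and precipitation (mm).
x_train = np.array([[5.0, 200.0], [15.0, 800.0], [25.0, 1500.0]])
x_new = np.array([[30.0, 100.0], [10.0, 900.0]])
print(clamp_to_training_bounds(x_new, x_train))
```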
Multiporosity flow in fractured low-permeability rocks: Extension to shale hydrocarbon reservoirs
Kuhlman, Kristopher L.; Malama, Bwalya; Heath, Jason E.
2015-02-05
We presented a multiporosity extension of classical double- and triple-porosity fractured rock flow models for slightly compressible fluids. The multiporosity model is an adaptation of the multirate solute transport model of Haggerty and Gorelick (1995) to viscous flow in fractured rock reservoirs. It is a generalization of both pseudo-steady-state and transient interporosity flow double-porosity models. The model includes a fracture continuum and an overlapping distribution of multiple rock matrix continua, whose fracture-matrix exchange coefficients are specified through a discrete probability mass function. Semianalytical cylindrically symmetric solutions to the multiporosity mathematical model are developed using the Laplace transform to illustrate its behavior. Furthermore, the multiporosity model presented here is conceptually simple, yet flexible enough to simulate common conceptualizations of double- and triple-porosity flow. This combination of generality and simplicity makes the multiporosity model a good choice for flow modeling in low-permeability fractured rocks.
Disease-induced mortality in density-dependent discrete-time S-I-S epidemic models.
Franke, John E; Yakubu, Abdul-Aziz
2008-12-01
The dynamics of simple discrete-time epidemic models without disease-induced mortality are typically characterized by global transcritical bifurcation. We prove that in corresponding models with disease-induced mortality a tiny number of infectious individuals can drive an otherwise persistent population to extinction. Our model with disease-induced mortality supports multiple attractors. In addition, we use a Ricker recruitment function in an SIS model and obtain a three-component discrete Hopf (Neimark-Sacker) cycle attractor coexisting with a fixed-point attractor. The basin boundaries of the coexisting attractors are fractal in nature, and the example exhibits sensitive dependence of the long-term disease dynamics on initial conditions. Furthermore, we show that, in contrast to corresponding models without disease-induced mortality, the disease-free state dynamics do not drive the disease dynamics.
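A hedged sketch of a discrete-time SIS map with Ricker recruitment and reduced survival for infecteds is given below, iterated from two nearby initial conditions; the parameterization is invented for illustration and is not the authors' model, so it need not reproduce their coexisting attractors.

```python
import numpy as np

# Illustrative parameters: Ricker recruitment (r, K), survival s for
# susceptibles and reduced survival s_i for infecteds, infection pressure
# beta, recovery probability gamma.
r, K, s, s_i, beta, gamma = 3.0, 1000.0, 0.95, 0.55, 0.005, 0.3

def step(S, I):
    recruits = r * (S + I) * np.exp(-(S + I) / K)   # Ricker recruitment
    escape = np.exp(-beta * I)                      # chance to escape infection
    S_next = recruits + s * S * escape + s_i * I * gamma
    I_next = s * S * (1.0 - escape) + s_i * I * (1.0 - gamma)
    return S_next, I_next

for I0 in (1.0, 50.0):                              # nearby initial conditions
    S, I = 800.0, I0
    for _ in range(2000):
        S, I = step(S, I)
    print(f"I0 = {I0:5.1f} -> long-run (S, I) = ({S:.1f}, {I:.1f})")
```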
SCEC UCVM - Unified California Velocity Model
NASA Astrophysics Data System (ADS)
Small, P.; Maechling, P. J.; Jordan, T. H.; Ely, G. P.; Taborda, R.
2011-12-01
The SCEC Unified California Velocity Model (UCVM) is a software framework for a state-wide California velocity model. UCVM provides researchers with two new capabilities: (1) the ability to query Vp, Vs, and density from any standard regional California velocity model through a uniform interface, and (2) the ability to combine multiple velocity models into a single state-wide model. These features are crucial in order to support large-scale ground motion simulations and to facilitate improvements in the underlying velocity models. UCVM provides integrated support for the following standard velocity models: SCEC CVM-H, SCEC CVM-S and the CVM-SI variant, USGS Bay Area (cencalvm), Lin-Thurber Statewide, and other smaller regional models. New models may be easily incorporated as they become available. Two query interfaces are provided: a Linux command line program, and a C application programming interface (API). The C API query interface is simple, fully independent of any specific model, and MPI-friendly. Input coordinates are geographic longitude/latitude and the vertical coordinate may be either depth or elevation. Output parameters include Vp, Vs, and density along with the identity of the model from which these material properties were obtained. In addition to access to the standard models, UCVM also includes a high resolution statewide digital elevation model, Vs30 map, and an optional near-surface geo-technical layer (GTL) based on Ely's Vs30-derived GTL. The elevation and Vs30 information is bundled along with the returned Vp,Vs velocities and density, so that all relevant information is retrieved with a single query. When the GTL is enabled, it is blended with the underlying crustal velocity models along a configurable transition depth range with an interpolation function. Multiple, possibly overlapping, regional velocity models may be combined together into a single state-wide model. This is accomplished by tiling the regional models on top of one another in three dimensions in a researcher-specified order. No reconciliation is performed within overlapping model regions, although a post-processing tool is provided to perform a simple numerical smoothing. Lastly, a 3D region from a combined model may be extracted and exported into a CVM-Etree. This etree may then be queried by UCVM much like a standard velocity model but with less overhead and generally better performance due to the efficiency of the etree data structure.
NASA Astrophysics Data System (ADS)
Rose, D. V.; Miller, C. L.; Welch, D. R.; Clark, R. E.; Madrid, E. A.; Mostrom, C. B.; Stygar, W. A.; Lechien, K. R.; Mazarakis, M. A.; Langston, W. L.; Porter, J. L.; Woodworth, J. R.
2010-09-01
A 3D fully electromagnetic (EM) model of the principal pulsed-power components of a high-current linear transformer driver (LTD) has been developed. LTD systems are a relatively new modular and compact pulsed-power technology based on high-energy-density capacitors and low-inductance switches located within a linear-induction cavity. We model 1-MA, 100-kV, 100-ns rise-time LTD cavities [A. A. Kim et al., Phys. Rev. ST Accel. Beams 12, 050402 (2009)] which can be used to drive z-pinch and material dynamics experiments. The model simulates the generation and propagation of electromagnetic power from individual capacitors and triggered gas switches to a radially symmetric output line. Multiple cavities, combined to provide voltage addition, drive a water-filled coaxial transmission line. A 3D fully EM model of a single 1-MA 100-kV LTD cavity driving a simple resistive load is presented and compared to electrical measurements. A new model of the current loss through the ferromagnetic cores is developed for use both in circuit representations of an LTD cavity and in the 3D EM simulations. Good agreement between the measured core current, a simple circuit model, and the 3D simulation model is obtained. A 3D EM model of an idealized ten-cavity LTD accelerator is also developed. The model results demonstrate efficient voltage addition when driving a matched-impedance load, in good agreement with an idealized circuit model.
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
Abreu, P C; Greenberg, D A; Hodge, S E
1999-09-01
Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase in type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, under a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of the heterozygote between those of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. MMLS-C was both uniformly more powerful than NPL for most cases we examined, except when linkage information was low, and close to the results for the true model under locus heterogeneity. We still found better power for MMLS-C compared with NPL in the affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
A European model and case studies for aggregate exposure assessment of pesticides.
Kennedy, Marc C; Glass, C Richard; Bokkers, Bas; Hart, Andy D M; Hamey, Paul Y; Kruisselbrink, Johannes W; de Boer, Waldo J; van der Voet, Hilko; Garthwaite, David G; van Klaveren, Jacob D
2015-05-01
Exposures to plant protection products (PPPs) are assessed using risk analysis methods to protect public health. Traditionally, single sources, such as food or individual occupational sources, have been addressed. In reality, individuals can be exposed simultaneously to multiple sources. Improved regulation therefore requires the development of new tools for estimating the population distribution of exposures aggregated within an individual. A new aggregate model is described, which allows individual users to include as much, or as little, information as is available or relevant for their particular scenario. Depending on the inputs provided by the user, the outputs can range from simple deterministic values through to probabilistic analyses including characterisations of variability and uncertainty. Exposures can be calculated for multiple compounds, routes and sources of exposure. The aggregate model links to the cumulative dietary exposure model developed in parallel and is implemented in the web-based software tool MCRA. Case studies are presented to illustrate the potential of this model, with inputs drawn from existing European data sources and models. These cover exposures to UK arable spray operators, Italian vineyard spray operators, Netherlands users of a consumer spray and UK bystanders/residents. The model could also be adapted to handle non-PPP compounds. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta
2015-02-01
A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch (Betula pendula) and bird cherry (Padus racemosa), in Latvia. Model stability is estimated by performing multiple model-fitting runs using half of the data for model training and the other half for evaluation. The correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal development rate-temperature relationship and taking into account the necessity of dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model-fitting runs of the simple degree-day-based models. Large variation of the model parameters between different model-fitting runs in the case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by the DDcos model, is 5.6 °C for B. pendula leaf unfolding and 6.7 °C for the start of flowering; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
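The degree-day idea is easy to state in code: accumulate forcing above a base temperature from a fixed start day and predict the date the critical heat sum is reached. The sketch below uses synthetic temperatures and a made-up heat sum; only the 5.6 °C base temperature echoes the value reported above.

```python
import numpy as np

def degree_day_doy(tmean, t_base, f_crit, start_doy=60):
    """Predict the day of year when the heat sum of max(T - t_base, 0),
    accumulated from start_doy (about 1 March), first reaches f_crit."""
    forcing = np.maximum(tmean[start_doy - 1:] - t_base, 0.0)
    cum = np.cumsum(forcing)
    if cum[-1] < f_crit:
        return None                                 # heat sum never reached
    return start_doy + int(np.argmax(cum >= f_crit))

# Synthetic daily mean temperatures for one spring (placeholders, deg C).
rng = np.random.default_rng(4)
doy = np.arange(1, 181)
tmean = -2.0 + 0.15 * doy + rng.normal(0.0, 2.5, doy.size)

# Base temperature near the fitted value reported for B. pendula leaf
# unfolding; the critical heat sum here is a made-up placeholder.
print("predicted leaf-unfolding DOY:", degree_day_doy(tmean, t_base=5.6, f_crit=80.0))
```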
Response of Simple, Model Systems to Extreme Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, Rodney C.; Lang, Maik
2015-07-30
The focus of the research was on the application of high-pressure/high-temperature techniques, together with intense energetic ion beams, to the study of the behavior of simple oxide systems (e.g., SiO2, GeO2, CeO2, TiO2, HfO2, SnO2, ZnO and ZrO2) under extreme conditions. These simple stoichiometries provide unique model systems for the analysis of structural responses to pressures up to and above 1 Mbar, temperatures of up to several thousand kelvin, and the extreme energy density generated by energetic heavy ions (tens of keV/atom). The investigations included systematic studies of radiation- and pressure-induced amorphization of high P-T polymorphs. By studying the response of simple stoichiometries that have multiple structural “outcomes”, we have established the basic knowledge required for the prediction of the response of more complex structures to extreme conditions. We especially focused on the amorphous state and characterized the different non-crystalline structure-types that result from the interplay of radiation and pressure. For such experiments, we made use of recent technological developments, such as the perforated diamond-anvil cell and in situ investigation using synchrotron x-ray sources. We have been particularly interested in using extreme pressures to alter the electronic structure of a solid prior to irradiation. We expected that the effects of modified band structure would be evident in the track structure and morphology, information which is much needed to describe theoretically the fundamental physics of track formation. Finally, we investigated the behavior of different simple-oxide, composite nanomaterials (e.g., uncoated nanoparticles vs. core/shell systems) under coupled, extreme conditions. This provided insight into surface and boundary effects on phase stability under extreme conditions.
Timpka, Toomas; Jacobsson, Jenny; Dahlström, Örjan; Kowalski, Jan; Bargoria, Victor; Ekberg, Joakim; Nilsson, Sverker; Renström, Per
2015-11-01
Athletes' psychological characteristics are important for understanding sports injury mechanisms. We examined the relevance of psychological factors in an integrated model of overuse injury risk in athletics/track and field. Swedish track and field athletes (n=278) entering a 12-month injury surveillance in March 2009 were also invited to complete a psychological survey. Simple Cox proportional hazards models were compiled for single explanatory variables. We also tested multiple models for 3 explanatory variable groupings: an epidemiological model without psychological variables, a psychological model excluding epidemiological variables and an integrated (combined) model. The integrated multiple model included the maladaptive coping behaviour self-blame (p=0.007; HR 1.32; 95% CI 1.08 to 1.61), and an interaction between athlete category and injury history (p<0.001). Youth female (p=0.034; HR 0.51; 95% CI 0.27 to 0.95) and youth male (p=0.047; HR 0.49; 95% CI 0.24 to 0.99) athletes with no severe injury the previous year were at half the risk of sustaining a new injury compared with the reference group. A training load index entered the epidemiological multiple model, but not the integrated model. The coping behaviour self-blame replaced training load in an integrated explanatory model of overuse injury risk in athletes. What seemed to be more strongly related to the likelihood of overuse injury was not the athletics load per se, but, rather, the load applied in situations when the athlete's body was in need of rest. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
ERIC Educational Resources Information Center
Parnafes, Orit
2010-01-01
Many real-world phenomena, even "simple" physical phenomena such as natural harmonic motion, are complex in the sense that they require coordinating multiple subtle foci of attention to get the required information when experiencing them. Moreover, for students to develop sound understanding of a concept or a phenomenon, they need to learn to get…
Simple and inexpensive microfluidic devices for the generation of monodisperse multiple emulsions
NASA Astrophysics Data System (ADS)
Li, Er Qiang; Zhang, Jia Ming; Thoroddsen, Sigurdur T.
2014-01-01
Droplet-based microfluidic devices have become a preferred versatile platform for various fields in physics, chemistry and biology. Polydimethylsiloxane soft lithography, the mainstay for fabricating microfluidic devices, usually requires expensive apparatus and a complex manufacturing procedure. Here, we report the design and fabrication of simple and inexpensive microfluidic devices, based on microscope glass slides and pulled glass capillaries, for generating monodisperse multiple emulsions. The advantages of our method lie in a simple manufacturing procedure, inexpensive processing equipment and flexibility in the surface modification of the designed microfluidic devices. Different types of devices have been designed and tested, and the experimental results demonstrated their robustness for preparing monodisperse single, double, triple and multi-component emulsions.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerical problem of low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
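A compact illustration of the thermodynamic (power-posterior) idea on a conjugate toy problem where the marginal likelihood is known exactly; direct sampling of each heated posterior stands in for the per-temperature MCMC runs, and the temperature grid and sample sizes are arbitrary choices.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(5)

# Toy model with a known answer: y_i ~ N(theta, 1), prior theta ~ N(0, tau2).
n, tau2 = 20, 1.0
y = rng.normal(0.5, 1.0, n)

def mean_log_lik(theta):
    return stats.norm.logpdf(y[:, None], loc=theta).sum(axis=0).mean()

# Power posteriors p_t proportional to L^t * prior are conjugate here, so we
# sample them exactly; each stands in for one heated MCMC run.
temps = np.linspace(0.0, 1.0, 21)
integrand = []
for t in temps:
    prec = 1.0 / tau2 + t * n
    theta = rng.normal(t * y.sum() / prec, np.sqrt(1.0 / prec), size=5000)
    integrand.append(mean_log_lik(theta))          # E_t[log L]

ti_estimate = trapezoid(integrand, temps)          # log marginal likelihood

exact = stats.multivariate_normal.logpdf(
    y, mean=np.zeros(n), cov=np.eye(n) + tau2 * np.ones((n, n)))
print(f"thermodynamic estimate = {ti_estimate:.3f}, exact = {exact:.3f}")
```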
Wagner, Peter J.
2012-01-01
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266
Cocirculation of infectious diseases on networks
NASA Astrophysics Data System (ADS)
Miller, Joel C.
2013-06-01
We consider multiple diseases spreading in a static configuration model network. We make standard assumptions that infection transmits from neighbor to neighbor at a disease-specific rate and infected individuals recover at a disease-specific rate. Infection by one disease confers immediate and permanent immunity to infection by any disease. Under these assumptions, we find a simple, low-dimensional ordinary differential equations model which captures the global dynamics of the infection. The dynamics depend strongly on initial conditions. Although we motivate this Rapid Communication with infectious disease, the model may be adapted to the spread of other infectious agents such as competing political beliefs, or adoption of new technologies if these are influenced by contacts. As an example, we demonstrate how to model an infectious disease which can be prevented by a behavior change.
Collision geometry scaling of Au+Au pseudorapidity density from √(s_NN) = 19.6 to 200 GeV
NASA Astrophysics Data System (ADS)
Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tonjes, M. B.; Tang, J.-L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.
2004-08-01
The centrality dependence of the midrapidity charged-particle multiplicity in Au+Au heavy-ion collisions at √(s_NN) = 19.6 and 200 GeV is presented. Within a simple model, the fraction of hard (scaling with the number of binary collisions) to soft (scaling with the number of participant pairs) interactions is consistent with a value of x = 0.13 ± 0.01 (stat) ± 0.05 (syst) at both energies. The experimental results at both energies, scaled by inelastic p(p̄)+p collision data, agree within systematic errors. The ratio of the data was found not to depend on centrality over the studied range and yields a simple linear scale factor of R_{200/19.6} = 2.03 ± 0.02 (stat) ± 0.05 (syst).
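For reference, the two-component decomposition described above is conventionally written as follows (a standard parameterization consistent with the abstract, not quoted from the paper), with ⟨N_part⟩ the mean number of participants and ⟨N_coll⟩ the mean number of binary collisions:

```latex
\frac{dN_{\mathrm{ch}}}{d\eta} \propto (1 - x)\,\frac{\langle N_{\mathrm{part}}\rangle}{2} + x\,\langle N_{\mathrm{coll}}\rangle
```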
Testing a single regression coefficient in high dimensional linear models.
Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2016-11-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
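A hedged sketch of the CPS idea follows: keep only the predictors most correlated with the target covariate, then run ordinary least squares on the screened design and a z-test on the target's coefficient. The fixed screening size and the homoscedastic variance estimate are simplifications relative to the paper.

```python
# Correlated Predictors Screening (simplified): high-dimensional design,
# but only a handful of predictors enter the OLS fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, target = 100, 500, 0
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[target] = 1.0
y = X @ beta + rng.normal(size=n)

# Screen: predictors most correlated with the target covariate
# (the target itself has correlation 1, so it ranks first).
corr = np.abs([np.corrcoef(X[:, target], X[:, j])[0, 1] for j in range(p)])
keep = np.argsort(corr)[::-1][:10]
Xs = X[:, keep]

# Low-dimensional OLS on the screened design, then a z-test.
coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
sigma2 = np.sum((y - Xs @ coef) ** 2) / (n - Xs.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(Xs.T @ Xs)[0, 0])
z = coef[0] / se
print("z =", z, " p =", 2 * stats.norm.sf(abs(z)))
```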
Modelling nutrition across organizational levels: from individuals to superorganisms.
Lihoreau, Mathieu; Buhl, Jerome; Charleston, Michael A; Sword, Gregory A; Raubenheimer, David; Simpson, Stephen J
2014-10-01
The Geometric Framework for nutrition has been increasingly used to describe how individual animals regulate their intake of multiple nutrients to maintain target physiological states maximizing growth and reproduction. However, only a few studies have considered the potential influences of the social context in which these nutritional decisions are made. Social insects, for instance, have evolved extreme levels of nutritional interdependence in which food collection, processing, storage and disposal are performed by different individuals with different nutritional needs. These social interactions considerably complicate nutrition and raise the question of how nutrient regulation is achieved at multiple organizational levels, by individuals and groups. Here, we explore the connections between individual- and collective-level nutrition by developing a modelling framework integrating concepts of nutritional geometry into individual-based models. Using this approach, we investigate how simple nutritional interactions between individuals can mediate a range of emergent collective-level phenomena in social arthropods (insects and spiders) and provide examples of novel and empirically testable predictions. We discuss how our approach could be expanded to a wider range of species and social systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Chen, Qi; Mirman, Daniel
2012-04-01
One of the core principles of how the mind works is the graded, parallel activation of multiple related or similar representations. Parallel activation of multiple representations has been particularly important in the development of theories and models of language processing, where coactivated representations (neighbors) have been shown to exhibit both facilitative and inhibitory effects on word recognition and production. Researchers generally ascribe these effects to interactive activation and competition, but there is no unified explanation for why the effects are facilitative in some cases and inhibitory in others. We present a series of simulations of a simple domain-general interactive activation and competition model that is broadly consistent with more specialized domain-specific models of lexical processing. The results showed that interactive activation and competition can indeed account for the complex pattern of reversals. Critically, the simulations revealed a core computational principle that determines whether neighbor effects are facilitative or inhibitory: strongly active neighbors exert a net inhibitory effect, and weakly active neighbors exert a net facilitative effect.
Schreiber, Roy E; Avram, Liat; Neumann, Ronny
2018-01-09
High-order elementary reactions in homogeneous solutions involving more than two molecules are statistically improbable and very slow to proceed. They are not generally considered in classical transition-state or collision theories. Yet, rather selective, high-yield product formation is common in self-assembly processes that require many reaction steps. On the basis of recent observations of crystallization as well as reactions in dense phases, it is shown that self-assembly can occur by preorganization of reactants in a noncovalent supramolecular assembly, whereby directing forces can lead to an apparent one-step transformation of multiple reactants. A simple and general kinetic model for multiple reactant transformation in a dense phase that can account for many-bodied transformations was developed. Furthermore, the self-assembly of the polyfluoroxometalate anion [H2F6NaW18O56]7- from the simple tungstate Na2WO2F4 was demonstrated by using 2D 19F-19F NOESY and 2D 19F-19F COSY NMR spectroscopy, a new 2D 19F{183W} NMR technique, as well as ESI-MS and diffusion NMR spectroscopy, and the crucial involvement of a supramolecular assembly was found. The deterministic kinetic reaction model explains the reaction in a dense phase and supports the suggested self-assembly mechanism. Reactions in dense phases may be of general importance in understanding other self-assembly reactions. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Making sense of information in noisy networks: human communication, gossip, and distortion.
Laidre, Mark E; Lamb, Alex; Shultz, Susanne; Olsen, Megan
2013-01-21
Information from others can be unreliable. Humans nevertheless act on such information, including gossip, to make various social calculations, thus raising the question of whether individuals can sort through social information to identify what is, in fact, true. Inspired by the empirical literature on people's decision-making when considering gossip, we built an agent-based simulation model to examine how well simple decision rules could make sense of information as it propagated through a network. Our simulations revealed that a minimalistic decision rule, 'Bit-wise mode', which compared information from multiple sources and then sought a consensus majority for each component bit within the message, was consistently the most successful at converging upon the truth. This decision rule attained high relative fitness even in maximally noisy networks, composed entirely of nodes that distorted the message. The rule was also superior to other decision rules regardless of its frequency in the population. Simulations carried out with variable agent memory constraints, different numbers of observers who initiated information propagation, and a variety of network types suggested that the single most important factor in making sense of information was the number of independent sources that agents could consult. Broadly, our model suggests that despite the distortion information is subject to in the real world, it is nevertheless possible to make sense of it based on simple Darwinian computations that integrate multiple sources. Copyright © 2012 Elsevier Ltd. All rights reserved.
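A minimal sketch of the 'Bit-wise mode' rule as described: for each bit position, take the majority value across all consulted sources. The message format, distortion rate, and number of sources below are illustrative assumptions.

```python
# Per-bit majority consensus over noisy copies of a binary message.
import numpy as np

def bitwise_mode(messages):
    """messages: list of equal-length 0/1 arrays from different sources."""
    votes = np.mean(messages, axis=0)
    return (votes >= 0.5).astype(int)      # majority value at each bit

truth = np.array([1, 0, 1, 1, 0, 0, 1, 0])
rng = np.random.default_rng(2)
# Each source relays a distorted copy: every bit flips with probability 0.3.
sources = [np.where(rng.random(8) < 0.3, 1 - truth, truth) for _ in range(9)]
print("reconstructed:", bitwise_mode(sources), " truth:", truth)
```

With nine independent sources the per-bit majority recovers the truth with high probability even at this distortion level, which mirrors the abstract's finding that the number of independent sources is the decisive factor.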
NASA Astrophysics Data System (ADS)
Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.
2017-05-01
The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R=-0.79) as well as mean grain size and susceptibility (R=-0.78) was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating the mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in worldwide continental shelf systems.
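A hedged sketch contrasting the simple and multiple regression models on synthetic stand-in data (the coefficients and units are invented; the study's transfer functions were calibrated on its 33 shelf samples):

```python
# Predict mean grain size from conductivity alone (simple model) versus
# conductivity plus susceptibility (multiple model) and compare R^2.
import numpy as np

rng = np.random.default_rng(3)
n = 33
cond = rng.uniform(0.5, 2.5, n)              # conductivity (S/m), assumed range
susc = rng.uniform(50, 500, n)               # susceptibility (1e-6 SI), assumed
grain = 5.0 - 1.2 * cond - 0.004 * susc + rng.normal(0, 0.3, n)  # phi units

def ols_r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - np.sum((y - X1 @ b) ** 2) / np.sum((y - y.mean()) ** 2)

print("R^2 simple  :", ols_r2(cond.reshape(-1, 1), grain))
print("R^2 multiple:", ols_r2(np.column_stack([cond, susc]), grain))
```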
NASA Astrophysics Data System (ADS)
Wang, Hailong; Ho, Derek Y. H.; Lawton, Wayne; Wang, Jiao; Gong, Jiangbin
2013-11-01
Recent studies have established that, in addition to the well-known kicked-Harper model (KHM), an on-resonance double-kicked rotor (ORDKR) model also has Hofstadter's butterfly Floquet spectrum, with strong resemblance to the standard Hofstadter spectrum that is a paradigm in studies of the integer quantum Hall effect. Earlier it was shown that the quasienergy spectra of these two dynamical models (i) can exactly overlap with each other if an effective Planck constant takes irrational multiples of 2π and (ii) will be different if the same parameter takes rational multiples of 2π. This work makes detailed comparisons between these two models, with an effective Planck constant given by 2πM/N, where M and N are coprime and odd integers. It is found that the ORDKR spectrum (with two periodic kicking sequences having the same kick strength) has one flat band and N-1 nonflat bands with the largest bandwidth decaying in a power law as ~K^(N+2), where K is a kick strength parameter. The existence of a flat band is strictly proven and the power-law scaling, numerically checked for a number of cases, is also analytically proven for a three-band case. By contrast, the KHM does not have any flat band and its bandwidths scale linearly with K. This is shown to result in dramatic differences in dynamical behavior, such as transient (but extremely long) dynamical localization in ORDKR, which is absent in the KHM. Finally, we show that despite these differences, there exist simple extensions of the KHM and ORDKR model (upon introducing an additional periodic phase parameter) such that the resulting extended KHM and ORDKR model are actually topologically equivalent, i.e., they yield exactly the same Floquet-band Chern numbers and display topological phase transitions at the same kick strengths. A theoretical derivation of this topological equivalence is provided. These results are also of interest to our current understanding of quantum-classical correspondence considering that the KHM and ORDKR model have exactly the same classical limit after a simple canonical transformation.
NASA Astrophysics Data System (ADS)
Ioannidi, P. I.; Le Pourhiet, L.; Moreno, M.; Agard, P.; Oncken, O.; Angiboust, S.
2017-12-01
The physical nature of plate locking and its relation to surface deformation patterns at different time scales (e.g. GPS displacements during the seismic cycle) can be better understood by determining the rheological parameters of the subduction interface. However, since direct rheological measurements are not possible, finite element modelling helps to determine the effective rheological parameters of the subduction interface. We used the open source finite element code pTatin to create 2D models, starting with a homogeneous medium representing shearing at the subduction interface. We tested several boundary conditions that mimic simple shear and opted for the one that best describes Griggs-type simple-shear experiments. After examining different parameters, such as shearing velocity, temperature and viscosity, we added complexity to the geometry by including a second phase. This arises from field observations, where shear zone outcrops are often composites of multiple phases: stronger crustal blocks embedded within a sedimentary and/or serpentinized matrix have been reported for several exhumed subduction zones. We implemented a simplified model to simulate simple shearing of a two-phase medium in order to quantify the effect of heterogeneous rheology on stress and strain localization. Preliminary results show different strength in the models depending on the block-to-matrix ratio. We applied our method to outcrop-scale block-in-matrix geometries and, by sampling at different depths along exhumed former subduction interfaces, we expect to be able to provide the effective friction and viscosity of a natural interface. In a next step, these effective parameters will be used as input into seismic cycle deformation models in an attempt to assess the possible signature of field geometries on the slip behaviour of the plate interface.
NASA Astrophysics Data System (ADS)
Bakker, Alexander; Louchard, Domitille; Keller, Klaus
2016-04-01
Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.
Structural equation modeling in environmental risk assessment.
Buncher, C R; Succop, P A; Dietrich, K N
1991-01-01
Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs, with putative causation working at several different time points (e.g., linkage), are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationships; they then produce fitted models in which multiple inputs cause each factor, and outputs can serve as input variables for the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions are operative concerning believing a model, believing causation has been proven, and the assumptions that are required for each model.
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
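A hedged sketch of a CEAS-style regression with a piecewise-linear technology trend (the weather variables, breakpoint year, and coefficients are invented for illustration):

```python
# Yield regressed on monthly weather plus a piecewise-linear trend whose
# slope is allowed to change after an assumed breakpoint year (1965).
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 1980)
temp = rng.normal(18, 2, len(years))        # July mean temperature (assumed)
precip = rng.normal(80, 20, len(years))     # July precipitation (assumed)
trend1 = years - years[0]
trend2 = np.maximum(years - 1965, 0)        # extra slope after the breakpoint
yld = (15 + 0.3 * trend1 + 0.4 * trend2 - 0.5 * temp + 0.05 * precip
       + rng.normal(0, 2, len(years)))

X = np.column_stack([np.ones(len(years)), trend1, trend2, temp, precip])
b, *_ = np.linalg.lstsq(X, yld, rcond=None)
rmse = np.sqrt(np.mean((yld - X @ b) ** 2))
print("coefficients:", np.round(b, 3), " RMSE:", round(rmse, 2))
```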
Global dynamics in a stoichiometric food chain model with two limiting nutrients.
Chen, Ming; Fan, Meng; Kuang, Yang
2017-07-01
Ecological stoichiometry studies the balance of energy and multiple chemical elements in ecological interactions to establish how nutrient content affects food-web dynamics and nutrient cycling in ecosystems. In this study, we formulate a food chain with two limiting nutrients in the form of a stoichiometric population model. A comprehensive global analysis of the rich dynamics of the targeted model is explored both analytically and numerically. Chaotic dynamics are observed in this simple stoichiometric food chain model and are compared with those of a traditional model without stoichiometry. The detailed comparison reveals that stoichiometry can reduce the parameter space for chaotic dynamics. Our findings also show that decreasing producer production efficiency may have only a small effect on consumer growth but a more profound impact on top predator growth. Copyright © 2017 Elsevier Inc. All rights reserved.
Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achtemeier, Gary L.; Goodrick, Scott A.; Liu, Yongqiang
2011-08-19
We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure including multiple-core updrafts, which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric environment, multiple-core updrafts, and detrainment of particulate matter. The number of empirical coefficients appearing in the model theory is reduced through a sensitivity analysis with the Fourier Amplitude Sensitivity Test (FAST). Daysmoke simulations for 'bent-over' plumes compare closely with Briggs theory although the two-thirds law is not explicit in Daysmoke. However, the solutions for the 'highly-tilted' plume characterized by weak buoyancy, low initial vertical velocity, and large initial plume diameter depart considerably from Briggs theory. Results from a study of weak plumes from prescribed burns at Fort Benning GA showed simulated ground-level PM2.5 comparing favorably with observations taken within the first eight kilometers of eleven prescribed burns. Daysmoke placed plume tops near the lower end of the range of observed plume tops for six prescribed burns. Daysmoke provides the levels and amounts of smoke injected into regional scale air quality models. Results from CMAQ with and without an adaptive grid are presented.
Explanatory Models for Psychiatric Illness
Kendler, Kenneth S.
2009-01-01
How can we best develop explanatory models for psychiatric disorders? Because causal factors have an impact on psychiatric illness both at micro levels and macro levels, both within and outside of the individual, and involving processes best understood from biological, psychological, and sociocultural perspectives, traditional models of science that strive for single broadly applicable explanatory laws are ill suited for our field. Such models are based on the incorrect assumption that psychiatric illnesses can be understood from a single perspective. A more appropriate scientific model for psychiatry emphasizes the understanding of mechanisms, an approach that fits naturally with a multicausal framework and provides a realistic paradigm for scientific progress, that is, understanding mechanisms through decomposition and reassembly. Simple subunits of complicated mechanisms can be usefully studied in isolation. Reassembling these constituent parts into a functioning whole, which is straightforward for simple additive mechanisms, will be far more challenging in psychiatry where causal networks contain multiple nonlinear interactions and causal loops. Our field has long struggled with the interrelationship between biological and psychological explanatory perspectives. Building from the seminal work of the neuronal modeler and philosopher David Marr, the author suggests that biology will implement but not replace psychology within our explanatory systems. The iterative process of interactions between biology and psychology needed to achieve this implementation will deepen our understanding of both classes of processes. PMID:18483135
Dynamical systems, attractors, and neural circuits.
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
Approximate analytical solution for induction heating of solid cylinders
Jankowski, Todd Andrew; Pawley, Norma Helen; Gonzales, Lindsey Michal; ...
2015-10-20
An approximate solution to the mathematical model for induction heating of a solid cylinder in a cylindrical induction coil is presented here. The coupled multiphysics model includes equations describing the electromagnetic field in the heated object, a heat transfer simulation to determine the temperature of the heated object, and an AC circuit simulation of the induction heating power supply. A multiple-scale perturbation method is used to solve the multiphysics model. The approximate analytical solution yields simple closed-form expressions for the electromagnetic field and heat generation rate in the solid cylinder, for the equivalent impedance of the associated tank circuit, and for the frequency response of a variable frequency power supply driving the tank circuit. The solution developed here is validated by comparing predicted power supply frequency to both experimental measurements and calculated values from finite element analysis for heating of graphite cylinders in an induction furnace. The simple expressions from the analytical solution clearly show the functional dependence of the power supply frequency on the material properties of the load and the geometrical characteristics of the furnace installation. In conclusion, the expressions developed here provide physical insight into observations made during load signature analysis of induction heating.
The Top 10 List of Gravitational Lens Candidates from the HUBBLE SPACE TELESCOPE Medium Deep Survey
NASA Astrophysics Data System (ADS)
Ratnatunga, Kavan U.; Griffiths, Richard E.; Ostrander, Eric J.
1999-05-01
A total of 10 good candidates for gravitational lensing have been discovered in the WFPC2 images from the Hubble Space Telescope (HST) Medium Deep Survey (MDS) and archival primary observations. These candidate lenses are unique HST discoveries, i.e., they are faint systems with subarcsecond separations between the lensing objects and the lensed source images. Most of them are difficult objects for ground-based spectroscopic confirmation or for measurement of the lens and source redshifts. Seven are "strong lens" candidates that appear to have multiple images of the source. Three are cases in which the single image of the source galaxy has been significantly distorted into an arc. The first two quadruply lensed candidates were reported by Ratnatunga et al. We report on the subsequent eight candidates and describe them with simple models based on the assumption of singular isothermal potentials. Residuals from the simple models for some of the candidates indicate that a more complex model for the potential will probably be required to explain the full structural detail of the observations once they are confirmed to be lenses. We also discuss the effective survey area that was searched for these candidate lens objects.
An approach to multivariable control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions of computation of the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.
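A hedged single-joint sketch of the described structure follows: PD2 feedforward driven by the desired position, velocity, and acceleration, plus PID feedback on the tracking error. The toy second-order joint model and all gains below are assumptions, not Seraji's explicit gain expressions.

```python
# Single-joint tracking with PD2 feedforward + PID feedback on a toy
# linear joint model  a*qddot + b*qdot + c*q = u.
import numpy as np

dt, T = 0.001, 2.0
Kp, Ki, Kd = 400.0, 100.0, 40.0      # PID feedback gains (assumed)
a, b, c = 1.0, 2.0, 5.0              # toy joint model = feedforward gains
q, qdot, integ = 0.0, 0.0, 0.0

for k in range(int(T / dt)):
    t = k * dt
    qd, qd_dot, qd_ddot = np.sin(t), np.cos(t), -np.sin(t)   # reference
    e = qd - q
    integ += e * dt
    u_fb = Kp * e + Ki * integ + Kd * (qd_dot - qdot)        # PID feedback
    u_ff = a * qd_ddot + b * qd_dot + c * qd                 # PD2 feedforward
    qddot = (u_fb + u_ff - b * qdot - c * q) / a             # joint dynamics
    qdot += qddot * dt
    q += qdot * dt

print("final tracking error:", abs(np.sin(T) - q))
```

Because the feedforward term inverts the (linearized) joint model exactly, the feedback loop only has to remove the initial tracking error, which is the division of labor the abstract describes.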
Body-Vortex Interaction, Sound Generation and Destructive Interference
NASA Technical Reports Server (NTRS)
Kao, Hsiao C.
2000-01-01
It is generally recognized that interaction of vortices with downstream blades is a major source of noise production. To analyze this problem numerically, a two-dimensional model of inviscid flow together with the method of matched asymptotic expansions is proposed. The method of matched asymptotic expansions is used to match the inner region of incompressible flow to the outer region of compressible flow. Because of incompressibility, relatively simple numerical methods are available to treat multiple vortices and multiple bodies of arbitrary shape. Disturbances from vortices and bodies propagate outward as sound waves. Due to their interactions, either constructive or destructive interference may result. When it is destructive, the combined sound intensity can be reduced, sometimes substantially. In addition, an analytical solution to sound generation by the cascade-vortex interaction is given.
NASA Astrophysics Data System (ADS)
Wang, Xu; Le, Anh-Thu; Yu, Chao; Lucchese, R. R.; Lin, C. D.
2016-03-01
We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. A simple single-scattering model is proposed and benchmarked to describe the photoionization process and to do the retrieval using a multiple-parameter fitting method. PMID:27025410
Taking Aim at the Cognitive Side of Learning in Sensorimotor Adaptation Tasks.
McDougle, Samuel D; Ivry, Richard B; Taylor, Jordan A
2016-07-01
Sensorimotor adaptation tasks have been used to characterize processes responsible for calibrating the mapping between desired outcomes and motor commands. Research has focused on how this form of error-based learning takes place in an implicit and automatic manner. However, recent work has revealed the operation of multiple learning processes, even in this simple form of learning. This review focuses on the contribution of cognitive strategies and heuristics to sensorimotor learning, and how these processes enable humans to rapidly explore and evaluate novel solutions to enable flexible, goal-oriented behavior. This new work points to limitations in current computational models, and how these must be updated to describe the conjoint impact of multiple processes in sensorimotor learning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Risk-taking behavior in the presence of nonconvex asset dynamics.
Lybbert, Travis J; Barrett, Christopher B
2011-01-01
The growing literature on poverty traps emphasizes the links between multiple equilibria and risk avoidance. However, multiple equilibria may also foster risk-taking behavior by some poor people. We illustrate this idea with a simple analytical model in which people with different wealth and ability endowments make investment and risky activity choices in the presence of known nonconvex asset dynamics. This model underscores a crucial distinction between familiar static concepts of risk aversion and forward-looking dynamic risk responses to nonconvex asset dynamics. Even when unobservable preferences exhibit decreasing absolute risk aversion, observed behavior may suggest that risk aversion actually increases with wealth near perceived dynamic asset thresholds. Although high ability individuals are not immune from poverty traps, they can leverage their capital endowments more effectively than lower ability types and are therefore less likely to take seemingly excessive risks. In general, linkages between behavioral responses and wealth dynamics often seem to run in both directions. Both theoretical and empirical poverty trap research could benefit from making this two-way linkage more explicit.
Rakotonarivo, S T; Walker, S C; Kuperman, W A; Roux, P
2011-12-01
A method to actively localize a small perturbation in a multiple scattering medium using a collection of remote acoustic sensors is presented. The approach requires only minimal modeling and no knowledge of the scatterer distribution and properties of the scattering medium and the perturbation. The medium is ensonified before and after a perturbation is introduced. The coherent difference between the measured signals then reveals all field components that have interacted with the perturbation. A simple single scatter filter (that ignores the presence of the medium scatterers) is matched to the earliest change of the coherent difference to localize the perturbation. Using a multi-source/receiver laboratory setup in air, the technique has been successfully tested with experimental data at frequencies varying from 30 to 60 kHz (wavelength ranging from 0.5 to 1 cm) for cm-scale scatterers in a scattering medium with a size two to five times bigger than its transport mean free path. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Bhakat, Soumendranath; Åberg, Emil; Söderhjelm, Pär
2018-01-01
Advanced molecular docking methods often aim at capturing the flexibility of the protein upon binding to the ligand. In this study, we investigate whether instead a simple rigid docking method can be applied, if combined with multiple target structures to model the backbone flexibility and molecular dynamics simulations to model the sidechain and ligand flexibility. The methods are tested for the binding of 35 ligands to FXR as part of the first stage of the Drug Design Data Resource (D3R) Grand Challenge 2 blind challenge. The results show that the multiple-target docking protocol performs surprisingly well, with correct poses found for 21 of the ligands. MD simulations started on the docked structures are remarkably stable, but show almost no tendency of refining the structure closer to the experimentally found binding pose. Reconnaissance metadynamics enhances the exploration of new binding poses, but additional collective variables involving the protein are needed to exploit the full potential of the method.
Coactivation of response initiation processes with redundant signals.
Maslovat, Dana; Hajj, Joëlle; Carlsen, Anthony N
2018-05-14
During reaction time (RT) tasks, participants respond faster to multiple stimuli from different modalities as compared to a single stimulus, a phenomenon known as the redundant signal effect (RSE). Explanations for this effect typically include coactivation arising from the multiple stimuli, which results in enhanced processing of one or more response production stages. The current study compared empirical RT data with the predictions of a model in which initiation-related activation arising from each stimulus is additive. Participants performed a simple wrist extension RT task following either a visual go-signal, an auditory go-signal, or both stimuli with the auditory stimulus delayed between 0 and 125 ms relative to the visual stimulus. Results showed statistical equivalence between the predictions of an additive initiation model and the observed RT data, providing novel evidence that the RSE can be explained via a coactivation of initiation-related processes. It is speculated that activation summation occurs at the thalamus, leading to the observed facilitation of response initiation. Copyright © 2018 Elsevier B.V. All rights reserved.
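A hedged simulation of the additive-initiation idea follows: initiation-related activation from each stimulus accumulates linearly and the contributions sum, so two stimuli reach threshold sooner. The accumulation rates and threshold are assumed values, and the paper's model details may differ.

```python
# Additive coactivation account of the redundant signal effect.
dt, threshold = 0.001, 1.0
rate_visual, rate_auditory = 4.0, 5.0    # activation units per second (assumed)

def rt(soa_auditory=None, visual=True, auditory=True):
    """Time for summed activation to reach threshold; soa_auditory delays
    the onset of the auditory contribution (seconds)."""
    act, t = 0.0, 0.0
    while act < threshold:
        t += dt
        if visual:
            act += rate_visual * dt
        if auditory and (soa_auditory is None or t >= soa_auditory):
            act += rate_auditory * dt
    return t

print("visual only          :", rt(auditory=False))
print("auditory only        :", rt(visual=False))
print("redundant (0 ms SOA) :", rt(soa_auditory=0.0))
print("auditory +50 ms delay:", rt(soa_auditory=0.050))
```

The redundant condition is faster than either single-stimulus condition, and delaying the second stimulus shrinks the gain, qualitatively matching the 0-125 ms SOA manipulation in the study.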
Xu, Jiajia; Li, Yuanyuan; Ma, Xiuling; Ding, Jianfeng; Wang, Kai; Wang, Sisi; Tian, Ye; Zhang, Hui; Zhu, Xin-Guang
2013-09-01
Setaria viridis is an emerging model species for genetic studies of C4 photosynthesis. Many basic molecular resources need to be developed to support this species. In this paper, we performed a comprehensive transcriptome analysis from multiple developmental stages and tissues of S. viridis using next-generation sequencing technologies. Sequencing of the transcriptome from multiple tissues across three developmental stages (seed germination, vegetative growth, and reproduction) yielded a total of 71 million single-end 100-bp reads. Reference-based assembly using the Setaria italica genome as a reference generated 42,754 transcripts. De novo assembly generated 60,751 transcripts. In addition, 9,576 and 7,056 potential simple sequence repeats (SSRs) covering the S. viridis genome were identified when using the reference-based assembled transcripts and the de novo assembled transcripts, respectively. The transcripts and SSRs identified in this study can be used for both reverse and forward genetic studies of S. viridis.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
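For orientation, here is a minimal unconstrained Wang-Landau sketch estimating the energy density of states of a small 2D Ising lattice. The paper's macroscopically constrained variant additionally restricts each walk to fixed order-parameter values, which is omitted here, and the usual histogram-flatness check is skipped for brevity.

```python
# Wang-Landau random walk in energy: accept moves with probability
# min(1, g(E)/g(E_new)) and update ln g(E) after every step.
import numpy as np

L = 4
rng = np.random.default_rng(5)
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # Nearest-neighbor Ising energy with periodic boundaries.
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

log_g = {}        # running estimate of ln g(E), keyed by energy
f = 1.0           # modification factor
E = energy(spins)
while f > 1e-4:
    for _ in range(20000):
        i, j = rng.integers(L, size=2)
        spins[i, j] *= -1                 # propose a single spin flip
        E_new = energy(spins)
        if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
            E = E_new                     # accept
        else:
            spins[i, j] *= -1             # reject: undo the flip
        log_g[E] = log_g.get(E, 0.0) + f  # update density of states
    f /= 2.0                              # reduce the modification factor
print(sorted(log_g.items()))
```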
Dynamics of coarsening in multicomponent lipid vesicles with non-uniform mechanical properties
NASA Astrophysics Data System (ADS)
Funkhouser, Chloe M.; Solis, Francisco J.; Thornton, K.
2014-04-01
Multicomponent lipid vesicles are commonly used as a model system for the complex plasma membrane. One phenomenon that is studied using such model systems is phase separation. Vesicles composed of simple lipid mixtures can phase-separate into liquid-ordered and liquid-disordered phases, and since these phases can have different mechanical properties, this separation can lead to changes in the shape of the vesicle. In this work, we investigate the dynamics of phase separation in multicomponent lipid vesicles, using a model that couples composition to mechanical properties such as bending rigidity and spontaneous curvature. The model allows the vesicle surface to deform while conserving surface area and composition. For vesicles initialized as spheres, we study the effects of phase fraction and spontaneous curvature. We additionally initialize two systems with elongated, spheroidal shapes. Dynamic behavior is contrasted in systems where only one phase has a spontaneous curvature similar to the overall vesicle surface curvature and systems where the spontaneous curvatures of both phases are similar to the overall curvature. The bending energy contribution is typically found to slow the dynamics by stabilizing configurations with multiple domains. Such multiple-domain configurations are found more often in vesicles with spheroidal shapes than in nearly spherical vesicles.
Cui, Shuqi; Hong, Ning; Shi, Baochang; Chai, Zhenhua
2016-04-01
In this paper, we focus on the multiple-relaxation-time (MRT) lattice Boltzmann model for two-dimensional convection-diffusion equations (CDEs), and analyze the discrete effect on the halfway bounce-back (HBB) boundary condition (sometimes simply called the bounce-back boundary condition) of the MRT model, where three different discrete velocity models are considered. We first present a theoretical analysis of the discrete effect of the HBB boundary condition for simple problems with a parabolic distribution in the x or y direction; a numerical slip proportional to the second order of the lattice spacing is observed at the boundary, which means that the MRT model has a second-order convergence rate in space. The theoretical analysis also shows that the numerical slip can be eliminated in the MRT model by tuning the free relaxation parameter corresponding to the second-order moment, while it cannot be removed in the single-relaxation-time model or the Bhatnagar-Gross-Krook model unless the relaxation parameter related to the diffusion coefficient is set to a special value. We then perform some simulations to confirm our theoretical results, and find that the numerical results are consistent with our theoretical analysis. Finally, we point out that the present analysis can be extended to other boundary conditions of lattice Boltzmann models for CDEs.
Application-Level Interoperability Across Grids and Clouds
NASA Astrophysics Data System (ADS)
Jha, Shantenu; Luckow, Andre; Merzky, Andre; Erdely, Miklos; Sehgal, Saurabh
Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with growing volumes of data, multiple sources of data, as well as resource types. The primary aim of this chapter is to understand different ways in which application-level interoperability can be provided across distributed infrastructure. We achieve this by (i) using the canonical wordcount application, based on an enhanced version of MapReduce that scales out across clusters, clouds, and HPC resources, (ii) establishing how SAGA enables the execution of the wordcount application using MapReduce and other programming models such as Sphere concurrently, and (iii) demonstrating the scale-out of ensemble-based biomolecular simulations across multiple resources. We show user-level control of the relative placement of compute and data and also provide simple performance measures and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance. Finally, we discuss Azure and some of the system-level abstractions that it provides and show how it is used to support ensemble-based biomolecular simulations.
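For reference, the canonical wordcount expressed in map/reduce style (a local, single-process sketch only; SAGA-MapReduce distributes these phases across heterogeneous resources, and no SAGA API is shown or implied here):

```python
# Local word-count in map/reduce style: map each chunk to (word, 1)
# pairs, then reduce by summing counts per word.
from collections import Counter
from itertools import chain

def map_phase(chunk):
    return [(w.lower(), 1) for w in chunk.split()]

def reduce_phase(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(chain.from_iterable(map_phase(c) for c in chunks)))
```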
A new multiple trauma model of the mouse.
Fitschen-Oestern, Stefanie; Lippross, Sebastian; Klueter, Tim; Weuster, Matthias; Varoga, Deike; Tohidnezhad, Mersedeh; Pufe, Thomas; Rose-John, Stefan; Andruszkow, Hagen; Hildebrand, Frank; Steubesand, Nadine; Seekamp, Andreas; Neunaber, Claudia
2017-11-21
Blunt trauma is the most frequent mechanism of injury in multiple trauma, commonly resulting from road traffic collisions or falls. Two of the most frequent injuries in patients with multiple trauma are chest trauma and extremity fracture. Several trauma mouse models combine chest trauma and head injury, but no trauma mouse model to date includes the combination of long bone fractures and chest trauma. Outcome is essentially determined by the combination of these injuries. In this study, we attempted to establish a reproducible novel multiple trauma model in mice that combines blunt trauma and major injuries with simple practicability. Ninety-six male C57BL/6N mice (n = 8/group) were subjected either to isolated femur fracture or to a combination of femur fracture and chest injury. Serum samples of mice were obtained by heart puncture at defined time points of 0 h (hour), 6 h, 12 h, 24 h, 3 d (days), and 7 d. A tendency toward reduced weight and temperature was observed at 24 h after chest trauma and femur fracture. Blood analyses revealed a decrease in hemoglobin during the first 24 h after trauma. Some animals were killed by heart puncture immediately after chest contusion; these animals showed the most severe lung contusion and hemorrhage. The extent of structural lung injury varied in different mice but was evident in all animals. Representative H&E-stained (Haematoxylin and Eosin-stained) paraffin lung sections of mice with multiple trauma revealed hemorrhage and an inflammatory immune response. Plasma samples of mice with chest trauma and femur fracture showed an up-regulation of IL-1β (Interleukin-1β), IL-6, IL-10, IL-12p70 and TNF-α (Tumor necrosis factor-α) compared with the control group. Mice with femur fracture and chest trauma showed a significant up-regulation of IL-6 compared to the group with isolated femur fracture. The multiple trauma mouse model comprising chest trauma and femur fracture permits many analogies to clinical cases of multiple trauma in humans and demonstrates the associated characteristic clinical and pathophysiological changes. This model is easy to perform, is economical and can be used for further research examining specific immunological questions.
NASA Astrophysics Data System (ADS)
Su, Qi; Li, Aming; Wang, Long
2017-02-01
Spatial reciprocity is generally regarded as a positive rule facilitating the evolution of cooperation. However, a few recent studies show that, in the snowdrift game, spatial structure can still be detrimental to cooperation. Here we propose a model of multiple interactive dynamics, where each individual can cooperate and defect simultaneously against different neighbors. We realize individuals' multiple interactions simply by endowing them with probabilistic strategies, such that each individual cooperates or defects against a given neighbor with some probability. With multiple interactive dynamics, the cooperation level in square lattices is higher than that in the well-mixed case for a wide range of the cost-to-benefit ratio r, implying that spatial structure favors cooperative behavior in the snowdrift game. Moreover, in square lattices, the most favorable strategy follows a simple relation in r, which gives theoretically the average evolutionary frequency of cooperative behavior. We further extend our study to various homogeneous and heterogeneous networks, which demonstrates the robustness of our results. Multiple interactive dynamics thus stabilizes the positive role of spatial structure in the evolution of cooperation, and individuals' distinct reactions to different neighbors can offer a new line in understanding the emergence of cooperation.
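A hedged sketch of probability-based strategies in a lattice snowdrift game follows. The payoff entries use the standard snowdrift form; the synchronous imitate-if-better update is a simplification, not the paper's update rule.

```python
# Each site holds a cooperation probability p and plays the snowdrift
# game against its four lattice neighbors; expected payoffs drive a
# simple imitation update.
import numpy as np

L, r, rounds = 50, 0.4, 200            # r = cost-to-benefit ratio (assumed)
b = 1.0
c = 2 * r * b / (1 + r)                # recover cost c from r = c/(2b - c)
rng = np.random.default_rng(6)
p = rng.random((L, L))                 # per-site cooperation probability

def payoff(p, pn):
    """Expected snowdrift payoff of a p-player against a pn-neighbor:
    CC: b - c/2, CD: b - c, DC: b, DD: 0."""
    return p * pn * (b - c / 2) + p * (1 - pn) * (b - c) + (1 - p) * pn * b

shifts = ((1, 0), (-1, 0), (0, 1), (0, -1))
for _ in range(rounds):
    total = np.zeros((L, L))
    for s in shifts:                   # accumulate payoff over 4 neighbors
        total += payoff(p, np.roll(p, s, axis=(0, 1)))
    s = shifts[rng.integers(4)]        # compare with one random neighbor
    n_p = np.roll(p, s, axis=(0, 1))
    n_total = np.roll(total, s, axis=(0, 1))
    p = np.where(n_total > total, n_p, p)   # imitate if the neighbor did better

print("mean cooperation probability:", p.mean())
```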
Managing bay and estuarine ecosystems for multiple services
Needles, Lisa A.; Lester, Sarah E.; Ambrose, Richard; Andren, Anders; Beyeler, Marc; Connor, Michael S.; Eckman, James E.; Costa-Pierce, Barry A.; Gaines, Steven D.; Lafferty, Kevin D.; Lenihan, Junter S.; Parrish, Julia; Peterson, Mark S.; Scaroni, Amy E.; Weis, Judith S.; Wendt, Dean E.
2013-01-01
Managers are moving from a model of managing individual sectors, human activities, or ecosystem services to an ecosystem-based management (EBM) approach which attempts to balance the range of services provided by ecosystems. Applying EBM is often difficult due to inherent tradeoffs in managing for different services. This challenge particularly holds for estuarine systems, which have been heavily altered in most regions and are often subject to intense management interventions. Estuarine managers can often choose among a range of management tactics to enhance a particular service; although some management actions will result in strong tradeoffs, others may enhance multiple services simultaneously. Management of estuarine ecosystems could be improved by distinguishing between optimal management actions for enhancing multiple services and those that have severe tradeoffs. This requires a framework that evaluates tradeoff scenarios and identifies management actions likely to benefit multiple services. We created a management action-services matrix as a first step towards assessing tradeoffs and providing managers with a decision support tool. We found that management actions that restored or enhanced natural vegetation (e.g., salt marsh and mangroves) and some shellfish (particularly oysters and oyster reef habitat) benefited multiple services. In contrast, management actions such as desalination, salt pond creation, sand mining, and large container shipping had large net negative effects on several of the other services considered in the matrix. Our framework provides resource managers a simple way to inform EBM decisions and can also be used as a first step in more sophisticated approaches that model service delivery.
Interoperation transfer in Chinese-English bilinguals' arithmetic.
Campbell, Jamie I D; Dowd, Roxanne R
2012-10-01
We examined interoperation transfer of practice in adult Chinese-English bilinguals' memory for simple multiplication (6 × 8 = 48) and addition (6 + 8 = 14) facts. The purpose was to determine whether they possessed distinct number-fact representations in both Chinese (L1) and English (L2). Participants repeatedly practiced multiplication problems (e.g., 4 × 5 = ?), answering a subset in L1 and another subset in L2. Then separate groups answered corresponding addition problems (4 + 5 = ?) and control addition problems in either L1 (N = 24) or L2 (N = 24). The results demonstrated language-specific negative transfer of multiplication practice to corresponding addition problems. Specifically, large simple addition problems (sum > 10) presented a significant response time cost (i.e., retrieval-induced forgetting) after their multiplication counterparts were practiced in the same language, relative to practice in the other language. The results indicate that our Chinese-English bilinguals had multiplication and addition facts represented in distinct language-specific memory stores.
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
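A hedged demo of binary-coded multiplexing under mixed noise: encode seven source intensities with the cyclic S-matrix of order 7, add noise with constant plus signal-proportional standard deviation, decode, and compare against one-source-at-a-time measurement. The noise levels and source values are invented.

```python
# S-matrix multiplexing versus identity (one-at-a-time) under mixed noise.
import numpy as np

row = np.array([1, 1, 1, 0, 1, 0, 0])
S = np.array([np.roll(row, k) for k in range(7)])    # cyclic S-matrix, order 7
x = np.array([5.0, 1.0, 3.0, 2.0, 4.0, 1.5, 2.5])    # true source values
rng = np.random.default_rng(7)
sigma_c, alpha = 0.2, 0.05     # constant and proportional noise levels (assumed)

def decoded_variance(M, trials=20000):
    y = M @ x                                        # noiseless measurements
    shape = (trials, len(y))
    noise = (rng.normal(0.0, 1.0, shape) * sigma_c   # constant component
             + rng.normal(0.0, 1.0, shape) * (alpha * y))  # proportional part
    xhat = (np.linalg.inv(M) @ (y + noise).T).T      # decode every trial
    return xhat.var(axis=0).mean()

print("S-matrix multiplexed variance :", decoded_variance(S))
print("one-at-a-time variance        :", decoded_variance(np.eye(7)))
```

Raising alpha relative to sigma_c shifts the advantage away from multiplexing, which illustrates why the optimal coding matrix in the abstract depends on the ratio of proportional to constant noise.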
Liew, Lawrence J; Day, Richard M; Dilley, Rodney J
2017-03-01
Tissue engineering approaches using growth factors and various materials for repairing chronic perforations of the tympanic membrane are being developed, but there are surprisingly few relevant tissue culture models available to test new treatments. Here, we present a simple three-dimensional model system based on micro-dissecting the rat tympanic membrane umbo and grafting it into the membrane of a cell culture well insert. Cell outgrowth from the graft produced sufficient cells to populate a membrane of similar surface area to the human tympanic membrane within 2 weeks. Tissue grafts from the annulus region also showed cell outgrowth but were not as productive. The umbo organoid supported substantial cell proliferation and migration under the influence of keratinocyte growth medium. Cells from umbo grafts were enzymatically harvested from the polyethylene terephthalate (PET) membrane for expansion in routine culture and cells could be harvested consecutively from the same graft over multiple cycles. We used harvested cells to test cell migration properties and to engraft a porous silk scaffold material as proof-of-principle for tissue engineering applications. This model is simple enough to be widely adopted for tympanic membrane regeneration studies and has promise as a tissue-equivalent model alternative to animal testing.
Zivkovic, Milena Z; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan
2017-07-01
A range of force (F) and velocity (V) data obtained from functional movement tasks (e.g., running, jumping, throwing, lifting, cycling) performed under a variety of external loads have typically revealed strong and approximately linear F-V relationships. The regression model parameters reveal the maximum F (F-intercept), V (V-intercept), and power (P) producing capacities of the tested muscles. The aim of the present study was to evaluate the level of agreement between the routinely used "multiple-load model" and a simple "two-load model" based on direct assessment of the F-V relationship from only 2 external loads. Twelve participants were tested on maximum performance vertical jumps, cycling, bench press throws, and bench pulls performed against a variety of different loads. All 4 tested tasks revealed both exceptionally strong relationships between the parameters of the 2 models (median R = 0.98) and a lack of meaningful differences between their magnitudes (fixed bias below 3.4%). Therefore, the addition of another load to the standard tests of various functional tasks typically conducted under a single set of mechanical conditions could allow for the assessment of muscle mechanical properties such as the muscle F, V, and P producing capacities.
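For a linear F-V relationship, the two-load parameters follow directly from the line through the two measured points; the sketch below shows this standard arithmetic with hypothetical jump data.

```python
# Minimal sketch of the "two-load model": fit the linear F-V relationship
# F(V) = F0 - a*V from two load conditions and read off maximum force
# (F0), maximum velocity (V0), and maximum power (Pmax = F0*V0/4).
def two_load_fv(F1, V1, F2, V2):
    a = (F1 - F2) / (V2 - V1)        # slope magnitude of the F-V line
    F0 = F1 + a * V1                 # force-intercept (V = 0)
    V0 = F0 / a                      # velocity-intercept (F = 0)
    Pmax = F0 * V0 / 4.0             # apex of the parabolic P-V curve
    return F0, V0, Pmax

# Hypothetical data: (force in N, velocity in m/s) under two loads
print(two_load_fv(1800.0, 2.4, 2300.0, 1.6))
```

The factor of 4 in Pmax comes from P(V) = V(F0 - aV) peaking at V = V0/2, where F = F0/2.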
Effects of Geometric Variations on Lift Augmentation of Simple-plenum-chamber Ground-effect Models
NASA Technical Reports Server (NTRS)
Davenport, Edwin E.
1961-01-01
Considerable interest has been shown during recent years in ground-effect vehicles. Of the various types proposed, the simple-plenum-chamber vehicle has shown promise because, although the lift augmentation obtainable appears to be less than that of an annular jet, it may be somewhat less complicated structurally. The present investigation was undertaken to study the effects of some geometric variations upon lift augmentation of a simple plenum chamber in ground proximity. The variables included the ratio of inlet area to exit area, plenum-chamber depth, and entrance configuration. An optimum plenum-chamber depth appeared to be between 3 and 10 percent of the plenum-chamber diameter with a ratio of inlet diameter to plenum-chamber diameter of 0.15 for the range of plenum-chamber depths investigated. The most important effect of multiple inlets was the elimination of negative lift augmentation, which was experienced with single sharp-edged inlets, at intermediate heights. Installation of a flared inlet and a turning-vane assembly improved lift augmentation of a single-inlet configuration at intermediate heights.
Ullrich, Thomas; Ermantraut, Eugen; Schulz, Torsten; Steinmetzer, Katrin
2012-01-01
Background: State-of-the-art molecular diagnostic tests are based on the sensitive detection and quantification of nucleic acids. However, currently established diagnostic tests are characterized by elaborate and expensive technical solutions, hindering the development of simple, affordable and compact point-of-care molecular tests. Methodology and Principal Findings: The described competitive reporter monitored amplification allows the simultaneous amplification and quantification of multiple nucleic acid targets by polymerase chain reaction. Target quantification is accomplished by real-time detection of amplified nucleic acids utilizing a capture probe array and specific reporter probes. The reporter probes are fluorescently labeled oligonucleotides that are complementary to the respective capture probes on the array and to the respective sites of the target nucleic acids in solution. Capture probes and amplified target compete for reporter probes. Increasing amplicon concentration leads to a decreased fluorescence signal at the respective capture probe position on the array, which is measured after each cycle of amplification. In order to observe reporter probe hybridization in real time without any additional washing steps, we have developed a mechanical fluorescence background displacement technique. Conclusions and Significance: The system presented in this paper enables simultaneous detection and quantification of multiple targets. Moreover, the presented fluorescence background displacement technique provides a generic solution for real-time monitoring of binding events of fluorescently labeled ligands to surface-immobilized probes. With the model assay for the detection of human immunodeficiency virus type 1 and 2 (HIV 1/2), we have been able to observe the amplification kinetics of five targets simultaneously and accommodate two additional hybridization controls with a simple instrument set-up. The ability to accommodate multiple controls and targets in a single assay and to perform the assay on simple and robust instrumentation is a prerequisite for the development of novel molecular point-of-care tests. PMID:22539973
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh, and of other classes of models that can only be described within the multinomial N-mixture framework.
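To show the structure of one such model, the sketch below evaluates a single-site multinomial N-mixture log-likelihood for the removal protocol, assuming Poisson abundance and a constant per-pass capture probability; this is a generic sketch, not code from the chapter or the unmarked package.

```python
import numpy as np
from scipy.special import gammaln

# Sketch of a multinomial N-mixture likelihood for removal sampling,
# assuming N ~ Poisson(lam) and per-pass capture probability p; the
# cell probabilities pi_j = p*(1-p)**(j-1) follow the removal protocol.
def removal_loglik(y, lam, p, N_max=500):
    J = len(y)
    pi = p * (1 - p) ** np.arange(J)          # captured in pass j = 1..J
    pi0 = 1 - pi.sum()                        # never captured
    n = y.sum()
    ll = -np.inf
    for N in range(n, N_max):                 # marginalize over latent N
        log_pois = N * np.log(lam) - lam - gammaln(N + 1)
        log_mult = (gammaln(N + 1) - gammaln(N - n + 1)
                    - gammaln(y + 1).sum()
                    + (y * np.log(pi)).sum() + (N - n) * np.log(pi0))
        ll = np.logaddexp(ll, log_pois + log_mult)
    return ll

y = np.array([25, 14, 7])                     # removal counts over 3 passes
print(removal_loglik(y, lam=60.0, p=0.4))
```

The "never captured" cell is what lets the declining removal counts inform both abundance and detection, which is the source of the precision gain the abstract describes.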
Demonstrations Using a Fabry-Perot. I. Multiple-Slit Interference
ERIC Educational Resources Information Center
Roychoudhuri, Chandrasekhar
1975-01-01
Describes a demonstration technique for showing multiple-slit interference patterns with the use of a Fabry-Perot etalon and a laser beam. A simple derivation of the analytical expression for such fringes is presented. (Author/CP)
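For context, the standard textbook expressions behind such fringes (general optics results, not quoted from the article) are the N-slit intensity pattern and, for a Fabry-Perot etalon, the Airy transmission profile:

```latex
I(\delta) = I_0 \left[ \frac{\sin(N\delta/2)}{\sin(\delta/2)} \right]^{2},
\qquad \delta = \frac{2\pi d \sin\theta}{\lambda}, \qquad
T_{\mathrm{FP}}(\delta) = \frac{1}{1 + F \sin^{2}(\delta/2)},
\qquad F = \frac{4R}{(1-R)^{2}} .
```

As the reflectivity R grows, the Airy profile sharpens in the same way the N-slit pattern sharpens with increasing N, which is why the etalon serves as a multiple-slit analogue.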
NASA Astrophysics Data System (ADS)
Sutherland, D. A.; Kim, C.; Marsik, M.; Spiridonov, G.; Toft, J.; Ruckelshaus, M.; Guerry, A.; Plummer, M.
2011-12-01
Humans obtain numerous benefits from marine ecosystems, including fish to eat; mitigation of storm damage; nutrient and water cycling and primary production; and cultural, aesthetic and recreational values. However, managing these benefits, or ecosystem services, in the marine world relies on an integrated approach that accounts for both marine and watershed activities. Here we present the results of a set of simple, physically-based, and spatially-explicit models that quantify the effects of terrestrial activities on marine ecosystem services. Specifically, we model the circulation and water quality of Hood Canal, WA, USA, a fjord system in Puget Sound where multiple human uses of the nearshore ecosystem (e.g., shellfish aquaculture, recreational Dungeness crab and shellfish harvest) can be compromised when water quality is poor (e.g., hypoxia, excessive non-point source pollution). Linked to the estuarine water quality model is a terrestrial hydrology model that simulates streamflow and nutrient loading, so land cover and climate changes in watersheds can be reflected in the marine environment. In addition, a shellfish aquaculture model is linked to the water quality model to test the sensitivity of the ecosystem service and its value to both terrestrial and marine activities. The modeling framework is general and will be publicly available, allowing easy comparisons of watershed impacts on marine ecosystem services across multiple scales and regions.
SMT-Aware Instantaneous Footprint Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Probir; Liu, Xu; Song, Shuaiwen
Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.
NASA Technical Reports Server (NTRS)
1978-01-01
The antenna shown is the new, multiple-beam, Unattended Earth Terminal, located at COMSAT Laboratories in Clarksburg, Maryland. Seemingly simple, it is actually a complex structure capable of maintaining contact with several satellites simultaneously (conventional Earth station antennas communicate with only one satellite at a time). In developing the antenna, COMSAT Laboratories used NASTRAN, NASA's structural analysis computer program, together with BANDIT, a companion program. The computer programs were used to model several structural configurations and determine the most suitable. The speed and accuracy of the computerized design analysis afforded appreciable savings in time and money.
Tracking and Control of a Neutral Particle Beam Using Multiple Model Adaptive Meer Filter.
1987-12-01
34 method incorporated by Zicker in 1983 [32]. Once the beam estimation problem had been solved, the problem of beam control was examined. Zicker conducted a...filter. Then, the methods applied by Meer, and later Zicker , to reduce the computational load of a simple Meer filter, will be presented. 2.5.1 Basic...number of possible methods to prune the hypothesis tree and chose the "Best Half Method" as the most viable (21). Zicker [323, applied the work of Weiss
Bit-parallel arithmetic in a massively-parallel associative processor
NASA Technical Reports Server (NTRS)
Scherson, Isaac D.; Kramer, David A.; Alleyne, Brian D.
1992-01-01
A simple but powerful new architecture based on a classical associative processor model is presented. Algorithms for performing the four basic arithmetic operations both for integer and floating point operands are described. For m-bit operands, the proposed architecture makes it possible to execute complex operations in O(m) cycles as opposed to O(m^2) for bit-serial machines. A word-parallel, bit-parallel, massively-parallel computing system can be constructed using this architecture with VLSI technology. The operation of this system is demonstrated for the fast Fourier transform and matrix multiplication.
Jet-A fuel evaporation analysis in conical tube injectors
NASA Technical Reports Server (NTRS)
Lai, M.-C.; Chue, T.-H.; Zhu, G.; Sun, H.; Tacina, R.; Chun, K.; Hicks, Y.
1991-01-01
A simple one-dimensional drop-life-history analysis and a multidimensional spray calculation using the KIVA-II code are applied to the vaporization of Jet-A fuel in multiple tube injectors. Within the assumptions of the analysis, the one-dimensional results are useful for design purposes. The pressure-atomizer breakup models do not accurately predict the dropsize measured experimentally or deduced from the one-dimensional analysis. Cold flow visualization and dropsize measurements show that the capillary wave breakup mechanism plays an important role in determining the spray angle and droplet impingement on the tube wall.
Multiple Weyl points and the sign change of their topological charges in woodpile photonic crystals
NASA Astrophysics Data System (ADS)
Chang, Ming-Li; Xiao, Meng; Chen, Wen-Jie; Chan, C. T.
2017-03-01
We show that Weyl points with topological charges 1 and 2 can be found in very simple chiral woodpile photonic crystals and that the distribution of the charges can be changed by changing the material parameters without altering the space-group symmetry. The underlying physics can be understood through a tight-binding model. Gapless surface states and their backscattering-immune properties are also demonstrated in these systems. Obtaining Weyl points in these easily fabricated woodpile photonic crystals will facilitate the realization of Weyl point physics at optical and IR frequencies.
The effect of tick size on trading volume share in three competing stock markets
NASA Astrophysics Data System (ADS)
Nagumo, Shota; Shimada, Takashi; Ito, Nobuyasu
2016-09-01
The relationship between tick sizes and trading volume share in two and three competing markets is studied theoretically. By introducing a simple model equipped with multiple markets and non-strategic traders, we analytically calculate the share. It is shown that share shifts from the market with the larger tick size to the market with the smaller tick size, and that the size of the shift is determined by the difference between tick sizes, not by their ratio, in both the two-market and three-market cases.
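A toy simulation can make the share-shift mechanism tangible. The sketch below is an assumption-heavy stand-in, not the authors' model: non-strategic buyers face a common fair price, each market quotes that price rounded up to its own tick grid, and buyers choose the market whose quote plus an idiosyncratic preference shock is cheapest.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch (not the paper's exact model): buyers compare
# tick-rounded quotes across markets plus a small preference shock.
def share(ticks, n_traders=100_000, noise=0.05):
    X = rng.uniform(100, 101, n_traders)                 # common fair price
    asks = np.stack([np.ceil(X / t) * t for t in ticks]) # one row per market
    cost = asks + rng.normal(0, noise, asks.shape)       # preference shock
    choice = cost.argmin(axis=0)
    return np.bincount(choice, minlength=len(ticks)) / n_traders

print(share([0.10, 0.01]))          # two competing markets
print(share([0.10, 0.05, 0.01]))    # three competing markets
```

Since the expected rounding premium of a market is roughly half its tick, the quote gap between two markets scales with the difference of tick sizes, which is consistent with the abstract's difference-not-ratio finding.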
Kuris, A M; Mager, M
1975-09-01
Size increase at molt is reduced following multiple limb regeneration in the shore crabs Hemigrapsus oregonensis and Pachygrapsus crassipes. Limb loss per se does not influence postmolt size. The effect of an increasing number of regenerating limbs is additive. Postmolt size is programmed early in the premolt period of the preceding instar and is probably not readily influenced by water uptake mechanics at ecdysis. A simple model for growth, molting, and regeneration in heavily calcified Crustacea is developed from the viewpoint of adaptive strategies and energetic considerations.
Graphical function mapping as a new way to explore cause-and-effect chains
Evans, Mary Anne
2016-01-01
Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
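The core operation is just composition of two plotted relationships. The sketch below illustrates the idea with hypothetical functions (nutrient load to algal biomass, biomass to water clarity); the variables are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of graphical function mapping: project the outcome of one
# relationship (load -> biomass) through another (biomass -> clarity)
# to display the combined cause-and-effect chain.
load = np.linspace(0, 10, 200)
biomass = 50 * load / (2 + load)          # hypothetical saturating response
clarity = 8 * np.exp(-0.04 * biomass)     # hypothetical decay with biomass

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
axes[0].plot(load, biomass); axes[0].set(xlabel="load", ylabel="biomass")
axes[1].plot(biomass, clarity); axes[1].set(xlabel="biomass", ylabel="clarity")
axes[2].plot(load, clarity); axes[2].set(xlabel="load", ylabel="clarity (mapped)")
fig.tight_layout(); plt.show()
```

Either input curve could be replaced by expert judgment or sketched data points without changing the mapping step, which is what makes the method useful across disciplines.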
Atom optics in the time domain
NASA Astrophysics Data System (ADS)
Arndt, M.; Szriftgiser, P.; Dalibard, J.; Steane, A. M.
1996-05-01
Atom-optics experiments are presented using a time-modulated evanescent light wave as an atomic mirror in the trampoline configuration, i.e., perpendicular to the direction of the atomic free fall. This modulated mirror is used to accelerate cesium atoms, to focus their trajectories, and to apply a "multiple lens" to separately focus different velocity classes of atoms originating from a point source. We form images of a simple two-slit object to show the resolution of the device. The experiments are modelled by a general treatment analogous to classical ray optics.
Empirical Reference Distributions for Networks of Different Size
Smith, Anna; Calder, Catherine A.; Browning, Christopher R.
2016-01-01
Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
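The sketch below illustrates the simpler end of this adjustment idea: standardizing an observed statistic against a fully parameterized reference distribution matched on size and density (a single Bernoulli/Erdős-Rényi model rather than the paper's mixture construction).

```python
import networkx as nx
import numpy as np

# Sketch: compare observed clustering to an Erdos-Renyi reference
# matched on node count and density, yielding a size-adjusted score.
def adjusted_statistic(G, n_ref=500, seed=0):
    rng = np.random.default_rng(seed)
    obs = nx.transitivity(G)
    n, p = G.number_of_nodes(), nx.density(G)
    ref = [nx.transitivity(nx.erdos_renyi_graph(n, p, seed=int(s)))
           for s in rng.integers(0, 2**31, n_ref)]
    return (obs - np.mean(ref)) / np.std(ref)   # z-score-style adjustment

G = nx.watts_strogatz_graph(100, 6, 0.1, seed=1)
print(adjusted_statistic(G))
```

The paper's contribution is to replace this single-model reference with a mixture of Bernoulli models reflecting the observed dependence structure, which improves comparability across network sizes.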
Salehifar, Mehdi; Moreno-Equilaz, Manuel
2016-01-01
Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands for application in electric vehicles. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulations and experiments are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
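The FCS-MPC principle is easy to show in miniature: enumerate the finite set of switching states, predict one step ahead with a simple plant model, and apply the state with the lowest cost. The sketch below uses a toy two-leg converter and an Euler current model, not the paper's five-phase drive; all parameters are hypothetical.

```python
import numpy as np
from itertools import product

# Minimal FCS-MPC sketch: enumerate switching states, predict the next
# current with a one-step Euler model, apply the least-cost state.
R, L, Ts, Vdc = 0.5, 1e-3, 50e-6, 48.0   # toy plant parameters

def fcs_mpc_step(i_now, i_ref, e_back):
    best_u, best_cost = None, np.inf
    for sw in product([0, 1], repeat=2):          # toy 2-leg converter
        v = Vdc * (sw[0] - sw[1])                 # applied voltage
        i_next = i_now + Ts / L * (v - R * i_now - e_back)  # prediction
        cost = abs(i_ref - i_next)                # current-tracking cost
        if cost < best_cost:
            best_u, best_cost = sw, cost
    return best_u

print(fcs_mpc_step(i_now=2.0, i_ref=5.0, e_back=10.0))
```

Fault tolerance in this framework amounts to shrinking the enumerated set: switching states that rely on a faulted leg are simply excluded from the search.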
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments, in which complete and explicit dose-response relationships are observed (a) in all experiments or (b) only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method for averaging EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We additionally provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
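The meta-analysis strategy reduces to inverse-variance pooling of per-experiment estimates with a between-experiment heterogeneity term. The sketch below implements the standard DerSimonian-Laird random-effects computation on log-EC50 values; the numbers are hypothetical.

```python
import numpy as np

# Sketch of random-effects averaging of EC50 estimates: inverse-variance
# pooling with a DerSimonian-Laird estimate of between-experiment tau^2.
def dl_average(theta, se):
    w = 1 / se**2
    theta_fe = (w * theta).sum() / w.sum()        # fixed-effect mean
    Q = (w * (theta - theta_fe) ** 2).sum()       # heterogeneity statistic
    k = len(theta)
    tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1 / (se**2 + tau2)                     # random-effects weights
    mu = (w_re * theta).sum() / w_re.sum()
    return mu, np.sqrt(1 / w_re.sum())

logEC50 = np.log10(np.array([1.2, 0.9, 1.6, 1.1]))   # per-experiment EC50s
se = np.array([0.08, 0.10, 0.12, 0.09])              # their standard errors
mu, se_mu = dl_average(logEC50, se)
print(10**mu, se_mu)
```

Working on the log scale keeps the pooled estimate invariant to the direction of the concentration axis and makes the normality assumption more plausible.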
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple 'low' minima. Since in this situation any optimization algorithm is liable to fail to approximate a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space.
Behavior systems and reinforcement: an integrative approach.
Timberlake, W
1993-01-01
Most traditional conceptions of reinforcement are based on a simple causal model in which responding is strengthened by the presentation of a reinforcer. I argue that reinforcement is better viewed as the outcome of constraint of a functioning causal system comprised of multiple interrelated causal sequences, complex linkages between causes and effects, and a set of initial conditions. Using a simplified system conception of the reinforcement situation, I review the similarities and drawbacks of traditional reinforcement models and analyze the recent contributions of cognitive, regulatory, and ecological approaches. Finally, I show how the concept of behavior systems can begin to incorporate both traditional and recent conceptions of reinforcement in an integrative approach. PMID:8354963
The use of dwell time cross-correlation functions to study single-ion channel gating kinetics.
Ball, F G; Kerry, C J; Ramsey, R L; Sansom, M S; Usherwood, P N
1988-01-01
The derivation of cross-correlation functions from single-channel dwell (open and closed) times is described. Simulation of single-channel data for simple gating models, alongside theoretical treatment, is used to demonstrate the relationship of cross-correlation functions to underlying gating mechanisms. It is shown that time irreversibility of gating kinetics may be revealed in cross-correlation functions. Application of cross-correlation function analysis to data derived from the locust muscle glutamate receptor-channel provides evidence for multiple gateway states and time reversibility of gating. A model for the gating of this channel is used to show the effect of omission of brief channel events on cross-correlation functions. PMID:2462924
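The estimator itself is straightforward: correlate each open duration with the k-th closed duration that follows it. The sketch below uses synthetic dwell times in which two interleaved gating modes induce correlation; the data-generating mechanism is invented for illustration.

```python
import numpy as np

# Sketch of a dwell-time cross-correlation estimator: correlation between
# each open duration and the k-th subsequent closed duration. Flat (zero)
# functions are consistent with single-pathway gating; structure points
# to multiple gateway states.
def dwell_crosscorr(open_t, closed_t, max_lag=5):
    out = []
    for k in range(max_lag + 1):
        o = open_t[: len(open_t) - k]
        c = closed_t[k:]
        out.append(np.corrcoef(o, c)[0, 1])
    return np.array(out)

# Hypothetical data: two interleaved gating modes induce correlation.
rng = np.random.default_rng(2)
mode = rng.random(2000) < 0.5
open_t = rng.exponential(np.where(mode, 1.0, 5.0))
closed_t = rng.exponential(np.where(mode, 10.0, 2.0))
print(dwell_crosscorr(open_t, closed_t))
```

Comparing the forward function (open then closed) with its time-reversed counterpart is what reveals irreversibility of the gating kinetics.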
Building Diversified Multiple Trees for classification in high dimensional noisy biomedical data.
Li, Jiuyong; Liu, Lin; Liu, Jixue; Green, Ryan
2017-12-01
It is common that a trained classification model is applied to operating data that deviates from the training data because of noise. This paper tests an ensemble method, Diversified Multiple Tree (DMT), on its capability for classifying instances from a new laboratory using a classifier built on the instances of another laboratory. DMT is tested on three real-world biomedical data sets from different laboratories, in comparison with four benchmark ensemble methods: AdaBoost, Bagging, Random Forests, and Random Trees. Experiments have also been conducted to study the limitations of DMT and its possible variations. Experimental results show that DMT is significantly more accurate than the other benchmark ensemble classifiers at classifying new instances from a laboratory different from the one whose instances were used to build the classifier. This paper demonstrates that an ensemble classifier, DMT, is more robust in classifying noisy data than other widely used ensemble methods. DMT works on data sets that support multiple simple trees.
Small RNA biology is systems biology.
Jost, Daniel; Nowojewski, Andrzej; Levine, Erel
2011-01-01
During the last decade small regulatory RNA (srRNA) emerged as central players in the regulation of gene expression in all kingdoms of life. Multiple pathways for srRNA biogenesis and diverse mechanisms of gene regulation may indicate that srRNA regulation evolved independently multiple times. However, small RNA pathways share numerous properties, including the ability of a single srRNA to regulate multiple targets. Some of the mechanisms of gene regulation by srRNAs have significant effect on the abundance of free srRNAs that are ready to interact with new targets. This results in indirect interactions among seemingly unrelated genes, as well as in a crosstalk between different srRNA pathways. Here we briefly review and compare the major srRNA pathways, and argue that the impact of srRNA is always at the system level. We demonstrate how a simple mathematical model can ease the discussion of governing principles. To demonstrate these points we review a few examples from bacteria and animals.
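The system-level behavior described above is often captured with a small titration model in which free sRNA is consumed by its targets. The sketch below assumes one common form of such a model (it is not taken from this review) and shows the threshold-like response of target mRNA to its own transcription rate.

```python
import numpy as np
from scipy.integrate import odeint

# Sketch of an assumed srRNA titration model: an sRNA s binds and
# degrades a target mRNA m; competition for free sRNA produces a
# threshold-linear response, a system-level effect of sRNA regulation.
def rhs(y, t, a_m, a_s, b_m, b_s, k):
    m, s = y
    dm = a_m - b_m * m - k * m * s    # transcription, decay, co-degradation
    ds = a_s - b_s * s - k * m * s
    return [dm, ds]

t = np.linspace(0, 50, 500)
for a_m in (0.5, 1.0, 2.0):           # vary target transcription rate
    m_end, s_end = odeint(rhs, [0, 0], t, args=(a_m, 1.0, 0.1, 0.1, 5.0))[-1]
    print(f"a_m={a_m:.1f}  steady mRNA={m_end:.3f}  free sRNA={s_end:.3f}")
```

Because every target transcript depletes the same pool of free sRNA, adding a second target to this model immediately couples the two targets, which is the indirect crosstalk the review emphasizes.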
On the interpretation of kernels - Computer simulation of responses to impulse pairs
NASA Technical Reports Server (NTRS)
Hung, G.; Stark, L.; Eykhoff, P.
1983-01-01
A method is presented for the use of a unit impulse response and responses to impulse pairs of variable separation in the calculation of the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate that the larger-time-constant term before multiplication dominates the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The smaller-time-constant term before multiplication combines with the effect of the time constant after multiplication to dominate the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such insight may be helpful in recognizing essential aspects of (second-degree) kernels; it may be used in simplifying the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.
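The impulse-pair trick rests on a simple identity: for a quadratic system, the response to an impulse pair minus the two single-impulse responses isolates the cross term 2·k2(t-a, t-b). The sketch below verifies this on a toy quadratic (linear-multiplier) system; the kernels are invented for illustration.

```python
import numpy as np

# Sketch: extract a second-degree (Volterra) kernel slice from impulse
# pairs. pair - single_a - single_b = 2*k2(t-a, t-b) for a quadratic system.
T = 64
tau = np.arange(T, dtype=float)
k1 = np.exp(-tau / 5.0)                          # linear term before multiplier
h = np.exp(-tau / 10.0)                          # slower term after multiplier

def quadratic_system(x):
    lin = np.convolve(x, k1)[: len(x)]
    return lin + np.convolve(lin * lin, h)[: len(x)]  # linear + squared path

def impulse(at, n=T):
    x = np.zeros(n); x[at] = 1.0
    return x

a, b = 5, 12
cross = (quadratic_system(impulse(a) + impulse(b))
         - quadratic_system(impulse(a)) - quadratic_system(impulse(b)))
print(cross[:20])    # equals 2*k2(t-a, t-b) along the time axis
```

Sweeping the separation b - a traces out the off-diagonal kernel curves whose envelopes the abstract discusses.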
NASA Astrophysics Data System (ADS)
Kuo, Cynthia; Walker, Jesse; Perrig, Adrian
Bluetooth Simple Pairing and Wi-Fi Protected Setup specify mechanisms for exchanging authentication credentials in wireless networks. Both Simple Pairing and Protected Setup support multiple setup mechanisms, which increases security risks and hurts the user experience. To improve the security and usability of these specifications, we suggest defining a common baseline for hardware features and a consistent, interoperable user experience across devices.
NASA Astrophysics Data System (ADS)
Joyce, Hannah; Reaney, Sim
2015-04-01
Catchment systems provide multiple benefits for society, including land for agriculture, climate regulation and recreational space. Yet these systems also have undesirable externalities, such as flooding, and the benefits they create can be compromised through societal use. For example, agriculture, forestry and urban land use practices can increase the export of fine sediment and faecal indicator organisms (FIO) delivered to river systems. These diffuse landscape pressures are coupled with pressures on the in-stream temperature environment from projected climate change. Such pressures can have detrimental impacts on water quality and ecological habitat and consequently the benefits they provide for society. These diffuse and in-stream pressures can be reduced through actions at the landscape scale but are commonly tackled individually. Any intervention may have benefits for other pressures, and hence the challenge is to consider all of the different pressures simultaneously to find solutions with high levels of cross-pressure benefits. This research presents (1) a simple but spatially distributed model to predict the pattern of multiple pressures at the landscape scale, and (2) a method for spatially targeting the optimum location for riparian woodland planting as a mitigation action against these pressures. The model follows a minimal information requirement approach along the lines of SCIMAP (www.scimap.org.uk). This approach defines the critical source areas of fine sediment diffuse pollution, rapid overland flow and FIOs, based on the analysis of the pattern of the pressure in the landscape and the connectivity from source areas to rivers. River temperature was modelled using a simple energy balance equation focusing on the temperature of inflowing and outflowing water across a catchment. The model has been calibrated using a long-term observed temperature record. The modelling outcomes enabled the identification of the severity of each pressure in a relative rather than an absolute sense at the landscape scale. Riparian woodland planting is proposed as one mitigation action to address these pressures. This planting disconnects the transfer of material from the landscape to the river channel by promoting increased infiltration, and also provides river shading and hence decreases the rate of water heating. To identify the optimal locations for riparian woodland planting, a Monte Carlo-based approach was used to identify multiple mitigation options and their influence on the pressures identified. These results were integrated into a decision support tool, which allows the user to explore the implications of individual pressures and of sets of pressures. This is achieved by allowing the user to change the importance of different pressures to identify the optimal locations for a custom combination of pressures. For example, reductions in flood risk can be prioritized over reductions in fine sediment. This approach provides an innovative way of identifying and targeting multiple diffuse pressures at the catchment scale simultaneously, which has presented a challenge in previous management efforts. The approach has been tested in the River Ribble Catchment, North West England.
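As a feel for the temperature component, the sketch below shows one plausible reach-scale energy-balance update; the functional form and parameter values are assumptions for illustration, not the study's calibrated model.

```python
# Minimal sketch of a reach-scale energy balance (assumed form): outflow
# temperature relaxes toward air temperature, with a radiative gain that
# riparian shading reduces.
def reach_temperature(T_in, T_air, solar, shade, k=0.2, c=0.05):
    heat = c * solar * (1 - shade)            # net radiative gain (deg C)
    return T_in + k * (T_air - T_in) + heat

# Hypothetical reach: shading 70% of the channel suppresses the heating
print(reach_temperature(T_in=12.0, T_air=18.0, solar=20.0, shade=0.7))
print(reach_temperature(T_in=12.0, T_air=18.0, solar=20.0, shade=0.0))
```

Chaining such updates downstream is what lets a single planting location influence the temperature pressure over many kilometres of channel.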
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andronov, E.; Vechernin, V.
2016-01-22
The long-range rapidity correlations between multiplicities (n-n) and between transverse momentum and multiplicity (pT-n) of charged particles are analyzed in the framework of a simple string-inspired model with two types of sources. The sources of the first type correspond to the initial strings formed in a hadronic collision. The sources of the second type imitate the appearance of emitters of a new kind resulting from the interaction (fusion) of the initial strings. The model makes it possible to describe effectively the influence of string fusion on the strength of both the n-n and the pT-n correlations. It was found that in the region where string fusion comes into play, the calculations predict non-monotonic behaviour of the n-n and pT-n correlation coefficients with the growth of the mean number of initial strings, i.e., with increasing collision centrality. It was also shown that increasing the event-by-event fluctuation in the number of primary strings changes the sign of the pT-n correlation from negative to positive. One can search for these signatures of string collective phenomena in interactions of various nuclei at different energies by varying the class of collision centrality and its width.
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
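The modeling idea is compact enough to sketch: fit a mixture model to joint auditory-visual cue distributions, then read graded category posteriors off a mismatched token. The sketch below uses scikit-learn's GaussianMixture as a stand-in and invented cue values; it is not the authors' simulation code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of the GMM approach: learn phonological categories from joint
# auditory-visual cue distributions (values hypothetical), then classify
# a mismatched audiovisual token from the learned posteriors.
rng = np.random.default_rng(3)
ba = np.column_stack([rng.normal(0, 1, 500), rng.normal(0, 1, 500)])
da = np.column_stack([rng.normal(4, 1, 500), rng.normal(4, 1, 500)])
X = np.vstack([ba, da])                 # columns: auditory cue, visual cue

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
mismatched = np.array([[0.0, 4.0]])     # auditory /ba/ with visual /da/
print(gmm.predict_proba(mismatched))    # graded, McGurk-like percept
```

Varying the relative variances of the two cue dimensions during "development" is what lets such a model reweight modalities over the learning trajectory.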
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-05-15
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Gutreuter, S.; Boogaard, M.A.
2007-01-01
Predictors of the percentile lethal/effective concentration/dose are commonly used measures of efficacy and toxicity. Typically such quantal-response predictors (e.g., the exposure required to kill 50% of some population) are estimated from simple bioassays wherein organisms are exposed to a gradient of several concentrations of a single agent. The toxicity of an agent may be influenced by auxiliary covariates, however, and more complicated experimental designs may introduce multiple variance components. Prediction methods lag behind for such cases. A conventional two-stage approach consists of multiple bivariate predictions of, say, median lethal concentration, followed by regression of those predictions on the auxiliary covariates. We propose a more effective and parsimonious class of generalized nonlinear mixed-effects models for prediction of lethal/effective dose/concentration from auxiliary covariates. We demonstrate examples using data from a study regarding the effects of pH and additions of variable quantities of 2′,5′-dichloro-4′-nitrosalicylanilide (niclosamide) on the toxicity of 3-trifluoromethyl-4-nitrophenol to larval sea lamprey (Petromyzon marinus). The new models yielded unbiased predictions and root-mean-squared errors (RMSEs) of prediction for the exposures required to kill 50 and 99.9% of some population that were 29 to 82% smaller, respectively, than those from the conventional two-stage procedure. The model class is flexible and easily implemented using commonly available software. © 2007 SETAC.
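The one-stage idea can be illustrated with a single logistic dose-response whose location depends on a covariate; the sketch below uses invented data and a plain logistic form as a stand-in for the paper's model class.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a one-stage alternative to two-stage LC50 regression: fit a
# logistic dose-response whose linear predictor includes a covariate
# (here pH), then read the predicted LC50 at any pH. Data hypothetical.
def mortality(X, b0, b1, b2):
    logc, ph = X
    return 1 / (1 + np.exp(-(b0 + b1 * logc + b2 * ph)))

logc = np.array([0.0, 0.5, 1.0, 1.5, 0.0, 0.5, 1.0, 1.5])
ph   = np.array([7.0, 7.0, 7.0, 7.0, 8.5, 8.5, 8.5, 8.5])
dead = np.array([.02, .20, .70, .97, .10, .55, .93, .99])
(b0, b1, b2), _ = curve_fit(mortality, (logc, ph), dead, p0=(0, 1, 0))
lc50 = lambda ph: 10 ** (-(b0 + b2 * ph) / b1)  # linear predictor = 0
print(lc50(7.0), lc50(8.5))
```

Fitting one joint model propagates all the data into each LC50 prediction, which is the intuition behind the smaller prediction errors reported above.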
Adaptive walking of a quadrupedal robot based on layered biological reflexes
NASA Astrophysics Data System (ADS)
Zhang, Xiuli; Mingcheng, E.; Zeng, Xiangyu; Zheng, Haojun
2012-07-01
A multiple-legged robot is traditionally controlled using its dynamic model. But the dynamic-model-based approach fails to achieve satisfactory performance when the robot faces rough terrain and unknown environments. Referring to animals' neural control mechanisms, a control model is built for a quadruped robot to walk adaptively. The basic rhythmic motion of the robot is controlled by a well-designed rhythmic motion controller (RMC) comprising a central pattern generator (CPG) for the hip joints and a rhythmic coupler (RC) for the knee joints. The CPG and RC are related through motion mapping and rhythmic coupling. Multiple sensory-motor models, abstracted from the neural reflexes of a cat, are employed. These reflex models are organized in three layers and interact with the CPG, to meet different requirements of task complexity and response time. On the basis of the RMC and layered biological reflexes, a quadruped robot is constructed that can clear obstacles, walk uphill and downhill autonomously, and make turns voluntarily in uncertain environments, interacting with the environment in a way similar to that of an animal. The paper provides a biologically inspired architecture with which a robot can walk adaptively in uncertain environments in a simple and effective way, and achieve better performance.
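A common abstraction of such a CPG, used here only as an illustrative stand-in for the paper's controller, is a set of coupled phase oscillators that lock onto the phase offsets of a gait. The sketch below encodes a trot (diagonal legs in phase) and reads hip angles from the phases.

```python
import numpy as np
from scipy.integrate import odeint

# Sketch of a CPG as four coupled phase oscillators (Kuramoto-style)
# with fixed offsets encoding a trot gait; hip angles are read from
# the phases, and a knee coupler could be driven from them.
trot = np.array([0.0, np.pi, np.pi, 0.0])     # LF, RF, LH, RH offsets

def cpg(phi, t, w=2 * np.pi, K=4.0):
    # each oscillator is pulled toward its neighbours' phase plus offset
    return w + K * np.sum(np.sin(phi[None, :] - phi[:, None]
                                 - (trot[None, :] - trot[:, None])), axis=1)

t = np.linspace(0, 5, 1000)
phi = odeint(cpg, np.random.default_rng(4).uniform(0, 6.28, 4), t)
hip_angles = 0.3 * np.sin(phi)                # rad; mapped to hip joints
print(np.round((phi[-1] - phi[-1][0]) % (2 * np.pi), 2))  # locked offsets
```

Reflex layers in such an architecture act by perturbing the oscillator phases or the joint mapping, leaving the rhythmic core untouched.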
Wallace, B M N; Searle, J B; Everett, C A
2002-01-01
The influence of Robertsonian (Rb) heterozygosity on fertility has been the subject of much study in the house mouse. However, these studies have been largely directed at single simple heterozygotes (heterozygous for a single Rb metacentric) or complex heterozygotes (heterozygous for several to many metacentrics which share common chromosome arms). In this paper we describe studies on male multiple simple heterozygotes, specifically the F(1) products of crosses between wild-stock mice homozygous for four or seven metacentrics and wild-stock mice with a standard all-acrocentric karyotype; these F(1) products were characterized by four and seven trivalents at meiosis I, respectively. Mice with the same karyotype but two different genetic backgrounds were examined. Although a range of meiotic and fertility studies were conducted, particular emphasis was placed on the analysis of chromosome pairing, previously not well-described in multiple simple heterozygous mice. The progression of spermatocytes through prophase I was followed by electron microscopy of surface spread material. As previously shown for single simple Rb heterozygotes, the trivalents that characterize multiple simple heterozygotes initially showed delayed pairing of the centromeric region and later showed side arm formation, resulting from non-homologous pairing by the centromeric ends of the acrocentric chromosomes. In the four trivalent groups of mice, 15 and 32% of trivalents showed unpairing in the centromeric region at mid pachytene; equivalent values were 29 and 39% for the seven trivalent groups. Pairing abnormalities (largely attachments and interlocks between trivalents and between a trivalent and the XY configuration) were observed in 18 and 23% of mid pachytene cells in the four trivalent groups and 36 and 49% of cells in the seven trivalent groups. The greater level of pachytene irregularity (unpairing and pairing abnormalities) in seven versus four trivalent heterozygotes was mirrored in terms of higher anaphase I nondisjunction frequency and lower germ cell counts. However, while pachytene irregularities appear to contribute to germ cell death, examples of male sterility in our material undoubtedly also involve genic incompatibilities. Copyright 2002 S. Karger AG, Basel
Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The Williams-type yield model is based on multiple regression analysis of historical time series data at the CRD level pooled to the regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and that performance based on root mean square error appears to be acceptable for the intended AgRISTARS large-area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Experimental validation of a linear model for data reduction in chirp-pulse microwave CT.
Miyakawa, M; Orikasa, K; Bertero, M; Boccacci, P; Conte, F; Piana, M
2002-04-01
Chirp-pulse microwave computerized tomography (CP-MCT) is an imaging modality developed at the Department of Biocybernetics, University of Niigata (Niigata, Japan), which intends to reduce the microwave-tomography problem to an X-ray-like situation. We have recently shown that data acquisition in CP-MCT can be described in terms of a linear model derived from scattering theory. In this paper, we validate this model by showing that the theoretically computed response function is in good agreement with the one obtained from a regularized multiple deconvolution of three data sets measured with the prototype of CP-MCT. Furthermore, the reliability of the model, as far as image restoration is concerned, is tested in the case of space-invariant conditions by considering the reconstruction of simple on-axis cylindrical phantoms.
Validation of optical codes based on 3D nanostructures
NASA Astrophysics Data System (ADS)
Carnicer, Artur; Javidi, Bahram
2017-05-01
Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask and a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forests machine learning algorithms.
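The angular-spectrum propagation step is standard and short to write down. The sketch below applies it to a plain-text aperture multiplied by a random phase mask; the wavelength, sampling, and distance are hypothetical.

```python
import numpy as np

# Sketch of angular-spectrum propagation: U(z) = IFFT{ FFT{U(0)} *
# exp(i*kz*z) } with kz = sqrt(k^2 - kx^2 - ky^2) (complex sqrt damps
# the evanescent components).
def angular_spectrum(U0, wavelength, dx, z):
    n = U0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(U0) * np.exp(1j * kz * z))

# plain-text aperture multiplied by a random phase mask (the diffuser)
rng = np.random.default_rng(5)
U0 = np.zeros((256, 256)); U0[96:160, 96:160] = 1.0
U0 = U0 * np.exp(2j * np.pi * rng.random(U0.shape))
speckle = np.abs(angular_spectrum(U0, 633e-9, 5e-6, 1e-3)) ** 2
print(speckle.mean(), speckle.std())   # speckle-like intensity statistics
```

Stacking such propagation steps, with a fresh random phase and an amplitude blur at each diffuser layer, gives the 3D-mask forward model described above.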
NASA Astrophysics Data System (ADS)
Behzad, Mehdi; Ghadami, Amin; Maghsoodi, Ameneh; Michael Hale, Jack
2013-11-01
In this paper, a simple method for the detection of multiple edge cracks in Euler-Bernoulli beams having two different types of cracks is presented, based on energy equations. Each crack is modeled as a massless rotational spring using Linear Elastic Fracture Mechanics (LEFM) theory, and a relationship among natural frequencies, crack locations, and stiffnesses of the equivalent springs is demonstrated. In the procedure, for detection of m cracks in a beam, 3m equations and the natural frequencies of the healthy and cracked beam in two different directions are needed as input to the algorithm. The main accomplishment of the presented algorithm is the capability to detect the location, severity, and type of each crack in a multi-cracked beam. Concise and simple calculations, along with accuracy, are other advantages of this method. A number of numerical examples for cantilever beams including one and two cracks are presented to validate the method.
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical model of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.
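The combiners themselves are one-liners over sorted classifier outputs. The sketch below shows median, max, and a trimmed combiner on illustrative posterior estimates where one classifier disagrees sharply with the others.

```python
import numpy as np

# Sketch of order-statistics combiners: per class, combine the sorted
# classifier outputs via an order statistic (median, max) or a trimmed
# linear combination that drops the extremes.
outputs = np.array([[0.70, 0.20, 0.10],    # classifier 1
                    [0.60, 0.30, 0.10],    # classifier 2
                    [0.10, 0.80, 0.10]])   # classifier 3 (disparate)

med = np.median(outputs, axis=0)           # robust to the outlier
mx = outputs.max(axis=0)
srt = np.sort(outputs, axis=0)
trim = srt[1:-1].mean(axis=0)              # drop min and max per class
for name, comb in [("median", med), ("max", mx), ("trim", trim)]:
    print(name, comb / comb.sum(), "->", comb.argmax())
```

With only three classifiers the trimmed combiner coincides with the median; with larger ensembles it interpolates between the mean and the median, which is where its robustness gains come from.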
Heterogeneous distribution of metabolites across plant species
NASA Astrophysics Data System (ADS)
Takemoto, Kazuhiro; Arita, Masanori
2009-07-01
We investigate the distribution of flavonoids, a major category of plant secondary metabolites, across species. Flavonoids are known to show high species specificity, and were once considered as chemical markers for understanding adaptive evolution and characterization of living organisms. We investigate the distribution among species using bipartite networks, and find that two heterogeneous distributions are conserved among several families: the power-law distributions of the number of flavonoids in a species and the number of shared species of a particular flavonoid. In order to explain the possible origin of the heterogeneity, we propose a simple model with, essentially, a single parameter. As a result, we show that two respective power-law statistics emerge from simple evolutionary mechanisms based on a multiplicative process. These findings provide insights into the evolution of metabolite diversity and characterization of living organisms that defy genome sequence analysis for different reasons.
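One classic multiplicative mechanism of the kind invoked above is a rich-get-richer (Yule/preferential attachment) process; the sketch below is a generic illustration of that mechanism, not necessarily the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(8)

# Sketch of a multiplicative process: each new flavonoid is assigned to
# a species with probability proportional to the species' current
# repertoire size, producing a heavy, power-law-like tail.
counts = np.ones(200)                      # 200 species, one compound each
for _ in range(20000):
    i = rng.choice(len(counts), p=counts / counts.sum())
    counts[i] += 1
print(np.sort(counts)[::-1][:10])          # a few species dominate
```

Plotting the resulting counts on log-log axes shows the approximately straight tail that motivates fitting power-law distributions to the empirical flavonoid data.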
More asymptotic safety guaranteed
NASA Astrophysics Data System (ADS)
Bond, Andrew D.; Litim, Daniel F.
2018-04-01
We study interacting fixed points and phase diagrams of simple and semisimple quantum field theories in four dimensions involving non-Abelian gauge fields, fermions and scalars in the Veneziano limit. Particular emphasis is put on new phenomena which arise due to the semisimple nature of the theory. Using matter field multiplicities as free parameters, we find a large variety of interacting conformal fixed points with stable vacua and crossovers in between. Highlights include semisimple gauge theories with exact asymptotic safety, theories with one or several interacting fixed points in the IR, theories where one of the gauge sectors is both UV free and IR free, and theories with weakly interacting fixed points in the UV and the IR limits. The phase diagrams for various simple and semisimple settings are also given. Further aspects such as perturbativity beyond the Veneziano limit, conformal windows, and implications for model building are discussed.
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term in the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
van Rhee, Henk; Hak, Tony
2017-01-01
We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932
Learning and inference using complex generative models in a spatial localization task.
Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N
2016-01-01
A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
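The core computation in such a task is textbook Bayes: multiply a learned bimodal prior by a Gaussian sensory likelihood and take the posterior mean. The sketch below does this on a grid with invented parameters.

```python
import numpy as np

# Sketch of Bayes-optimal localization with a learned bimodal prior:
# posterior over position = bimodal prior x Gaussian likelihood from
# the uncertain sensory cue; the optimal estimate is its mean.
x = np.linspace(-10, 10, 2001)
g = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / s   # unnormalized

prior = 0.5 * g(-4.0, 0.8) + 0.5 * g(4.0, 2.5)   # two modes, unequal spread
cue = 3.0                                        # noisy sensory sample
likelihood = g(cue, 1.5)                         # sensory uncertainty

posterior = prior * likelihood
posterior /= np.trapz(posterior, x)
print("posterior mean:", np.trapz(x * posterior, x))
```

Because the cue lands near the broader mode, the posterior mean is pulled less strongly toward that mode's center than it would be toward the tighter mode, which is exactly the variance-sensitivity that participants acquired only slowly.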
Audiovisual Fundamentals; Basic Equipment Operation and Simple Materials Production.
ERIC Educational Resources Information Center
Bullard, John R.; Mether, Calvin E.
A guide illustrated with simple sketches explains the functions and step-by-step uses of audiovisual (AV) equipment. Principles of projection, audio, AV equipment, lettering, limited-quantity and quantity duplication, and materials preservation are outlined. Apparatus discussed include overhead, opaque, slide-filmstrip, and multiple-loading slide…
Physical meaning of the multiplicities of emitted nucleons in hadron-nucleus collisions
NASA Technical Reports Server (NTRS)
Strugalski, Z.
1985-01-01
The analysis of experimental data on hadron-nucleus collisions at energies from about 2 up to about 400 GeV was performed in order to discover a physical meaning of the multiplicity of emitted nucleons. Simple relations between the multiplicities and the thickness of the nuclear matter layer involved in collisions were obtained.
Multiple Questions Require Multiple Designs: An Evaluation of the 1981 Changes to the AFDC Program.
ERIC Educational Resources Information Center
Hedrick, Terry E.; Shipman, Stephanie L.
1988-01-01
Changes made in 1981 to the Aid to Families with Dependent Children (AFDC) program under the Omnibus Budget Reconciliation Act were evaluated. Multiple quasi-experimental designs (interrupted time series, non-equivalent comparison groups, and simple pre-post designs) used to address evaluation questions illustrate the issues faced by evaluators in…
Comparison of CEAS and Williams-type models for spring wheat yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1982-01-01
The CEAS and Williams-type yield models are both based on multiple regression analysis of historical time series data at CRD level. The CEAS model develops a separate relation for each CRD; the Williams-type model pools CRD data to regional level (groups of similar CRDs). Basic variables considered in the analyses are USDA yield, monthly mean temperature, monthly precipitation, and variables derived from these. The Williams-type model also used soil texture and topographic information. Technological trend is represented in both by piecewise linear functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test of each model (1970-1979) demonstrate that the models are very similar in performance in all respects. Both models are about equally objective, adequate, timely, simple, and inexpensive. Both consider scientific knowledge on a broad scale but not in detail. Neither provides a good current measure of modeled yield reliability. The CEAS model is considered very slightly preferable for AgRISTARS applications.
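The shared regression structure of these models (weather terms plus a piecewise-linear technology trend) is easy to sketch; the synthetic data and the hinge year below are illustrative assumptions, not AgRISTARS values.

```python
import numpy as np

# Sketch of a CEAS/Williams-style yield model: OLS on historical data
# with weather terms plus a piecewise-linear trend (hinge at 1965).
years = np.arange(1950, 1980)
rng = np.random.default_rng(6)
temp = rng.normal(18, 2, years.size)              # July mean temperature
prec = rng.normal(80, 25, years.size)             # July precipitation
trend = years - years[0]
hinge = np.maximum(0, years - 1965)               # trend break in 1965
yield_ = (12 + 0.25 * trend + 0.15 * hinge - 0.4 * (temp - 18)
          + 0.02 * prec + rng.normal(0, 1.5, years.size))

X = np.column_stack([np.ones_like(trend), trend, hinge, temp, prec])
beta, *_ = np.linalg.lstsq(X, yield_, rcond=None)
print(np.round(beta, 3))                          # recovered coefficients
```

The bootstrap reliability tests mentioned in these abstracts amount to refitting this regression with each year held out in turn and scoring the held-out predictions.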
Hoppel, Magdalena; Mahrhauser, Denise; Stallinger, Christina; Wagner, Florian; Wirth, Michael; Valenta, Claudia
2014-05-01
The aim of this study was to create multiple water-in-oil-in-water (W/O/W) emulsions with increased long-term stability as skin delivery systems for the hydrophilic model drug 5-fluorouracil. Multiple W/O/W emulsions were prepared in a one-step emulsification process and characterized with regard to particle size, microstructure and viscosity. In-vitro studies on porcine skin with Franz-type diffusion cells, tape-stripping experiments and attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) were performed. The addition of Solagum AX, a natural polymer mixture of acacia and xanthan gum, led to multiple W/O/W emulsions with remarkably increased long-term stability in comparison to formulations without a thickener. The higher skin diffusion of 5-fluorouracil from the multiple emulsions compared with an O/W macroemulsion could be explained by ATR-FTIR: shifts to higher wave numbers and increases in the peak areas of the asymmetric and symmetric CH2 stretching vibrations confirmed a transition of part of the skin lipids from an ordered to a disordered state after impregnation of porcine skin with the multiple emulsions. Solagum AX is highly suitable for stabilization of the created multiple emulsions. Moreover, these formulations showed superiority over a simple O/W macroemulsion regarding skin permeation and penetration of 5-fluorouracil. © 2013 Royal Pharmaceutical Society.
Models of emergency departments for reducing patient waiting times.
Laskowski, Marek; McLeod, Robert D; Friesen, Marcia R; Podaima, Blake W; Alfa, Attahiru S
2009-07-02
In this paper, we apply both agent-based models and queuing models to investigate patient access and patient flow through emergency departments. The objective of this work is to gain insights into the comparative contributions and limitations of these complementary techniques, in their ability to contribute empirical input into healthcare policy and practice guidelines. The models were developed independently, with a view to compare their suitability to emergency department simulation. The current models implement relatively simple general scenarios, and rely on a combination of simulated and real data to simulate patient flow in a single emergency department or in multiple interacting emergency departments. In addition, several concepts from telecommunications engineering are translated into this modeling context. The framework of multiple-priority queue systems and the genetic programming paradigm of evolutionary machine learning are applied as a means of forecasting patient wait times and as a means of evolving healthcare policy, respectively. The models' utility lies in their ability to provide qualitative insights into the relative sensitivities and impacts of model input parameters, to illuminate scenarios worthy of more complex investigation, and to iteratively validate the models as they continue to be refined and extended. The paper discusses future efforts to refine, extend, and validate the models with more data and real data relative to physical (spatial-topographical) and social inputs (staffing, patient care models, etc.). Real data obtained through proximity location and tracking system technologies is one example discussed.
A Multiple Scattering Polarized Radiative Transfer Model: Application to HD 189733b
NASA Astrophysics Data System (ADS)
Kopparla, Pushkar; Natraj, Vijay; Zhang, Xi; Swain, Mark R.; Wiktorowicz, Sloane J.; Yung, Yuk L.
2016-01-01
We present a multiple scattering vector radiative transfer model that produces disk-integrated, full-phase polarized light curves for reflected light from an exoplanetary atmosphere. We validate our model against results from published analytical and computational models and discuss a small number of cases relevant to existing and possible near-future observations of the exoplanet HD 189733b. HD 189733b is arguably the best-observed exoplanet to date and the only exoplanet to have been observed in polarized light, yet it remains debated whether the planet's atmosphere is cloudy or clear. We model reflected light from clear atmospheres with Rayleigh scattering, and cloudy or hazy atmospheres with Mie and fractal aggregate particles. We show that clear and cloudy atmospheres have large differences in polarized light as compared to simple flux measurements, though existing observations are insufficient to make this distinction. Furthermore, we show that atmospheres that are spatially inhomogeneous, such as being partially covered by clouds or hazes, exhibit larger contrasts in polarized light when compared to clear atmospheres. This effect can potentially be used to identify patchy clouds in exoplanets. Given a set of full-phase polarimetric measurements, this model can constrain the geometric albedo, the properties of scattering particles in the atmosphere, and the longitude of the ascending node of the orbit. The model is used to interpret new polarimetric observations of HD 189733b in a companion paper.
Random intermittent search and the tug-of-war model of motor-driven transport
NASA Astrophysics Data System (ADS)
Newby, Jay; Bressloff, Paul C.
2010-04-01
We formulate the 'tug-of-war' model of microtubule cargo transport by multiple molecular motors as an intermittent random search for a hidden target. A motor complex consisting of multiple molecular motors with opposing directional preference is modeled using a discrete Markov process. The motors randomly pull each other off the microtubule so that the state of the motor complex is determined by the number of bound motors. The tug-of-war model prescribes the state transition rates and corresponding cargo velocities in terms of experimentally measured physical parameters. We add space to the resulting Chapman-Kolmogorov (CK) equation so that we can consider delivery of the cargo to a hidden target at an unknown location along the microtubule track. The target represents some subcellular compartment such as a synapse in a neuron's dendrites, and target delivery is modeled as a simple absorption process. Using a quasi-steady-state (QSS) reduction technique we calculate analytical approximations of the mean first passage time (MFPT) to find the target. We show that there exists an optimal adenosine triphosphate (ATP) concentration that minimizes the MFPT for two different cases: (i) the motor complex is composed of equal numbers of kinesin motors bound to two different microtubules (symmetric tug-of-war model) and (ii) the motor complex is composed of different numbers of kinesin and dynein motors bound to a single microtubule (asymmetric tug-of-war model).
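The MFPT machinery can be illustrated with a much simpler discrete stand-in for the CK equation: a biased random walk on a one-dimensional track with a single absorbing target, where the vector of mean first passage times solves the linear system (I - Q)t = 1 over the transient states. The bias and site count below are illustrative, not the paper's tug-of-war parameters.

```python
import numpy as np

def mfpt_to_target(n_sites, target, p_right=0.55):
    """Mean first passage time to an absorbing target site for a
    discrete-time biased random walk with reflecting boundaries.
    Solves (I - Q) t = 1, where Q is the transition matrix restricted
    to the transient (non-target) states."""
    P = np.zeros((n_sites, n_sites))
    for i in range(n_sites):
        if i == target:
            P[i, i] = 1.0          # absorbing target
            continue
        left = max(i - 1, 0)       # reflecting boundary at site 0
        right = min(i + 1, n_sites - 1)
        P[i, right] += p_right
        P[i, left] += 1.0 - p_right
    transient = [i for i in range(n_sites) if i != target]
    Q = P[np.ix_(transient, transient)]
    t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    return dict(zip(transient, t))

print(mfpt_to_target(n_sites=20, target=15)[0])  # MFPT starting from site 0
```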
Hierarchy Bayesian model based services awareness of high-speed optical access networks
NASA Astrophysics Data System (ADS)
Bai, Hui-feng
2018-03-01
As the speed of optical access networks soars and services multiply, the service-supporting ability of these networks suffers greatly from a shortage of service awareness. To solve this problem, a hierarchical Bayesian model based services awareness mechanism is proposed for high-speed optical access networks. The approach builds a hierarchical Bayesian model that mirrors the structure of typical optical access networks. The proposed scheme conducts simple services awareness in each optical network unit (ONU) and performs complex services awareness from a whole-system view in the optical line terminal (OLT). Simulation results show that the proposed scheme achieves better quality of service (QoS) in terms of packet loss rate and time delay.
Gravitational microlensing of gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1993-01-01
A Monte Carlo code is developed to calculate gravitational microlensing in three dimensions when the lensing optical depth is low or moderate (not greater than 0.25). The code calculates positions of microimages and time delays between the microimages. The majority of lensed gamma-ray bursts should show a simple double-burst structure, as predicted by a single point mass lens model. A small fraction should show complicated multiple events due to the collective effects of several point masses (black holes). Cosmological models with a significant fraction of mass density in massive compact objects can be tested by searching for microlensing events in the current BATSE data. Our catalog, generated from 10,000 Monte Carlo models, is accessible through the computer network. The catalog can be used to take realistic selection effects into account.
Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen
2003-09-01
We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock.
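The auxiliary differential equation idea can be sketched in zero dimensions for a single-pole Debye medium: the polarization obeys tau * dP/dt + P = eps0 * deps * E, and a Crank-Nicolson discretization of that ODE yields an unconditionally stable update. The coupling to the finite-element potential solve is omitted here, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Single-pole Debye polarization: tau * dP/dt + P = eps0 * deps * E.
# Crank-Nicolson (trapezoidal) discretization gives an unconditionally
# stable two-level update for P given the field at both time levels.
eps0, deps, tau, dt = 8.854e-12, 10.0, 1e-6, 1e-8

def step_polarization(P, E_new, E_old):
    a = tau / dt
    return ((a - 0.5) * P + 0.5 * eps0 * deps * (E_new + E_old)) / (a + 0.5)

# Drive with a step field and watch P relax toward eps0 * deps * E.
P, E_old = 0.0, 0.0
for n in range(1000):
    E_new = 1.0
    P = step_polarization(P, E_new, E_old)
    E_old = E_new
print(P / (eps0 * deps))  # approaches 1 once t >> tau
```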
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
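A toy one-dimensional analogue of the model-based estimation step might look like the following: the predicted signal is a sum of echo pulses whose delays stand in for wall positions, and the fit minimizes the mean-square error between prediction and data. The Gaussian pulse shape, the scipy-based optimizer, and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 1, 500)

def predicted(params, t):
    """Sum of Gaussian echo pulses; each delay stands in for a wall position."""
    signal = np.zeros_like(t)
    for delay, amp in zip(params[::2], params[1::2]):
        signal += amp * np.exp(-0.5 * ((t - delay) / 0.01) ** 2)
    return signal

# Simulated data: three "walls" at known delays, plus measurement noise.
rng = np.random.default_rng(1)
true = np.array([0.2, 1.0, 0.45, 0.6, 0.7, 0.4])  # (delay, amplitude) pairs
data = predicted(true, t) + rng.normal(0, 0.02, t.size)

# Minimize the residual between predicted signal and data.
fit = least_squares(lambda p: predicted(p, t) - data,
                    x0=[0.15, 0.8, 0.5, 0.5, 0.75, 0.5])
print(fit.x.reshape(-1, 2))  # recovered (delay, amplitude) pairs
```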
Inkjet Deposition of Layer by Layer Assembled Films
Andres, Christine M.; Kotov, Nicholas A.
2010-01-01
Layer-by-layer assembly (LBL) can create advanced composites with exceptional properties unavailable by other means, but the laborious deposition process and multiple dipping cycles hamper their utilization in microtechnologies and electronics. Multiple rinse steps provide both structural control and thermodynamic stability to LBL multilayers, but they severely limit practical applications and add substantially to processing time and waste. Here we demonstrate that by employing inkjet technology one can deliver the necessary quantities of LBL components required for film build-up without excess, eliminating the need for repetitive rinsing steps. This feature differentiates this approach from all other recognized LBL modalities. Using a model system of negatively charged gold nanoparticles and positively charged poly(diallyldimethylammonium) chloride, the material stability and the nanoscale control over thickness and particle coverage offered by the inkjet LBL technique are shown to be equal to or better than those of multilayers made with traditional dipping cycles. The opportunity for fast deposition of complex metallic patterns using a simple inkjet printer was also shown. The additive nature of LBL deposition, based on the formation of insoluble nanoparticle-polyelectrolyte complexes of various compositions, provides an excellent opportunity for versatile, multi-component, and non-contact patterning for the simple production of stratified patterns that are much needed in advanced devices. PMID:20863114
Liu, Yansong; Yu, Xinnian; Yang, Bixiu; Zhang, Fuquan; Zou, Wenhua; Na, Aiguo; Zhao, Xudong; Yin, Guangzhong
2017-03-21
Overgeneral autobiographical memory has been identified as a risk factor for the onset and maintenance of depression. However, little is known about the underlying mechanisms that might explain the overgeneral autobiographical memory phenomenon in depression. The purpose of this study was to test the mediation effects of rumination on the relationship between overgeneral autobiographical memory and depressive symptoms. Specifically, the mediation effects of the brooding and reflection subtypes of rumination were examined in patients with major depressive disorder. Eighty-seven patients with major depressive disorder completed the 17-item Hamilton Depression Rating Scale, the Ruminative Response Scale, and the Autobiographical Memory Test. Bootstrap mediation analyses for simple and multiple mediation models were conducted using the PROCESS macro. Simple mediation analysis showed that rumination significantly mediated the relationship between overgeneral autobiographical memory and depressive symptoms. Multiple mediation analyses showed that brooding, but not reflection, significantly mediated this relationship. Our results indicate that global rumination partly mediates the relationship between overgeneral autobiographical memory and depressive symptoms in patients with major depressive disorder, and that this mediating role is mainly due to the maladaptive brooding subtype of rumination.
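For readers unfamiliar with the procedure, this is a minimal numpy sketch of a percentile-bootstrap test of an indirect effect (the a*b product) of the kind PROCESS-style mediation analyses compute; the data are synthetic and the variable names are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
ogm = rng.normal(size=n)                      # predictor (e.g., memory score)
brooding = 0.5 * ogm + rng.normal(size=n)     # mediator
depression = 0.4 * brooding + 0.1 * ogm + rng.normal(size=n)  # outcome

def indirect_effect(x, m, y):
    """a*b: a from regressing m on x; b from regressing y on m and x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

# Percentile bootstrap: resample cases, recompute a*b, take the 95% CI.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(ogm[idx], brooding[idx], depression[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 => mediation
```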
Attentional priority determines working memory precision.
Klyszejko, Zuzanna; Rahmati, Masih; Curtis, Clayton E
2014-12-01
Visual working memory is a system used to hold information actively in mind for a limited time. The number of items and the precision with which we can store information have limits that define working memory capacity. How much control do we have over the precision with which we store information when faced with these severe capacity limitations? Here, we tested the hypothesis that rank-ordered attentional priority determines the precision of multiple working memory representations. We conducted two psychophysical experiments that manipulated the priority of multiple items in a two-alternative forced-choice (2AFC) distance-discrimination task. In Experiment 1, we varied the probabilities with which memorized items were likely to be tested. To generalize the effects of priority beyond simple cueing, in Experiment 2 we manipulated priority by varying monetary incentives contingent upon successful memory for the items tested. Moreover, we illustrate our hypothesis using a simple model that distributes attentional resources across items with rank-ordered priorities. Indeed, we found evidence in both experiments that priority affects the precision of working memory in a monotonic fashion. Our results demonstrate that representations of priority may provide a mechanism by which resources can be allocated to increase the precision with which we encode and briefly store information. Copyright © 2014 Elsevier Ltd. All rights reserved.
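The resource model can be sketched as follows: encoding precision (inverse variance) is proportional to the share of a fixed resource assigned by priority, so 2AFC distance-discrimination accuracy should rise monotonically with an item's priority. This is a hedged toy version under those assumptions, not the authors' model code.

```python
import numpy as np

rng = np.random.default_rng(3)

def discrimination_accuracy(priority, total_resource=1.0,
                            distance=1.0, n_trials=20000):
    """Encoding precision (1/variance) is proportional to the share of a
    fixed resource allocated by priority; accuracy in a 2AFC distance
    discrimination rises monotonically with that share."""
    sigma = np.sqrt(1.0 / (total_resource * priority))
    # Remembered positions of two items separated by `distance`.
    a = rng.normal(0.0, sigma, n_trials)
    b = rng.normal(distance, sigma, n_trials)
    return np.mean(b > a)   # correct when the remembered order is preserved

for p in [0.1, 0.3, 0.6]:   # rank-ordered priority shares (summing to <= 1)
    print(p, discrimination_accuracy(p))
```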
Asaithamby, Aroumougame; Hu, Burong; Delgado, Oliver; Ding, Liang-Hao; Story, Michael D.; Minna, John D.; Shay, Jerry W.; Chen, David J.
2011-01-01
DNA damage and consequent mutations initiate the multistep carcinogenic process. Differentiated cells have a reduced capacity to repair DNA lesions, but the biological impact of unrepaired DNA lesions in differentiated lung epithelial cells is unclear. Here, we used a novel organotypic human lung three-dimensional (3D) model to investigate the biological significance of unrepaired DNA lesions in differentiated lung epithelial cells. We showed, consistent with existing notions, that the kinetics of loss of simple double-strand breaks (DSBs) were significantly slower in organotypic 3D culture than in two-dimensional (2D) culture. Strikingly, we found that, unlike simple DSBs, a majority of complex DNA lesions were irreparable in organotypic 3D culture. Expression levels of multiple DNA damage repair pathway genes were significantly reduced in organotypic 3D culture compared with 2D culture, providing molecular evidence for defective DNA damage repair in organotypic culture. Further, when differentiated cells with unrepaired DNA lesions re-entered the cell cycle, they manifested a spectrum of gross chromosomal aberrations in mitosis. Our data suggest that downregulation of multiple DNA repair pathway genes in differentiated cells renders them vulnerable to DSBs, promoting genome instability that may lead to carcinogenesis. PMID:21421565
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
Non-Abelian Bremsstrahlung and Azimuthal Asymmetries in High Energy p+A Reactions
Gyulassy, Miklos; Vitev, Ivan Mateev; Levai, Peter; ...
2014-09-25
Here we apply the GLV reaction operator solution to the Vitev-Gunion-Bertsch (VGB) boundary conditions to compute the all-order-in-nuclear-opacity non-Abelian gluon bremsstrahlung of event-by-event fluctuating beam jets in nuclear collisions. We evaluate analytically the azimuthal Fourier moments of single-gluon, $v_n^M\{1\}$, and even-number $2\ell$ gluon, $v_n^M\{2\ell\}$, inclusive distributions in high energy p+A reactions as a function of the harmonic $n$, the target recoil cluster number $M$, and the gluon number $2\ell$, at RHIC and LHC. Multiple resolved clusters of recoiling target beam jets together with the projectile beam jet form Color Scintillation Antenna (CSA) arrays that lead to characteristic boost non-invariant trapezoidal rapidity distributions in asymmetric B+A nuclear collisions. The scaling of the intrinsically azimuthally anisotropic and long-range-in-$\eta$ nature of the non-Abelian bremsstrahlung leads to $v_n$ moments that are similar to results from hydrodynamic models, but due entirely to non-Abelian wave interference phenomena sourced by the fluctuating CSA. Our analytic non-flow solutions are similar to recent numerical saturation model predictions but differ by predicting a simple power-law hierarchy of both even and odd $v_n$ without invoking $k_T$ factorization. A test of the CSA mechanism is the predicted nearly linear $\eta$ rapidity dependence of $v_n(k_T,\eta)$. Non-Abelian beam jet bremsstrahlung may thus provide a simple analytic solution to the Beam Energy Scan (BES) puzzle of the near $\sqrt{s}$ independence of $v_n(p_T)$ moments observed down to 10 AGeV, where large-$x$ valence quark beam jets dominate inelastic dynamics. Recoil bremsstrahlung from multiple independent CSA clusters could also provide a partial explanation for the unexpected similarity of $v_n$ in p(D)+A and non-central A+A at the same $dN/d\eta$ multiplicity as observed at RHIC and LHC.
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.
1985-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.
1983-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2m).
Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S
1985-08-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.
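The squaring property these reports exploit is easy to demonstrate in software: in a normal-basis representation of GF(2^m), squaring is just a cyclic shift of the coordinate vector, which is essentially free in hardware. A general Massey-Omura multiplication additionally needs the field-specific multiplication matrix, which is omitted here; the sketch below shows only the squaring step, with m = 5 as an assumed example.

```python
def nb_square(coords):
    """Square an element of GF(2^m) given its normal-basis coordinates.
    With basis (beta, beta^2, beta^4, ..., beta^(2^(m-1))), the Frobenius
    map x -> x^2 permutes the basis elements cyclically, so squaring is
    a one-position rotation of the coordinate vector."""
    return coords[-1:] + coords[:-1]

# Repeated squaring is repeated rotation -- no gates beyond wiring.
x = [1, 0, 1, 1, 0]          # coordinates over a normal basis of GF(2^5)
print(nb_square(x))          # [0, 1, 0, 1, 1]
print(nb_square(nb_square(x)))
```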
Lancioni, Giulio E; O'Reilly, Mark F; Singh, Nirbhay N; Campodonico, Francesca; Marziani, Monia; Oliva, Doretta
2004-01-01
This study assessed a microswitch program to foster simple foot and leg movements in 2 adult wheelchair users with multiple disabilities. The participants' mood (indices of happiness) was recorded throughout the study. Data showed that participants rapidly increased the target foot and leg movements and maintained those movements during the course of the study, which lasted about 4.5 months. With regard to indices of happiness, 1 participant showed a fairly modest increase during the intervention while the other participant showed a substantial increase. Implications of the findings are discussed.
Zhao, Yi; Cao, Xiangyu; Gao, Jun; Liu, Xiao; Li, Sijia
2016-05-16
We demonstrate a simple reconfigurable metasurface with multiple functions. Anisotropic tiles are investigated and manufactured as fundamental elements. Then, the tiles are combined in a certain sequence to construct a metasurface. Each of the tiles can be adjusted independently which is like a jigsaw puzzle and the whole metasurface can achieve diverse functions by different layouts. For demonstration purposes, we realize polarization conversion, anomalous reflection and diffusion by a jigsaw puzzle metasurface with 6 × 6 pieces of anisotropic tile. Simulated and measured results prove that our method offers a simple and effective strategy for metasurface design.
Simple method for assembly of CRISPR synergistic activation mediator gRNA expression array.
Vad-Nielsen, Johan; Nielsen, Anders Lade; Luo, Yonglun
2018-05-20
When studying complex interconnected regulatory networks, effective methods for simultaneously manipulating the expression of multiple genes are paramount. Previously, we developed a simple method for generation of an all-in-one CRISPR gRNA expression array. We here present a Golden Gate Assembly-based system for constructing synergistic activation mediator (SAM)-compatible CRISPR/dCas9 gRNA expression arrays for the simultaneous activation of multiple genes. Using this system, we demonstrated the simultaneous activation of the transcription factors TWIST, SNAIL, SLUG, and ZEB1 in a human breast cancer cell line. Copyright © 2018 Elsevier B.V. All rights reserved.
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex, based on the new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher-level abstract (meta-level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators, which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.
A Simple Method for Calculating Clebsch-Gordan Coefficients
ERIC Educational Resources Information Center
Klink, W. H.; Wickramasekara, S.
2010-01-01
This paper presents a simple method for calculating Clebsch-Gordan coefficients for the tensor product of two unitary irreducible representations (UIRs) of the rotation group. The method also works for multiplicity-free irreducible representations appearing in the tensor product of any number of UIRs of the rotation group. The generalization to…
Losoya, Sandra H.; Knight, George P.; Chassin, Laurie; Little, Michelle; Vargas-Chanes, Delfino; Mauricio, Anne; Piquero, Alex
2009-01-01
This study examines the longitudinal relations of multiple dimensions of acculturation and enculturation to heavy episodic drinking and marijuana use in a sample of 300 male, Mexican-American, serious juvenile offenders. We track trajectories between ages 15 and 20 and also consider the effects of participants’ time spent residing in supervised settings during these years. Results showed some (although not entirely consistent) support for the hypothesis that bicultural adaptation is most functional in terms of lowered substance use involvement. The current findings demonstrate the importance of examining these relations longitudinally and among multiple dimensions of acculturation and enculturation, and they call into question simple models that suggest that greater acculturation is associated with greater substance use among Mexican-American adolescents. PMID:20198119
Wang, Xu; Le, Anh-Thu; Yu, Chao; ...
2016-03-30
We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have been averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. Lastly, a simple single-scattering model is proposed and benchmarked to describe the photoionization process and to perform the retrieval using a multiple-parameter fitting method.
Particle image velocimetry based on wavelength division multiplexing
NASA Astrophysics Data System (ADS)
Tang, Chunxiao; Li, Enbang; Li, Hongqiang
2018-01-01
This paper introduces a technical approach to wavelength division multiplexing (WDM) based particle image velocimetry (PIV). It is designed to measure transient flows with different scales of velocity by capturing multiple particle images in one exposure. These images are separated by wavelength, so the pulse separation time is not limited by the frame rate of the camera. A triple-pulsed PIV system has been built to prove the feasibility of WDM-PIV; this is demonstrated in a sieve-plate extraction column model by simultaneously measuring the fast flow in the downcomer and the slow vortices inside the plates. A simple displacement/velocity field combination method has also been developed. The main constraints on WDM-PIV are the limited wavelength choices of available light sources and cameras. The WDM technique represents a feasible way to realize multiple-pulsed PIV.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm that allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and of the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution.
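A generic DIIS accelerator can be sketched independently of the WHAM details: store recent iterates and residuals r_i = g(x_i) - x_i, choose mixing coefficients that minimize the norm of the combined residual subject to summing to one, and extrapolate. The toy fixed-point problem below stands in for the WHAM self-consistency equations; this is a sketch of the general method, not the authors' implementation.

```python
import numpy as np

def diis_solve(g, x0, history=5, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- g(x) with DIIS:
    extrapolate over stored iterates by minimizing the norm of the
    combined residual r_i = g(x_i) - x_i, subject to sum(c) = 1."""
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gx = g(x)
        r = gx - x
        if np.linalg.norm(r) < tol:
            return x
        xs.append(gx); rs.append(r)
        xs, rs = xs[-history:], rs[-history:]   # keep a short history
        k = len(rs)
        # Augmented normal equations: [B 1; 1 0][c; lam] = [0; 1],
        # where B_ij = <r_i, r_j> and lam is the Lagrange multiplier.
        B = np.array([[np.dot(ri, rj) for rj in rs] for ri in rs])
        A = np.block([[B, np.ones((k, 1))],
                      [np.ones((1, k)), np.zeros((1, 1))]])
        rhs = np.zeros(k + 1); rhs[-1] = 1.0
        c = np.linalg.solve(A, rhs)[:k]
        x = sum(ci * xi for ci, xi in zip(c, xs))  # extrapolated iterate
    return x

# Toy self-consistency problem standing in for the WHAM equations.
g = lambda x: np.cos(x)               # fixed point near 0.739085
print(diis_solve(g, np.array([0.0])))
```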
NASA Astrophysics Data System (ADS)
Gerhard, Christoph; Adams, Geoff
2015-10-01
Geometric optics is at the heart of optics teaching. Some of us may remember using pins and string to test the simple lens equation at school. Matters get more complex at undergraduate/postgraduate levels as we are introduced to paraxial rays, real rays, wavefronts, aberration theory and much more. Software is essential for the later stages, and the right software can profitably be used even at school. We present two free PC programs which have been widely used in optics teaching and have been further developed in close cooperation with lecturers/professors in order to address the current content of the curricula for optics, photonics and lasers in higher education. PreDesigner is a single thin-lens modeller. It illustrates the simple lens law with construction rays and then allows the user to include field size and aperture. Sliders can be used to adjust key values with instant graphical feedback. This tool thus represents a helpful teaching medium for the visualization of basic interrelations in optics. WinLens3DBasic can model multiple thin or thick lenses with real glasses. It shows the system foci, principal planes and nodal points, gives paraxial ray-trace values, details the Seidel aberrations, and offers real ray tracing and many forms of analysis. It is simple to reverse lenses and model tilts and decenters. This tool therefore provides a good base for learning lens design fundamentals. Much work has been put into making these features easy to use and into offering opportunities to enhance the student's background understanding.
Welch, Catherine A; Petersen, Irene; Bartlett, Jonathan W; White, Ian R; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Carpenter, James
2014-01-01
Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures and ignore the temporal ordering of the data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, alternative strategies must be considered. One approach is to divide the data into time blocks and implement MI independently at each block. An alternative approach is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues by conditioning only on measurements that are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared the efficiency of estimated coefficients from a complete-records analysis, MI of data in the baseline time block and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of the available data, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases the plausibility of the missing at random assumption, by using repeated measures over time of variables whose baseline values may be missing. PMID:24782349
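The local-in-time conditioning idea can be mocked up as follows, using scikit-learn's IterativeImputer as a stand-in FCS engine: each time block is imputed from a window containing only itself and its immediate neighbours. This is a schematic of the idea under assumed block sizes, not the two-fold FCS algorithm itself (which, among other things, iterates over blocks repeatedly rather than in a single pass).

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_blockwise(data, n_blocks, vars_per_block):
    """Impute each time block using only itself and its immediate
    neighbours, echoing the two-fold FCS idea of conditioning on
    measurements that are local in time."""
    out = data.copy()
    for b in range(n_blocks):
        lo = max(b - 1, 0) * vars_per_block
        hi = min(b + 2, n_blocks) * vars_per_block
        window = IterativeImputer(max_iter=10, random_state=0).fit_transform(data[:, lo:hi])
        # Copy back only the columns belonging to block b.
        out[:, b * vars_per_block:(b + 1) * vars_per_block] = \
            window[:, b * vars_per_block - lo:(b + 1) * vars_per_block - lo]
    return out

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10 * 3))                 # 10 time blocks, 3 variables each
X[rng.random(X.shape) < 0.3] = np.nan              # ~30% missing completely at random
print(np.isnan(impute_blockwise(X, 10, 3)).sum())  # 0: all gaps filled
```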
Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications.
Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres
2016-01-01
We present the results of a gamified mobile device arithmetic application which allowed us to collect a vast amount of data on simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived from a long tradition of investigation: the size effect, tie effect, size-tie interaction effect, five-effect, RT-error rate correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (i.e., 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted for by the fact that it is easier to perform fewer additions of the largest number (i.e., 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (i.e., "seis por cuatro es veinticuatro") are performed faster. Above and beyond these results, our study bears an important practical conclusion as proof of concept: participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format.
Asay window: A new spall diagnostic
NASA Astrophysics Data System (ADS)
McCluskey, Craig W.; Wilke, Mark D.; Anderson, William W.; Byers, Mark E.; Holtkamp, David B.; Rigg, Paulo A.; Furnish, Michael D.; Romero, Vincent T.
2006-11-01
By changing from the metallic foil of the Asay foil diagnostic, which can detect ejecta from a shocked surface, to a lithium fluoride (LiF) or polymethyl methacrylate (PMMA) window, it is possible to detect multiple spall layers and interlayer rubble. Past experiments to demonstrate this diagnostic used high explosives (HEs) to shock metals and produce multiple spall layers. Because the exact characteristics of HE-induced spall layers cannot be predetermined, two issues exist in the quantitative interpretation of the data: first, to what level of fidelity is the Asay window method capable of providing quantitative information about spall layers, possibly separated by rubble; and second, contingent on the first, can an analytic technique be developed to convert the data into a meaningful description of spall from a given experiment? In this article, we address the first issue. A layered projectile fired from a gas gun was used to test the new diagnostic's accuracy and repeatability. We impacted a LiF or PMMA window, viewed by a velocity interferometer system for any reflector (VISAR) probe, with a projectile consisting of four thin stainless steel disks spaced 200 μm apart with either vacuum or polyethylene between them. The window/surface interface velocity measured with the VISAR probe was compared with calculations. The good agreement observed between the adjusted calculation and the measured data indicates that, in principle and given enough prior information, it is possible to use Asay window data to model a density distribution of spalled material with simple hydrodynamic models and only simple adjustments to nominal predictions.
ViSimpl: Multi-View Visual Analysis of Brain Simulation Data
Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis
2016-01-01
After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In this context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes, supporting different data aggregation and disaggregation operations and also giving focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062
Fischer, Thomas; Fischer, Susanne; Himmel, Wolfgang; Kochen, Michael M; Hummers-Pradier, Eva
2008-01-01
The influence of patient characteristics on family practitioners' (FPs') diagnostic decision making has mainly been investigated using indirect methods such as vignettes or questionnaires. Direct observation, borrowed from social and cultural anthropology, may be an alternative method for describing FPs' real-life behavior and may help in gaining insight into how FPs diagnose respiratory tract infections, which are frequent in primary care. Our objective was to clarify FPs' diagnostic processes when treating patients suffering from symptoms of respiratory tract infection. This direct observation study was performed in 30 family practices using a checklist for patient complaints, history taking, physical examination, and diagnoses. The influence of patients' symptoms and complaints on the FPs' physical examination and diagnosis was calculated by logistic regression analyses. Dummy variables based on combinations of symptoms and complaints were constructed and tested against saturated (full) and backward regression models. In total, 273 patients (median age 37 years, 51% women) were included. The median number of symptoms described was 4 per patient, and most information was provided at the patients' own initiative. Multiple logistic regression analysis showed a strong association between patients' complaints and the physical examination. Frequent diagnoses were upper respiratory tract infection (URTI)/common cold (43%), bronchitis (26%), sinusitis (12%), and tonsillitis (11%). There were no significant statistical differences between "simple heuristic" models and saturated regression models for the diagnoses of bronchitis, sinusitis, and tonsillitis, indicating that simple heuristics are probably used by the FPs, whereas "URTI/common cold" was better explained by the full model. FPs tended to make their diagnoses based on a few patient symptoms and a limited physical examination. Simple heuristic models were almost as powerful in explaining most diagnoses as saturated models. Direct observation allowed for the study of decision making under real conditions, yielding both quantitative data and "qualitative" information about the FPs' performance. It is important for investigators to be aware of the specific disadvantages of the method (e.g., a possible observer effect).
Why the Long Face? The Mechanics of Mandibular Symphysis Proportions in Crocodiles
Walmsley, Christopher W.; Smits, Peter D.; Quayle, Michelle R.; McCurry, Matthew R.; Richards, Heather S.; Oldfield, Christopher C.; Wroe, Stephen; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Background: Crocodilians exhibit a spectrum of rostral shape from long-snouted (longirostrine) to short-snouted (brevirostrine) morphologies. The proportional length of the mandibular symphysis correlates consistently with rostral shape, forming as much as 50% of the mandible’s length in longirostrine forms but 10% in brevirostrine crocodilians. Here we analyse the structural consequences of an elongate mandibular symphysis in relation to feeding behaviours. Methods/Principal Findings: Simple beam and high-resolution Finite Element (FE) models of seven species of crocodile were analysed under loads simulating biting, shaking and twisting. Using beam theory, we statistically compared multiple hypotheses of which morphological variables should control the biomechanical response. Brevi- and mesorostrine morphologies were found to consistently outperform longirostrine types when subject to equivalent biting, shaking and twisting loads. The best predictors of performance for biting and twisting loads in FE models were overall length and symphyseal length respectively; for shaking loads, symphyseal length and a multivariate measurement of shape (PC1, which is strongly but not exclusively correlated with symphyseal length) were equally good predictors. Linear measurements were better predictors than multivariate measurements of shape for biting and twisting loads. For both biting and shaking loads, but not for twisting, simple beam models agree with the best performance predictors in FE models. Conclusions/Significance: Combining beam and FE modelling allows a priori hypotheses about the importance of morphological traits on biomechanics to be statistically tested. Short mandibular symphyses perform well under loads used for feeding upon large prey, but elongate symphyses incur high strains under equivalent loads, underlining the structural constraints on prey size in the longirostrine morphotype. The biomechanics of the crocodilian mandible are largely consistent with beam theory and can be predicted from simple morphological measurements, suggesting that crocodilians are a useful model for investigating the palaeobiomechanics of other aquatic tetrapods. PMID:23342027
On heart rate variability and autonomic activity in homeostasis and in systemic inflammation.
Scheff, Jeremy D; Griffel, Benjamin; Corbett, Siobhan A; Calvano, Steve E; Androulakis, Ioannis P
2014-06-01
Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
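One of the confounds discussed, a saturating transduction stage, can be reproduced in a toy model: an oscillating autonomic input passes through a tanh nonlinearity before setting the RR interval, so raising the mean input level reduces measured HRV even though the underlying oscillation amplitude never changes. All functional forms and parameter values below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def rr_variability(mean_activity, amp=1.0, n=4096, f=0.1, fs=4.0):
    """Standard deviation of simulated RR intervals when an oscillating
    autonomic input passes through a saturating (tanh) transduction
    stage. Raising the mean input pushes the oscillation into the flat
    part of the nonlinearity, so HRV falls even though the underlying
    oscillation amplitude is unchanged."""
    t = np.arange(n) / fs
    activity = mean_activity + amp * np.sin(2 * np.pi * f * t)
    rr = 0.8 + 0.2 * np.tanh(activity)   # RR interval in seconds
    return rr.std()

for mean in [0.0, 1.0, 2.0, 3.0]:        # e.g., endotoxemia raising mean activity
    print(mean, round(rr_variability(mean), 4))  # HRV falls monotonically
```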
SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.
Jimenez-Romero, Cristian; Johnson, Jeffrey
2017-01-01
The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to more sophisticated and biologically accurate Hodgkin-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time-consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. Accomplishing these tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using Netlogo, a multi-agent simulation and programming environment (educational software that simplifies the study of and experimentation with complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional model of SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
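SpikingLab itself is implemented in Netlogo; as a language-neutral illustration of the integrate-and-fire family its engine simplifies (a hypothetical sketch, not the tool's actual code or default parameters):

```python
import numpy as np

def lif_run(input_current, dt=1.0, tau=20.0, v_rest=0.0,
            v_reset=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire: integrate input, fire and reset at threshold.

    Parameters are in arbitrary illustrative units, not SpikingLab's defaults.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau   # leaky integration
        if v >= v_thresh:                        # threshold crossing
            spikes.append(step)
            v = v_reset                          # reset after the spike
    return spikes

# Constant suprathreshold drive produces regular, periodic firing.
print(lif_run(np.full(200, 1.5)))
```

A full SNN engine adds synaptic delays and STDP weight updates on top of this per-neuron update, but the membrane dynamics above are the core loop.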
Health Belief Model and Reasoned Action Theory in Predicting Water Saving Behaviors in Yazd, Iran
Morowatisharifabad, Mohammad Ali; Momayyezi, Mahdieh; Ghaneian, Mohammad Taghi
2012-01-01
Background: People's behaviors and intentions about healthy behaviors depend on their beliefs, values, and knowledge about the issue. Various models of health education are used in determining predictors of different healthy behaviors, but their efficacy for cultural behaviors, such as water saving behaviors, has not been studied. The study was conducted to explain water saving behaviors in Yazd, Iran on the basis of the Health Belief Model and Reasoned Action Theory. Methods: The cross-sectional study used random cluster sampling to recruit 200 heads of households. The survey questionnaire was tested for its content validity and reliability. Analysis of data included descriptive statistics, simple correlation, and hierarchical multiple regression. Results: Simple correlations between water saving behaviors and Reasoned Action Theory and Health Belief Model constructs were statistically significant. Health Belief Model and Reasoned Action Theory constructs explained 20.80% and 8.40% of the variance in water saving behaviors, respectively. Perceived barriers were the strongest predictor. Additionally, there was a statistically significant positive correlation between water saving behaviors and intention. Conclusion: In designing interventions aimed at water waste prevention, barriers to water saving behaviors should be addressed first, followed by people's attitude towards water saving. The Health Belief Model, with the exception of its perceived severity and benefits constructs, is more powerful than Reasoned Action Theory in predicting water saving behavior and may be used as a framework for educational interventions aimed at improving water saving behaviors. PMID:24688927
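A minimal sketch of the analysis strategy described above: hierarchical multiple regression, in which one theory's constructs are entered as a block before the other's and the increment in explained variance is compared. The data and variable names are synthetic placeholders, not the study's dataset:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Placeholder scores standing in for HBM and TRA construct measures.
hbm = rng.normal(size=(n, 3))   # e.g., barriers, susceptibility, cues
tra = rng.normal(size=(n, 2))   # e.g., attitude, subjective norm
behavior = (hbm @ np.array([-0.5, 0.2, 0.1])
            + tra @ np.array([0.2, 0.1])
            + rng.normal(size=n))

# Block 1: Health Belief Model constructs only.
m1 = sm.OLS(behavior, sm.add_constant(hbm)).fit()
# Block 2: add Reasoned Action Theory constructs; the R-squared increment
# is the added explanatory value of the second block.
m2 = sm.OLS(behavior, sm.add_constant(np.hstack([hbm, tra]))).fit()
print(m1.rsquared, m2.rsquared, m2.rsquared - m1.rsquared)
```

The per-block R-squared values play the role of the 20.80% and 8.40% variance figures reported in the Results.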
NASA Astrophysics Data System (ADS)
Choudhury, Vishal; Prakash, Roopa; Nagarjun, K. P.; Supradeepa, V. R.
2018-02-01
A simple and powerful method using continuous wave supercontinuum lasers is demonstrated to perform spectrally resolved, broadband frequency response characterization of photodetectors in the NIR band. In contrast to existing techniques, this method requires only a simple system: a standard continuous wave (CW) high-power fiber laser source and an RF spectrum analyzer. From our recent work, we summarize methods to easily convert any high-power fiber laser into a CW supercontinuum. In the time domain, these sources exhibit interesting properties down to the femtosecond time scale. This enables measurement of the broadband frequency response of photodetectors, while the wide optical spectrum of the supercontinuum can be spectrally filtered to obtain this information in a spectrally resolved fashion. The method involves examining the RF spectrum of the output of a photodetector under test when illuminated by the supercontinuum. Using prior knowledge of the RF spectrum of the source, the frequency response can be calculated. We utilize two techniques for calibration of the source spectrum, one using a prior measurement and the other relying on a fitted model. Here, we characterize multiple photodetectors from 150 MHz bandwidth to >20 GHz bandwidth at multiple bands in the NIR region. We utilize a supercontinuum source spanning over 700 nm of bandwidth, from 1300 nm to 2000 nm. For spectrally resolved measurement, we utilize multiple wavelength bands, such as those around 1400 nm and 1600 nm. Interesting behavior was observed in the frequency response of the photodetectors when comparing broadband spectral excitation with narrower band excitation.
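A toy numerical sketch of the calibration logic described (not the authors' code; the spectral shapes are synthetic stand-ins): given the source's RF spectrum, known from a prior measurement or a fitted model, the detector's response is recovered by dividing the measured RF spectrum by it:

```python
import numpy as np

freqs_ghz = np.linspace(0.1, 25, 500)   # RF spectrum analyzer axis

# Synthetic stand-ins: the CW supercontinuum's intrinsic RF spectrum
# (assumed known from calibration) and a detector response to recover.
source_rf = 1.0 / (1.0 + (freqs_ghz / 40.0)**2)
true_response = 1.0 / (1.0 + (freqs_ghz / 12.0)**2)  # ~12 GHz detector
measured_rf = source_rf * true_response

# Divide out the source: response = measured / source.
estimated_response = measured_rf / source_rf

# Locate the 3 dB bandwidth of the recovered response.
db = 10 * np.log10(estimated_response / estimated_response[0])
print("3 dB bandwidth ~", freqs_ghz[np.argmin(np.abs(db + 3))], "GHz")
```

Spectrally resolved measurements repeat this division after optically filtering the supercontinuum to the wavelength band of interest.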
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.
1999-01-01
The distribution and variation of oxygen isotopes in seawater are calculated using the Goddard Institute for Space Studies global ocean model. Simple ecological models are used to estimate planktonic foraminiferal abundance as a function of depth, water column temperature, season, light intensity, and density stratification. These models are combined to forward model the isotopic signals recorded in calcareous ocean sediment. The sensitivity of the results to changes in foraminiferal ecology, secondary calcification, and dissolution is also examined. Simulated present-day isotopic values for ecologies relevant to multiple species compare well with core-top data. Hindcasts of sea surface temperature and salinity are made from time series of the modeled carbonate isotope values as the model climate changes. Paleoclimatic inferences from these carbonate isotope records are strongly affected by erroneous assumptions concerning the covariations of temperature, salinity, and δ18O_w. Habitat-imposed biases are less important, although errors due to temperature-dependent abundances can be significant.
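As a hedged illustration of the forward-modeling step (the GISS ocean model itself is far more elaborate), calcite δ18O can be predicted from temperature and seawater δ18O_w using a standard paleotemperature relation; the sketch below uses Shackleton's (1974) quadratic form, which may differ from the calibration actually used in the paper:

```python
import numpy as np

def calcite_d18o(temp_c, d18o_seawater):
    """Forward model: predicted calcite d18O (vs. PDB) from temperature
    (deg C) and seawater d18O_w, using Shackleton's (1974) relation
    T = 16.9 - 4.38*(dc - dw) + 0.10*(dc - dw)**2, solved for dc.
    """
    # Solve 0.10*x**2 - 4.38*x + (16.9 - T) = 0 for x = dc - dw,
    # taking the physically meaningful (smaller) root.
    a, b = 0.10, -4.38
    c = 16.9 - np.asarray(temp_c, dtype=float)
    x = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    return d18o_seawater + x

print(calcite_d18o(25.0, 0.0))   # ~ -1.8 permil at 25 deg C, d18O_w = 0
```

Hindcasting runs this relation in reverse, which is why wrong assumptions about how temperature, salinity, and δ18O_w covary propagate directly into the inferred climate.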
López-Guerra, Enrique A
2014-01-01
We examine different approaches to model viscoelasticity within atomic force microscopy (AFM) simulation. Our study ranges from very simple linear spring–dashpot models to more sophisticated nonlinear systems that are able to reproduce fundamental properties of viscoelastic surfaces, including creep, stress relaxation and the presence of multiple relaxation times. Some of the models examined have been previously used in AFM simulation, but their applicability to different situations has not yet been examined in detail. The behavior of each model is analyzed here in terms of force–distance curves, dissipated energy and any inherent unphysical artifacts. We focus in this paper on single-eigenmode tip–sample impacts, but the models and results can also be useful in the context of multifrequency AFM, in which the tip trajectories are very complex and there is a wider range of sample deformation frequencies (descriptions of tip–sample model behaviors in the context of multifrequency AFM require detailed studies and are beyond the scope of this work). PMID:25551043
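A minimal sketch of the simplest model family examined: a three-element standard linear solid (an equilibrium spring in parallel with a Maxwell spring–dashpot arm) relaxing under a step strain. The parameters are illustrative, and this is a generic linear model, not any of the paper's specific tip–sample formulations:

```python
import numpy as np

def sls_relaxation(t, k_eq=1.0, k_m=2.0, eta=5.0, strain=1.0):
    """Standard linear solid: equilibrium spring k_eq in parallel with a
    Maxwell arm (spring k_m in series with dashpot eta). Under a step
    strain, the force relaxes from (k_eq + k_m)*strain to k_eq*strain
    with a single relaxation time tau = eta / k_m.
    """
    tau = eta / k_m
    return strain * (k_eq + k_m * np.exp(-t / tau))

t = np.linspace(0.0, 20.0, 5)
print(sls_relaxation(t))   # monotonic relaxation toward k_eq * strain
```

Multiple relaxation times, one of the surface properties the paper emphasizes, would follow from summing several Maxwell arms in parallel (a generalized Maxwell, or Prony-series, model).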
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Groeneweg, Jop
2008-01-01
The present study assessed the possibility of assisting four persons with multiple disabilities to move through and perform simple occupational activities arranged within a room with the help of automatic prompting. The study involved two multiple probe designs across participants. The first multiple probe concerned the two participants with blindness or minimal vision and deafness, who received air blowing as a prompt. The second multiple probe concerned the two participants with blindness and typical hearing, who received a voice call as a prompt. Initially, all participants had baseline sessions. Intervention then started with the first participant of each dyad. When their performance was consolidated, new baseline and intervention phases occurred with the second participant of each dyad. Finally, all four participants were exposed to a second intervention phase, in which the number of activities per session doubled (i.e., from 8 to 16). Data showed that all four participants (a) learned to move through and perform the available activities with the help of automatic prompting and (b) remained highly successful through the second intervention phase when the sessions were extended. Implications of the findings are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safigholi, H; Soliman, A; Song, W
Purpose: Brachytherapy treatment planning systems based on the TG-43 protocol calculate dose in water and neglect the heterogeneity effect of seeds in multi-seed implant brachytherapy. In this research, the accuracy of a novel analytical model that we propose for the inter-seed attenuation (ISA) effect for the 103-Pd seed model is evaluated. Methods: In the analytical model, the dose perturbation due to the ISA effect for each seed in an LDR multi-seed 103-Pd implant is calculated by assuming that the seed of interest is active and the other surrounding seeds are inactive. The cumulative dosimetric effect of all seeds is then summed using the superposition principle. The model is based on pre-simulated Monte Carlo (MC) 3D kernels of the dose perturbations caused by the ISA effect. The cumulative ISA effect due to multiple surrounding seeds is obtained by a simple multiplication of the individual ISA effect of each seed, which is determined by its distance from the seed of interest. This novel algorithm is then compared with full MC water-based simulations (FMCW). Results: The results show that the proposed dose perturbation model is in excellent agreement with the FMCW values for a case with three seeds separated by 1 cm. The average difference between the model and the FMCW simulations was less than 8% ± 2%. Conclusion: Using the proposed novel analytical ISA effect model, one could expedite corrections for ISA dose perturbation effects during permanent-seed 103-Pd brachytherapy planning with minimal increase in computation time, since the model is based on multiplications and superposition. This model can be applied, in principle, to any other brachytherapy seeds. Further work is necessary to validate this model on more complicated geometries.
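The multiplication-plus-superposition rule described in the Methods can be sketched as follows; the attenuation kernel here is a placeholder function, not the authors' Monte-Carlo-derived 103-Pd kernels:

```python
import numpy as np

def isa_factor(inter_seed_distance_cm):
    """Placeholder ISA factor for one inactive seed at the given distance
    from the seed of interest; in the paper this comes from precomputed
    Monte Carlo 3D perturbation kernels for the 103-Pd seed.
    """
    return 1.0 - 0.05 * np.exp(-inter_seed_distance_cm)  # illustrative shape

def corrected_seed_dose(tg43_dose, seed_positions, active_index):
    """Multiply the active seed's TG-43 dose by the ISA factor of every
    other (inactive, attenuating) seed; total implant dose then follows
    by superposition, summing each seed's corrected contribution in turn.
    """
    active = seed_positions[active_index]
    factor = 1.0
    for j, pos in enumerate(seed_positions):
        if j != active_index:
            factor *= isa_factor(np.linalg.norm(active - pos))
    return tg43_dose * factor

seeds = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(corrected_seed_dose(1.0, seeds, active_index=0))
```

Because the correction reduces to multiplications and a superposition sum, it adds almost no computation on top of an ordinary TG-43 calculation, which is the speed advantage claimed in the Conclusion.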
Modeling fructose-load-induced hepatic de-novo lipogenesis by model simplification.
Allen, Richard J; Musante, Cynthia J
2017-01-01
Hepatic de-novo lipogenesis is a metabolic process implicated in the pathogenesis of type 2 diabetes. Clinically, the rate of this process can be ascertained by use of labeled acetate and stimulation by fructose administration. A systems pharmacology model of this process is desirable because it facilitates the description, analysis, and prediction of this experiment. Due to the multiple enzymes involved in de-novo lipogenesis, and the limited data, it is desirable to use single functional expressions to encapsulate the flux between multiple enzymes. To accomplish this we developed a novel simplification technique which uses the available information about the properties of the individual enzymes to bound the parameters of a single governing 'transfer function'. This method should be applicable to any model with linear chains of enzymes that are well stimulated. We validated this approach with computational simulations and analytical justification in a limiting case. Using this technique we generated a simple model of hepatic de-novo lipogenesis under these experimental conditions that matched prior data. This model can be used to assess pharmacological intervention at specific points on this pathway. We have demonstrated this with a prospective simulation of acetyl-CoA carboxylase inhibition. This simplification technique suggests how the constituent properties of an enzymatic chain of reactions give rise to the sensitivity (to substrate) of the pathway as a whole.
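A hedged sketch of the simplification idea: a linear chain of Michaelis–Menten steps is replaced by one saturating 'transfer function' whose maximal rate is bounded by the slowest constituent enzyme. The kinetic forms and parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Illustrative Vmax/Km values for a hypothetical three-enzyme chain.
vmax = np.array([2.0, 1.5, 3.0])
km = np.array([0.5, 1.0, 0.8])

def transfer_function(substrate, vmax_eff, km_eff):
    """Single saturating 'transfer function' replacing the whole chain.
    At steady state every step carries the same flux, so the chain's
    maximal flux cannot exceed the smallest constituent Vmax; that fact
    bounds vmax_eff without fitting each enzyme separately.
    """
    return vmax_eff * substrate / (km_eff + substrate)

vmax_eff_upper = vmax.min()   # bound supplied by the slowest enzyme
print(transfer_function(np.linspace(0, 10, 5), vmax_eff_upper, km_eff=1.0))
```

Constraining the lumped parameters this way is what lets the single expression stand in for the chain when the data cannot identify each enzyme individually, and inhibiting one step (e.g., acetyl-CoA carboxylase) simply tightens the corresponding bound.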