Sample records for simple model combining

  1. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
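
    The post-model calibration and combining step described above amounts to regressing observed yields on the two models' predictions. Below is a minimal sketch of that idea in Python; the data, bias terms and coefficients are hypothetical, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical yields (t/ha): y_obs observed, y_ssm from a process
    # model, y_stat from a statistical model (both out-of-sample).
    rng = np.random.default_rng(0)
    truth = rng.normal(10.0, 1.5, size=200)
    y_obs = truth + rng.normal(0.0, 0.5, size=200)
    y_ssm = truth + rng.normal(0.3, 0.8, size=200)   # biased process model
    y_stat = truth + rng.normal(0.0, 0.9, size=200)  # noisier statistical model

    # Design matrix [1, y_ssm, y_stat]: the intercept acts as a simple
    # post-model calibration; the slopes weight each member model.
    X = np.column_stack([np.ones_like(y_obs), y_ssm, y_stat])
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    y_comb = X @ beta

    rmse = lambda p: np.sqrt(np.mean((p - y_obs) ** 2))
    print(f"RMSE  SSM {rmse(y_ssm):.3f}  stat {rmse(y_stat):.3f}  "
          f"combined {rmse(y_comb):.3f}")
    ```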

  2. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

    Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates the simple cell's physiological structure by accounting for the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  3. Combinatorial structures to modeling simple games and applications

    NASA Astrophysics Data System (ADS)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs, which lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms, which lets us model molecules as combinatorial structures (from the viewpoint of combinations). Generating such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others, remains an open problem.

  4. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
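
    As a rough illustration of the two simplest combiners named above, the SMA is an unweighted mean of member hydrographs, while a WAM-style combiner fits least-squares weights on a training period. The sketch below uses synthetic data and is not the DMIP implementation; an MMSE-type scheme would add a bias-correcting intercept.

    ```python
    import numpy as np

    def simple_multimodel_average(preds):
        """SMA: unweighted mean over member models.
        preds: (n_models, n_times) simulated streamflow."""
        return preds.mean(axis=0)

    def wam_weights(preds, obs):
        """WAM-style combiner: least-squares weights fit on training data."""
        w, *_ = np.linalg.lstsq(preds.T, obs, rcond=None)
        return w

    # Synthetic example with three member models of varying bias and noise.
    rng = np.random.default_rng(1)
    obs = np.abs(rng.normal(50.0, 20.0, size=365))
    preds = np.stack([obs * s + rng.normal(0.0, 5.0, size=365)
                      for s in (0.8, 1.1, 1.3)])
    combined = wam_weights(preds, obs) @ preds
    ```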

  5. Rubber friction and tire dynamics.

    PubMed

    Persson, B N J

    2011-01-12

    We propose a simple rubber friction law, which can be used, for example, in models of tire (and vehicle) dynamics. The friction law is tested by comparing numerical results to the full rubber friction theory (Persson 2006 J. Phys.: Condens. Matter 18 7789). Good agreement is found between the two theories. We describe a two-dimensional (2D) tire model which combines the rubber friction model with a simple mass-spring description of the tire body. The tire model is very flexible and can be used to accurately calculate μ-slip curves (and the self-aligning torque) for braking and cornering or combined motion (e.g. braking during cornering). We present numerical results which illustrate the theory. Simulations of anti-blocking system (ABS) braking are performed using two simple control algorithms.

  6. A Multivariate Model for the Study of Parental Acceptance-Rejection and Child Abuse.

    ERIC Educational Resources Information Center

    Rohner, Ronald P.; Rohner, Evelyn C.

    This paper proposes a multivariate strategy for the study of parental acceptance-rejection and child abuse and describes a research study on parental rejection and child abuse which illustrates the advantages of using a multivariate, (rather than a simple-model) approach. The multivariate model is a combination of three simple models used to study…

  7. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
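
    The two-phase scheme described here can be sketched in a few lines: a winner-take-all competitive pass places the centres, and a Gaussian RBF layer with least-squares output weights does the regression. The data, widths and counts below are placeholders, not the paper's settings.

    ```python
    import numpy as np

    def scl_centres(X, n_centres, lr=0.1, epochs=20, seed=0):
        """Phase 1: simple competitive learning; the centre nearest each
        sample (the winner) moves toward it by a small learning rate."""
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), n_centres, replace=False)].copy()
        for _ in range(epochs):
            for x in X[rng.permutation(len(X))]:
                k = np.argmin(((C - x) ** 2).sum(axis=1))
                C[k] += lr * (x - C[k])
        return C

    def rbf_fit(X, y, C, width=1.0):
        """Phase 2: Gaussian RBF features on the learned centres; linear
        output weights are solved by least squares."""
        def design(Z):
            d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))
        w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
        return lambda Z: design(Z) @ w

    # Hypothetical descriptor matrix and activities standing in for QSAR data.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 4))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
    predict = rbf_fit(X, y, scl_centres(X, n_centres=10))
    ```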

  8. Simple animal models for amyotrophic lateral sclerosis drug discovery.

    PubMed

    Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre

    2016-08-01

    Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.

  9. A Simple Device for Measuring Static Compliance of Lung-Thorax Combine

    ERIC Educational Resources Information Center

    Sircar, Sabyasachi

    2015-01-01

    Explaining the concept of lung compliance remains a challenge to the physiology teacher because it cannot be demonstrated easily in human subjects and all attempts until now have used only simulation models. A simple device is described in the present article to measure the compliance of the "lung-thorax" combine in human subjects with…

  10. Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions

    PubMed Central

    Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi

    2015-01-01

    In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Thus, extending the existing research and improving upon the previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). First, we propose a new definition of simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, building on these definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions. This study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss several examples to demonstrate the validity of the new method; comparisons show that the proposed method is more expressive than existing fuzzy models and can compute relations that they cannot. PMID:25775452

  11. Accounting For Gains And Orientations In Polarimetric SAR

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony

    1992-01-01

    Calibration method accounts for characteristics of real radar equipment invalidating standard 2 X 2 complex-amplitude R (receiving) and T (transmitting) matrices. Overall gain in each combination of transmitting and receiving channels assumed different even when only one transmitter and one receiver used. One characterizes departure of polarimetric Synthetic Aperture Radar (SAR) system from simple 2 X 2 model in terms of single parameter used to transform measurements into format compatible with simple 2 X 2 model. Data processed by applicable one of several prior methods based on simple model.

  12. Kinematic analysis of asymmetric folds in competent layers using mathematical modelling

    NASA Astrophysics Data System (ADS)

    Aller, J.; Bobillo-Ares, N. C.; Bastida, F.; Lisle, R. J.; Menéndez, C. O.

    2010-08-01

    Mathematical 2D modelling of asymmetric folds is carried out by applying a combination of different kinematic folding mechanisms: tangential longitudinal strain, flexural flow and homogeneous deformation. The main source of fold asymmetry is found to be the superimposition of a general homogeneous deformation on buckle folds, which typically produces a migration of the hinge point. Forward modelling is performed mathematically using the software 'FoldModeler', by the superimposition of simple shear or a combination of simple shear and irrotational strain on initial buckle folds. The resulting folds are Ramsay class 1C folds, comparable to those formed by symmetric flattening, but with different limb lengths and layer thickness asymmetry. Inverse modelling is carried out by fitting the natural fold to a computer-simulated fold. A problem of this modelling is the search for the most appropriate homogeneous deformation to be superimposed on the initial fold. A comparative analysis of the irrotational and rotational deformations is made in order to find the deformation which best simulates the shapes and attitudes of natural folds. Modelling of recumbent folds suggests that optimal conditions for their development are: a) buckling in a simple shear regime with a sub-horizontal shear direction and layering gently dipping towards this direction; b) kinematic amplification due to superimposition of a combination of simple shear and irrotational strain with a sub-vertical maximum shortening direction for the latter component. The modelling shows that the amount of homogeneous strain necessary for the development of recumbent folds is much less when an irrotational strain component is superimposed at this stage than when the superimposed strain is only simple shear. In nature, the amount of the irrotational strain component probably increases during the development of the fold as a consequence of the increasing influence of gravity due to the tectonic superimposition of rocks.

  13. A simple Lagrangian forecast system with aviation forecast potential

    NASA Technical Reports Server (NTRS)

    Petersen, R. A.; Homan, J. H.

    1983-01-01

    A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.

  14. The dynamics of coastal models

    USGS Publications Warehouse

    Hearn, Clifford J.

    2008-01-01

    Coastal basins are defined as estuaries, lagoons, and embayments. This book deals with the science of coastal basins using simple models, many of which are presented in either analytical form or Microsoft Excel or MATLAB. The book introduces simple hydrodynamics and its applications, from the use of simple box and one-dimensional models to flow over coral reefs. The book also emphasizes models as a scientific tool in our understanding of coasts, and introduces the value of the most modern flexible mesh combined wave-current models. Examples from shallow basins around the world illustrate the wonders of the scientific method and the power of simple dynamics. This book is ideal for use as an advanced textbook for graduate students and as an introduction to the topic for researchers, especially those from other fields of science needing a basic understanding of the basic ideas of the dynamics of coastal basins.

  15. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
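
    A toy hybrid of the kind described, a basic GA loop with greedy bit-flip local search applied to each individual (Lamarckian style), might look like the sketch below; the one-max objective and all parameters are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def fitness(bits):
        return int(bits.sum())          # toy one-max objective (hypothetical)

    def hill_climb(bits):
        """Local search: greedily accept single-bit flips that improve fitness."""
        best, improved = fitness(bits), True
        while improved:
            improved = False
            for i in range(len(bits)):
                bits[i] ^= 1
                f = fitness(bits)
                if f > best:
                    best, improved = f, True
                else:
                    bits[i] ^= 1        # undo the flip
        return bits

    def hybrid_ga(n_bits=32, pop_size=20, generations=30, p_mut=0.02):
        pop = rng.integers(0, 2, size=(pop_size, n_bits))
        for _ in range(generations):
            pop = np.array([hill_climb(ind.copy()) for ind in pop])
            f = np.array([fitness(ind) for ind in pop])
            a = rng.integers(0, pop_size, pop_size)
            b = rng.integers(0, pop_size, pop_size)
            parents = pop[np.where(f[a] >= f[b], a, b)]    # binary tournament
            children = parents.copy()
            for i, c in enumerate(rng.integers(1, n_bits, pop_size // 2)):
                children[2*i, c:] = parents[2*i + 1, c:]   # one-point crossover
                children[2*i + 1, c:] = parents[2*i, c:]
            pop = children ^ (rng.random(children.shape) < p_mut)  # mutation
        return max(pop, key=fitness)
    ```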

  16. Exergetic simulation of a combined infrared-convective drying process

    NASA Astrophysics Data System (ADS)

    Aghbashlo, Mortaza

    2016-04-01

    Optimal design and performance of a combined infrared-convective drying system with respect to energy use depend strongly on the application of advanced engineering analyses. This article proposes a theoretical approach for exergy analysis of the combined infrared-convective drying process using a simple heat and mass transfer model. The applicability of the developed model to actual drying processes was demonstrated using an illustrative example for a typical food.

  17. Method for Constructing Composite Response Surfaces by Combining Neural Networks with Polynomial Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2007-01-01

    A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials, and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.

  18. Robust Combining of Disparate Classifiers Through Order Statistics

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
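
    In code, the basic combiners analyzed here reduce to order statistics taken across the classifier axis; a minimal sketch (array shapes and the trim fraction are illustrative):

    ```python
    import numpy as np

    def os_combine(outputs, stat="med", trim=0.2):
        """Combine classifier outputs by order statistics.
        outputs: (n_classifiers, n_samples, n_classes) posteriors."""
        s = np.sort(outputs, axis=0)          # order statistics per sample/class
        n = s.shape[0]
        if stat == "med":
            return np.median(outputs, axis=0)
        if stat == "max":
            return s[-1]
        if stat == "trim":                    # trimmed mean: a linear
            k = int(np.floor(trim * n))       # combination of order statistics
            return s[k:n - k].mean(axis=0)
        raise ValueError(stat)

    # Usage: class decisions from the median-combined posteriors.
    # labels = os_combine(outputs, "med").argmax(axis=1)
    ```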

  19. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
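
    For reference, the model consists of a membrane equation with an exponential spike term plus a linear adaptation current: C dV/dt = -gL(V - EL) + gL*DT*exp((V - VT)/DT) - w + I and tau_w dw/dt = a(V - EL) - w, with the reset V -> Vr, w -> w + b when V crosses threshold. Below is a forward-Euler sketch using regular-spiking parameter values commonly quoted for this model; treat it as an illustration, not the authors' code.

    ```python
    import numpy as np

    # Units: mV, ms, nS, pF, pA (so nS*mV = pA and pA/pF = mV/ms).
    C, gL, EL = 281.0, 30.0, -70.6
    VT, DT = -50.4, 2.0
    tauw, a, b = 144.0, 4.0, 80.5
    Vr, Vpeak = -70.6, 20.0

    def adex_spike_times(I, dt=0.05, T=500.0):
        V, w, spikes = EL, 0.0, []
        for i in range(int(T / dt)):
            dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I) / C
            dw = (a*(V - EL) - w) / tauw
            V, w = V + dt*dV, w + dt*dw
            if V >= Vpeak:                 # spike: reset and bump adaptation
                V, w = Vr, w + b
                spikes.append(i * dt)
        return spikes

    print(adex_spike_times(I=1000.0)[:5])  # spike times (ms) for a 1 nA step
    ```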

  20. Synchrotron radiation and diffusive shock acceleration - A short review and GRB perspective

    NASA Astrophysics Data System (ADS)

    Karlica, Mile

    2015-12-01

    In this talk we present the "sponge" model and its possible implications for GRB afterglow light curves. The "sponge" model describes the source of GRB afterglow radiation as fragmented GRB ejecta in which bubbles move through a rarefied medium. The first part of the talk gives a short introduction to synchrotron radiation and Fermi acceleration. Under the assumption that the X-ray luminosity of the GRB afterglow phase comes from the kinetic energy losses of clouds in the ejecta medium, radiated as synchrotron emission, we solved a deliberately simple equation of motion to find which combination of cloud and medium regimes best describes the afterglow light curve. As a first step we examined simple combinations of expansion regimes for both the bubbles and the surrounding medium. The case closest to the numerical fit of GRB 150403A, with time power-law index k = 1.38, is the combination of constant bubbles and a Sedov-like expanding medium with time power-law index k = 1.25. The question of a possible mixture of various regime combinations remains open within this model.

  1. Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?

    NASA Astrophysics Data System (ADS)

    Halide, Halmar

    2017-01-01

    We apply a simple linear multiple regression model called IndOzy for predicting ENSO up to 7 seasonal lead times. The model uses five predictors of past seasonal Niño 3.4 ENSO indices derived from chaos theory and is rolling-validated to give a one-step-ahead forecast. The model skill was evaluated against data from the May-June-July (MJJ) 2003 season to November-December-January (NDJ) 2015/2016. Three skill measures, Pearson correlation, RMSE, and Euclidean distance, were used for forecast verification. The skill of this simple model was then compared to those of combined statistical and dynamical models compiled at the IRI (International Research Institute) website. It was found that the simple model produces a useful ENSO prediction only up to 3 seasonal leads, while the IRI statistical and dynamical model skills remain useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature. Both meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. It is suggested that, to improve its long-range skill, the simple IndOzy model needs to incorporate a nonlinear technique such as an artificial neural network.
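
    In code, a scheme of this general shape (least-squares regression on the most recent seasonal index values) can be sketched as follows; this illustrates the approach and is not the published IndOzy implementation.

    ```python
    import numpy as np

    def fit_lagged_model(series, n_lags=5, lead=1):
        """Regress y(t + lead) on the n_lags most recent seasonal values."""
        series = np.asarray(series, dtype=float)
        rows = [series[t - n_lags + 1 : t + 1]
                for t in range(n_lags - 1, len(series) - lead)]
        X = np.column_stack([np.ones(len(rows)), np.asarray(rows)])
        y = series[n_lags - 1 + lead:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def forecast_one_step(series, coef, n_lags=5):
        """One-step-ahead forecast from the latest n_lags values; for
        rolling validation, refit coef as each new season arrives."""
        x = np.concatenate([[1.0], np.asarray(series, dtype=float)[-n_lags:]])
        return x @ coef
    ```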

  2. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
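
    A minimal example of this model class and one MCMC flavour: a discrete-time chain-binomial process with Poisson-distributed case reports, fitted by random-walk Metropolis over the transmission rate. Everything here (parameters, reporting fraction, the crude simulation-based likelihood) is a hypothetical sketch, far simpler than the paper's JAGS/NIMBLE/Stan fits.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate(beta, gamma=0.25, N=1000, I0=5, T=60, report=0.5):
        """Chain-binomial SIR (process error) with Poisson case reports
        (observation error)."""
        S, I, reports = N - I0, I0, []
        for _ in range(T):
            new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
            new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
            S, I = S - new_inf, I + new_inf - new_rec
            reports.append(rng.poisson(report * new_inf))
        return np.array(reports)

    def log_lik(beta, data, n_sim=30):
        # Crude likelihood: Poisson density around the mean of repeated
        # simulations; it is noisy, so this chain is illustrative only.
        mu = np.mean([simulate(beta) for _ in range(n_sim)], axis=0) + 1e-6
        return float(np.sum(data * np.log(mu) - mu))

    def metropolis(data, n_iter=500, step=0.05):
        beta, ll, chain = 0.5, log_lik(0.5, data), []
        for _ in range(n_iter):
            prop = beta + step * rng.normal()
            if prop > 0.0:
                ll_prop = log_lik(prop, data)
                if np.log(rng.random()) < ll_prop - ll:
                    beta, ll = prop, ll_prop
            chain.append(beta)
        return np.array(chain)

    chain = metropolis(simulate(beta=0.6))
    ```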

  3. A Bayesian Attractor Model for Perceptual Decision Making

    PubMed Central

    Bitzer, Sebastian; Bruineberg, Jelle; Kiebel, Stefan J.

    2015-01-01

    Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel approach of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks. PMID:26267143

  4. Structural modeling of Ge6.25As32.5Se61.25 using a combination of reverse Monte Carlo and Ab initio molecular dynamics.

    PubMed

    Opletal, George; Drumm, Daniel W; Wang, Rong P; Russo, Salvy P

    2014-07-03

    Ternary glass structures are notoriously difficult to model accurately, and yet prevalent in several modern endeavors. Here, a novel combination of Reverse Monte Carlo (RMC) modeling and ab initio molecular dynamics (MD) is presented, rendering these complicated structures computationally tractable. A case study (Ge6.25As32.5Se61.25 glass) illustrates the effects of ab initio MD quench rates and equilibration temperatures, and the combined approach's efficacy over standard RMC or random insertion methods. Submelting point MD quenches achieve the most stable, realistic models, agreeing with both experimental and fully ab initio results. The simple approach of RMC followed by ab initio geometry optimization provides similar quality to the RMC-MD combination, for far fewer resources.

  5. The Q theory of investment, the capital asset pricing model, and asset valuation: a synthesis.

    PubMed

    McDonald, John F

    2004-05-01

    The paper combines Tobin's Q theory of real investment with the capital asset pricing model to produce a new and relatively simple procedure for the valuation of real assets using the income approach. Applications of the new method are provided.

  6. Numerical model of solar dynamic radiator for parametric analysis

    NASA Technical Reports Server (NTRS)

    Rhatigan, Jennifer L.

    1989-01-01

    Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.

  7. Influence of collision on the flow through in-vitro rigid models of the vocal folds

    NASA Astrophysics Data System (ADS)

    Deverge, M.; Pelorson, X.; Vilain, C.; Lagrée, P.-Y.; Chentouf, F.; Willems, J.; Hirschberg, A.

    2003-12-01

    Measurements of pressure in oscillating rigid replicas of vocal folds are presented. The pressure upstream of the replica is used as input to various theoretical approximations to predict the pressure within the glottis. As the vocal folds collide the classical quasisteady boundary layer theory fails. It appears however that for physiologically reasonable shapes of the replicas, viscous effects are more important than the influence of the flow unsteadiness due to the wall movement. A simple model based on a quasisteady Bernoulli equation corrected for viscous effect, combined with a simple boundary layer separation model does globally predict the observed pressure behavior.

  8. Simple model of foam drainage

    NASA Astrophysics Data System (ADS)

    Fortes, M. A.; Coughlan, S.

    1994-10-01

    A simple model of foam drainage is introduced in which the Plateau borders and quadruple junctions are identified with pools that discharge through channels to pools underneath. The flow is driven by gravity and there are friction losses in the exhausting channels. The equation of Bernoulli combined with the Hagen-Poiseuille equation is applied to describe the flow. The area of the cross section of the exhausting channels can be taken as a constant or may vary during drainage. The predictions of the model are compared with standard drainage curves and with the results of a recently reported experiment in which additional liquid is supplied at the top of the froth.
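
    One way to realize the combination described, for a single pool-channel unit, is to balance Bernoulli's driving and kinetic terms against Hagen-Poiseuille friction, which gives a quadratic for the channel velocity. The sketch below is a hypothetical illustration with water-like properties, not the paper's full drainage model.

    ```python
    import numpy as np

    # Balance for a pool of head h draining through a channel (radius r,
    # length L): rho*g*h = 0.5*rho*v**2 + 8*mu*L*v / r**2.
    rho, g, mu = 1000.0, 9.81, 1.0e-3      # SI units, water-like liquid

    def channel_velocity(h, r, L):
        a = 0.5 * rho                       # Bernoulli kinetic coefficient
        b = 8.0 * mu * L / r**2             # Hagen-Poiseuille friction
        c = -rho * g * h                    # gravitational driving term
        return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

    print(channel_velocity(h=0.01, r=1e-4, L=5e-3))   # m/s
    ```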

  9. Predictive Analytics In Healthcare: Medications as a Predictor of Medical Complexity.

    PubMed

    Higdon, Roger; Stewart, Elizabeth; Roach, Jared C; Dombrowski, Caroline; Stanberry, Larissa; Clifton, Holly; Kolker, Natali; van Belle, Gerald; Del Beccaro, Mark A; Kolker, Eugene

    2013-12-01

    Children with special healthcare needs (CSHCN) require health and related services that exceed those required by most hospitalized children. A small but growing and important subset of the CSHCN group includes medically complex children (MCCs). MCCs typically have comorbidities and disproportionately consume healthcare resources. To enable strategic planning for the needs of MCCs, simple screens to identify potential MCCs rapidly in a hospital setting are needed. We assessed whether the number of medications used and the class of those medications correlated with MCC status. Retrospective analysis of medication data from the inpatients at Seattle Children's Hospital found that the numbers of inpatient and outpatient medications significantly correlated with MCC status. Numerous variables based on counts of medications, use of individual medications, and use of combinations of medications were considered, resulting in a simple model based on three different counts of medications: outpatient and inpatient drug classes and individual inpatient drug names. The combined model was used to rank the patient population for medical complexity. As a result, simple, objective admission screens for predicting the complexity of patients based on the number and type of medications were implemented.
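
    A screen of this sort is essentially a regression on medication counts; below is a sketch with synthetic data. The three counts mirror those named in the abstract, but the labels, coefficients and thresholds are invented for illustration.

    ```python
    import numpy as np

    def fit_logistic(X, y, lr=0.1, n_iter=5000):
        """Plain gradient-ascent logistic regression; X includes an intercept."""
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w += lr * X.T @ (y - p) / len(y)
        return w

    # Synthetic stand-ins for the three counts used by the screen: outpatient
    # drug classes, inpatient drug classes, and inpatient drug names.
    rng = np.random.default_rng(5)
    counts = rng.poisson([3, 4, 6], size=(500, 3)).astype(float)
    y = (counts.sum(axis=1) + rng.normal(0, 2, 500) > 14).astype(float)
    Z = (counts - counts.mean(axis=0)) / counts.std(axis=0)
    X = np.column_stack([np.ones(len(y)), Z])
    w = fit_logistic(X, y)
    risk = 1.0 / (1.0 + np.exp(-X @ w))   # rank patients by predicted complexity
    ```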

  10. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics as likewise emerging from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that the reason our complex world is understandable is due to the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
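
    A standard way to see this numerically is to compute the eigenvalue spectrum of the Fisher information J^T J for a classic sloppy model such as a sum of decaying exponentials; the eigenvalues typically span many decades. The rates and time grid below are hypothetical.

    ```python
    import numpy as np

    t = np.linspace(0.1, 5.0, 50)
    k = np.array([0.5, 1.0, 2.0, 4.0])     # hypothetical rate parameters

    def model(k):
        return np.exp(-np.outer(t, k)).sum(axis=1)

    def jacobian(k, eps=1e-6):
        """Central finite differences of the model output w.r.t. each rate."""
        J = np.empty((len(t), len(k)))
        for i in range(len(k)):
            dk = np.zeros_like(k); dk[i] = eps
            J[:, i] = (model(k + dk) - model(k - dk)) / (2.0 * eps)
        return J

    J = jacobian(k)
    eigs = np.linalg.eigvalsh(J.T @ J)[::-1]   # FIM spectrum, descending
    print(eigs / eigs[0])                      # hierarchy spanning decades
    ```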

  11. Scalable Track Detection in SAR CCD Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, James G; Quach, Tu-Thach

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
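
    A network of the general shape described, a stack of 3-by-3 convolutions mapping a single-channel CCD image to per-pixel track logits, can be sketched in a few lines of PyTorch. The width and depth here are guesses, not the authors' configuration.

    ```python
    import torch
    import torch.nn as nn

    def make_track_net(channels=16, depth=4):
        """Stack of 3x3 convolutions; padding=1 preserves image size."""
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        return nn.Sequential(*layers)

    net = make_track_net()
    img = torch.randn(1, 1, 256, 256)       # one CCD image (N, C, H, W)
    prob = torch.sigmoid(net(img))          # per-pixel track probability
    # Trained end-to-end, e.g. with nn.BCEWithLogitsLoss on labeled pixels.
    ```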

  12. Modeling Speed-Accuracy Tradeoff in Adaptive System for Practicing Estimation

    ERIC Educational Resources Information Center

    Nižnan, Juraj

    2015-01-01

    Estimation is useful in situations where an exact answer is not as important as a quick answer that is good enough. A web-based adaptive system for practicing estimates is currently being developed. We propose a simple model for estimating student's latent skill of estimation. This model combines a continuous measure of correctness and response…

  13. Using Supply, Demand, and the Cournot Model to Understand Corruption

    ERIC Educational Resources Information Center

    Hayford, Marc D.

    2007-01-01

    The author combines the supply and demand model of taxes with a Cournot model of bribe takers to develop a simple and useful framework for understanding the effect of corruption on economic activity. There are many examples of corruption in both developed and developing countries. Because corruption decreases the level of economic activity and…

  14. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is very crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, heat generation model, and thermal model which are coupled together in an iterative fashion through physicochemical temperature dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates, and temperature ranges [-25 °C to 45 °C].

  15. Combined tension and bending testing of tapered composite laminates

    NASA Astrophysics Data System (ADS)

    O'Brien, T. Kevin; Murri, Gretchen B.; Hagemeier, Rick; Rogers, Charles

    1994-11-01

    A simple beam element used at Bell Helicopter was incorporated in the Computational Mechanics Testbed (COMET) finite element code at the Langley Research Center (LaRC) to analyze the response of tapered laminates typical of flexbeams in composite rotor hubs. This beam element incorporated the influence of membrane loads on the flexural response of the tapered laminate configurations modeled and tested in a combined axial tension and bending (ATB) hydraulic load frame designed and built at LaRC. The moments generated from the finite element model were used in a tapered laminated plate theory analysis to estimate axial stresses on the surface of the tapered laminates due to combined bending and tension loads. Surface strains were calculated and compared to surface strains measured using strain gages mounted along the laminate length. The strain distributions correlated reasonably well with the analysis. The analysis was then used to examine the surface strain distribution in a non-linear tapered laminate, where a similarly good correlation was obtained. Results indicate that simple finite element beam models may be used to identify tapered laminate configurations best suited for simulating the response of a composite flexbeam in a full-scale rotor hub.

  16. Measurements of PANs during the New England Air Quality Study 2002

    NASA Astrophysics Data System (ADS)

    Roberts, J. M.; Marchewka, M.; Bertman, S. B.; Sommariva, R.; Warneke, C.; de Gouw, J.; Kuster, W.; Goldan, P.; Williams, E.; Lerner, B. M.; Murphy, P.; Fehsenfeld, F. C.

    2007-10-01

    Measurements of peroxycarboxylic nitric anhydrides (PANs) were made during the New England Air Quality Study 2002 cruise of the NOAA RV Ronald H Brown. The four compounds observed, PAN, peroxypropionic nitric anhydride (PPN), peroxymethacrylic nitric anhydride (MPAN), and peroxyisobutyric nitric anhydride (PiBN) were compared with results from other continental and Gulf of Maine sites. Systematic changes in PPN/PAN ratio, due to differential thermal decomposition rates, were related quantitatively to air mass aging. At least one early morning period was observed when O3 seemed to have been lost probably due to NO3 and N2O5 chemistry. The highest O3 episode was observed in the combined plume of isoprene sources and anthropogenic volatile organic compounds (VOCs) and NOx sources from the greater Boston area. A simple linear combination model showed that the organic precursors leading to elevated O3 were roughly half from the biogenic and half from anthropogenic VOC regimes. An explicit chemical box model confirmed that the chemistry in the Boston plume is well represented by the simple linear combination model. This degree of biogenic hydrocarbon involvement in the production of photochemical ozone has significant implications for air quality control strategies in this region.
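
    The linear-combination idea reduces to a two-term regression of the ozone enhancement on a biogenic and an anthropogenic precursor proxy. The sketch below uses invented tracer data purely to show the mechanics; it is not the study's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    biogenic = rng.gamma(2.0, 1.0, 200)     # e.g. isoprene-derived proxy
    anthro = rng.gamma(2.0, 1.5, 200)       # e.g. anthropogenic VOC proxy
    o3 = 3.0 * biogenic + 2.8 * anthro + rng.normal(0.0, 2.0, 200)

    X = np.column_stack([biogenic, anthro])
    coef, *_ = np.linalg.lstsq(X, o3, rcond=None)
    share = coef * X.mean(axis=0)
    print(share / share.sum())   # fractional contribution of each class
    ```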

  17. Application of thermal model for pan evaporation to the hydrology of a defined medium, the sponge

    NASA Technical Reports Server (NTRS)

    Trenchard, M. H.; Artley, J. A. (Principal Investigator)

    1981-01-01

    A technique is presented which estimates pan evaporation from the commonly observed values of daily maximum and minimum air temperatures. These two variables are transformed to saturation vapor pressure equivalents which are used in a simple linear regression model. The model provides reasonably accurate estimates of pan evaporation rates over a large geographic area. The derived evaporation algorithm is combined with precipitation to obtain a simple moisture variable. A hypothetical medium with a capacity of 8 inches of water is initialized at 4 inches. The medium behaves like a sponge: it absorbs all incident precipitation, with runoff or drainage occurring only after it is saturated. Water is lost from this simple system through evaporation just as from a Class A pan, but at a rate proportional to its degree of saturation. The content of the sponge is a moisture index calculated from only the maximum and minimum temperatures and precipitation.
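
    The algorithm chains three small pieces: a saturation-vapor-pressure transform of the temperature extremes, a linear regression for pan evaporation, and a bucket water balance. Below is a sketch under stated assumptions: the Tetens formula is used for the transform, and the regression coefficients and units are placeholders to be fit to local pan data.

    ```python
    import numpy as np

    def svp(T):
        """Saturation vapour pressure (kPa), Tetens formula, T in deg C."""
        return 0.6108 * np.exp(17.27 * T / (T + 237.3))

    def pan_evap(tmax, tmin, a=0.0, b=1.5):
        """Linear regression of pan evaporation on the SVP difference."""
        return a + b * (svp(tmax) - svp(tmin))

    def sponge_index(tmax, tmin, precip, capacity=8.0, store=4.0):
        """Bucket ('sponge') balance: absorbs all precipitation up to
        capacity; evaporates at the pan rate scaled by relative saturation."""
        index = []
        for tx, tn, p in zip(tmax, tmin, precip):
            store = min(capacity, store + p)
            store = max(store - pan_evap(tx, tn) * store / capacity, 0.0)
            index.append(store)
        return np.array(index)
    ```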

  18. Efficient micromagnetic modelling of spin-transfer torque and spin-orbit torque

    NASA Astrophysics Data System (ADS)

    Abert, Claas; Bruckner, Florian; Vogler, Christoph; Suess, Dieter

    2018-05-01

    While the spin-diffusion model is considered one of the most complete and accurate tools for the description of spin transport and spin torque, its solution in the context of dynamical micromagnetic simulations is numerically expensive. We propose a procedure to retrieve the free parameters of a simple macro-spin like spin-torque model through the spin-diffusion model. In case of spin-transfer torque the simplified model complies with the model of Slonczewski. A similar model can be established for the description of spin-orbit torque. In both cases the spin-diffusion model enables the retrieval of free model parameters from the geometry and the material parameters of the system. Since these parameters usually have to be determined phenomenologically through experiments, the proposed method combines the strength of the diffusion model to resolve material parameters and geometry with the high performance of simple torque models.

  19. Mathematical modeling of high-pH chemical flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhuyan, D.; Lake, L.W.; Pope, G.A.

    1990-05-01

    This paper describes a generalized compositional reservoir simulator for high-pH chemical flooding processes. This simulator combines the reaction chemistry associated with these processes with the extensive physical- and flow-property modeling schemes of an existing micellar/polymer flood simulator, UTCHEM. Application of the model is illustrated for cases from a simple alkaline preflush to surfactant-enhanced alkaline-polymer flooding.

  20. Adjusting STEMS growth model for Wisconsin forests.

    Treesearch

    Margaret R. Holdaway

    1985-01-01

    Describes a simple procedure for adjusting growth in the STEMS regional tree growth model to compensate for subregional differences. Coefficients are reported to adjust Lake States STEMS to the forests of Northern and Central Wisconsin--an area of essentially uniform climate and similar broad physiographic features. Errors are presented for various combinations of...

  1. An Assessment of the Exposure of Americans to Perflourooctane Sulfonate: A Comparison of Estimated Intake with Values Inferred from NHANES Data

    EPA Science Inventory

    To better understand human exposure to perfluorinated compounds (PFCs), a model that assesses exposure to perfluorooctane sulfonate (PFOS) and its precursors from both an intake and a body burden perspective and combines the two with a simple pharmacokinetic (PK) model is demonst...

  2. Ground temperature measurement by PRT-5 for maps experiment

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.

  3. A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges

    PubMed Central

    Wang, Xu; Sun, Baitao

    2014-01-01

    Issues of load combinations of earthquakes and heavy trucks are important in multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with their own characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castanheta model, which accounts for load duration and occurrence probability, describes well how random processes are converted to random variables and combined, but it imposes strict constraints on the choice of time interval needed to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with another load at its instantaneous (or mean) value, which looks more rational, but the results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castanheta model and Turkstra's rule. The modified model is based on conditional probability, which can convert random processes to random variables relatively easily and account for the non-maximum factor in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
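
    Turkstra's rule, as described here, is easy to state in code: pair the lifetime maximum of one load with a point-in-time value of the other, and take the worse pairing. The distributions below are hypothetical placeholders for earthquake and truck load effects, used only to show the mechanics.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    eq_max = rng.gumbel(2.0, 0.6, n)    # lifetime-maximum earthquake effect
    eq_pit = rng.exponential(0.1, n)    # point-in-time earthquake effect
    tr_max = rng.gumbel(1.5, 0.3, n)    # lifetime-maximum truck effect
    tr_pit = rng.exponential(0.4, n)    # point-in-time truck effect

    combined = np.maximum(eq_max + tr_pit, tr_max + eq_pit)  # Turkstra pairing
    print(np.quantile(combined, 0.99))  # characteristic combined effect
    ```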

  4. Simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks.

    PubMed

    Song, H Francis; Wang, Xiao-Jing

    2014-12-01

    Small-world networks-complex networks characterized by a combination of high clustering and short path lengths-are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.

  5. Simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks

    NASA Astrophysics Data System (ADS)

    Song, H. Francis; Wang, Xiao-Jing

    2014-12-01

    Small-world networks—complex networks characterized by a combination of high clustering and short path lengths—are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
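
    One way to realize a distance-dependent rule of this type is sketched below: nodes on a ring connect with probability (1 - p) when within ring distance K/2, plus a uniform p*K/(N-1) component for all pairs. This is a paraphrase in the spirit of the formulation; the exact published expression should be taken from the paper.

    ```python
    import numpy as np

    def ws_distance_net(N=200, K=6, p=0.1, seed=0):
        """Undirected WS-type network from a distance-dependent
        connection probability on a ring."""
        rng = np.random.default_rng(seed)
        i, j = np.triu_indices(N, k=1)
        d = np.minimum(np.abs(i - j), N - np.abs(i - j))   # ring distance
        P = (1.0 - p) * (d <= K // 2) + p * K / (N - 1)
        keep = rng.random(len(d)) < P
        A = np.zeros((N, N), dtype=int)
        A[i[keep], j[keep]] = A[j[keep], i[keep]] = 1
        return A

    A = ws_distance_net()
    print(A.sum(axis=1).mean())   # mean degree, close to K by construction
    ```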

  6. Analysis of lithology: Vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
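
    The simple linear ('checkerboard') mix is the easiest of these models to demonstrate: a pixel spectrum is a nonnegative, sum-to-one combination of end-member spectra, and the fractions can be recovered by least squares. The spectra below are synthetic placeholders; a real analysis would use a fully constrained solver.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    bands = 6
    E = np.abs(rng.normal(0.4, 0.15, size=(bands, 3)))  # rock, soil, vegetation
    f_true = np.array([0.5, 0.2, 0.3])                  # true fractions
    pixel = E @ f_true + rng.normal(0.0, 0.005, bands)  # observed spectrum

    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)       # unconstrained fit
    f = np.clip(f, 0.0, None)
    f /= f.sum()                                        # crude constraints
    print(f_true, f.round(3))
    ```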

  7. Mining Peripheral Arterial Disease Cases from Narrative Clinical Notes Using Natural Language Processing

    PubMed Central

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.

    2016-01-01

    Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model) and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n= 935) and testing (n= 634) subsets. Results We iteratively refined the NLP algorithm in the training set including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359

  8. Increased Expression of Simple Ganglioside Species GM2 and GM3 Detected by MALDI Imaging Mass Spectrometry in a Combined Rat Model of Aβ Toxicity and Stroke

    PubMed Central

    Caughlin, Sarah; Hepburn, Jeffrey D.; Park, Dae Hee; Jurcic, Kristina; Yeung, Ken K.-C.; Cechetto, David F.; Whitehead, Shawn N.

    2015-01-01

    The aging brain is often characterized by the presence of multiple comorbidities resulting in synergistic damaging effects in the brain as demonstrated through the interaction of Alzheimer’s disease (AD) and stroke. Gangliosides, a family of membrane lipids enriched in the central nervous system, may have a mechanistic role in mediating the brain’s response to injury as their expression is altered in a number of disease and injury states. Matrix-Assisted Laser Desorption Ionization (MALDI) Imaging Mass Spectrometry (IMS) was used to study the expression of A-series ganglioside species GD1a, GM1, GM2, and GM3 to determine alteration of their expression profiles in the presence of beta-amyloid (Aβ) toxicity in addition to ischemic injury. To model a stroke, rats received a unilateral striatal injection of endothelin-1 (ET-1) (stroke alone group). To model Aβ toxicity, rats received intracerebralventricular (icv) injections of the toxic 25-35 fragment of the Aβ peptide (Aβ alone group). To model the combination of Aβ toxicity with stroke, rats received both the unilateral ET-1 injection and the bilateral icv injections of Aβ₂₅₋₃₅ (combined Aβ/ET-1 group). By 3 d, a significant increase in the simple ganglioside species GM2 was observed in the ischemic brain region of rats who received a stroke (ET-1), with or without Aβ. By 21 d, GM2 levels only remained elevated in the combined Aβ/ET-1 group. GM3 levels however demonstrated a different pattern of expression. By 3 d GM3 was elevated in the ischemic brain region only in the combined Aβ/ET-1 group. By 21 d, GM3 was elevated in the ischemic brain region in both stroke alone and Aβ/ET-1 groups. Overall, results indicate that the accumulation of simple ganglioside species GM2 and GM3 may be indicative of a mechanism of interaction between AD and stroke. PMID:26086081

  9. Increased Expression of Simple Ganglioside Species GM2 and GM3 Detected by MALDI Imaging Mass Spectrometry in a Combined Rat Model of Aβ Toxicity and Stroke.

    PubMed

    Caughlin, Sarah; Hepburn, Jeffrey D; Park, Dae Hee; Jurcic, Kristina; Yeung, Ken K-C; Cechetto, David F; Whitehead, Shawn N

    2015-01-01

    The aging brain is often characterized by the presence of multiple comorbidities resulting in synergistic damaging effects in the brain as demonstrated through the interaction of Alzheimer's disease (AD) and stroke. Gangliosides, a family of membrane lipids enriched in the central nervous system, may have a mechanistic role in mediating the brain's response to injury as their expression is altered in a number of disease and injury states. Matrix-Assisted Laser Desorption Ionization (MALDI) Imaging Mass Spectrometry (IMS) was used to study the expression of A-series ganglioside species GD1a, GM1, GM2, and GM3 to determine alteration of their expression profiles in the presence of beta-amyloid (Aβ) toxicity in addition to ischemic injury. To model a stroke, rats received a unilateral striatal injection of endothelin-1 (ET-1) (stroke alone group). To model Aβ toxicity, rats received intracerebralventricular (i.c.v.) injections of the toxic 25-35 fragment of the Aβ peptide (Aβ alone group). To model the combination of Aβ toxicity with stroke, rats received both the unilateral ET-1 injection and the bilateral icv injections of Aβ25-35 (combined Aβ/ET-1 group). By 3 d, a significant increase in the simple ganglioside species GM2 was observed in the ischemic brain region of rats who received a stroke (ET-1), with or without Aβ. By 21 d, GM2 levels only remained elevated in the combined Aβ/ET-1 group. GM3 levels however demonstrated a different pattern of expression. By 3 d GM3 was elevated in the ischemic brain region only in the combined Aβ/ET-1 group. By 21 d, GM3 was elevated in the ischemic brain region in both stroke alone and Aβ/ET-1 groups. Overall, results indicate that the accumulation of simple ganglioside species GM2 and GM3 may be indicative of a mechanism of interaction between AD and stroke.

  10. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

    In optical flow for motion allocation, the reliability of the resulting Motion Vectors (MVs) is an important issue, and noisy conditions can make the output of optical flow algorithms unreliable. We find that many classical optical flow algorithms produce better results under noisy conditions when combined with a modern optimization model. This paper introduces robust optical flow models that apply an adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments on the proposed models confirm better noise tolerance in the optical flow's MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of our models on several typical sequences with different foreground and background movement speeds, where the sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The high noise tolerance of the models is quantified by the peak signal-to-noise ratio (PSNR).
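
    For reference, the Lorentzian robust norm and its influence function take a simple form; a sketch follows (the sigma scale is a tuning parameter, and this is not the paper's full algorithm).

    ```python
    import numpy as np

    def lorentzian_rho(x, sigma):
        """Robust error norm rho(x) = log(1 + (x/sigma)**2 / 2)."""
        return np.log1p(0.5 * (x / sigma) ** 2)

    def lorentzian_psi(x, sigma):
        """Influence function d(rho)/dx = 2x / (2*sigma**2 + x**2); it
        down-weights large residuals, giving tolerance to outliers."""
        return 2.0 * x / (2.0 * sigma ** 2 + x ** 2)

    # Effective weight psi(x)/x: near constant for small residuals,
    # decaying toward zero for gross (noise-driven) outliers.
    x = np.array([0.1, 1.0, 5.0, 20.0])
    print(lorentzian_psi(x, sigma=1.0) / x)
    ```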

  11. Does lake size matter? Combining morphology and process modeling to examine the contribution of lake classes to population-scale processes

    USGS Publications Warehouse

    Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.

    2014-01-01

    With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight the critical Pareto slopes at which total perimeter, area, and volume would be evenly distributed across lake size-classes: 0.63, 1, and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even if smaller lakes contribute relatively less to total surface area than larger lakes, the increasing carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
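
    The upscaling idea lends itself to a short numerical illustration. The sketch below is not from the paper; the Pareto slope, minimum area, and the power-law dependence of areal carbon accumulation rate on lake size are all hypothetical, chosen only to reproduce the qualitative result that total accumulation is biased towards small lakes.

    ```python
    import numpy as np

    # Minimal sketch (hypothetical parameters): combine a Pareto model of lake
    # surface areas with a size-dependent carbon accumulation rate to upscale a
    # process from individual lakes to the whole lake population.
    rng = np.random.default_rng(0)

    c = 1.0        # Pareto slope for area; 1 is the paper's even-distribution threshold
    a_min = 0.01   # smallest lake area in km^2 (assumed)
    areas = a_min * (1.0 + rng.pareto(c, 100_000))  # P(A > a) = (a_min / a)^c

    # Assumed power law: areal carbon accumulation rate rises as lakes shrink
    rate_per_area = areas ** -0.3                   # arbitrary units
    carbon = areas * rate_per_area                  # accumulation per lake

    small = areas < np.median(areas)
    print(f"smaller half of lakes: {areas[small].sum() / areas.sum():.2%} of total area, "
          f"{carbon[small].sum() / carbon.sum():.2%} of total carbon accumulation")
    ```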

  12. Generalized concentration addition: a method for examining mixtures containing partial agonists.

    PubMed

    Howard, Gregory J; Webster, Thomas F

    2009-08-07

    Environmentally relevant toxic exposures often consist of simultaneous exposure to multiple agents. Methods to predict the expected outcome of such combinations are critical both to risk assessment and to an accurate judgment of whether combinations are synergistic or antagonistic. Concentration addition (CA) has commonly been used to assess the presence of synergy or antagonism in combinations of similarly acting chemicals, and to predict effects of combinations of such agents. CA has the advantage of clear graphical interpretation: Curves of constant joint effect (isoboles) must be negatively sloped straight lines if the mixture is concentration additive. However, CA cannot be directly used to assess combinations that include partial agonists, although such agents are of considerable interest. Here, we propose a natural extension of CA to a functional form that may be applied to mixtures including full agonists and partial agonists. This extended definition, for which we suggest the term "generalized concentration addition," encompasses linear isoboles with slopes of any sign. We apply this approach to the simple example of agents with dose-response relationships described by Hill functions with slope parameter n=1. The resulting isoboles are in all cases linear, with negative, zero and positive slopes. Using simple mechanistic models of ligand-receptor systems, we show that the same isobole pattern and joint effects are generated by modeled combinations of full and partial agonists. Special cases include combinations of two full agonists and a full agonist plus a competitive antagonist.
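
    A worked sketch helps make the Hill n = 1 case concrete: for dose-response curves E_i(c) = alpha_i c / (K_i + c), the generalized concentration addition condition sum_i c_i / f_i^{-1}(E) = 1 has a closed-form solution for the joint effect E. The concentrations and parameters below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def gca_effect(conc, alpha, K):
        """Joint effect under generalized concentration addition for Hill
        curves E_i(c) = alpha_i * c / (K_i + c) with slope n = 1.

        Substituting the inverse f_i^{-1}(E) = K_i * E / (alpha_i - E) into
        sum_i c_i / f_i^{-1}(E) = 1 and solving for E gives the closed form
        below; alpha_i < 1 corresponds to a partial agonist.
        """
        conc, alpha, K = map(np.asarray, (conc, alpha, K))
        return np.sum(conc * alpha / K) / (1.0 + np.sum(conc / K))

    # Illustrative mixture: a full agonist (alpha = 1) plus a partial agonist (alpha = 0.4)
    print(gca_effect(conc=[2.0, 5.0], alpha=[1.0, 0.4], K=[1.0, 3.0]))
    ```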

  13. Figure of merit and different combinations of observational data sets

    NASA Astrophysics Data System (ADS)

    Su, Qiping; Tuo, Zhong-Liang; Cai, Rong-Gen

    2011-11-01

    To constrain cosmological parameters, one often makes a joint analysis with different combinations of observational data sets. In this paper we take the figure of merit (FoM) for the Dark Energy Task Force fiducial model (the Chevallier-Polarski-Linder model) to estimate the goodness of different combinations of 11 widely used observational data sets (type Ia supernovae, observational Hubble parameter, baryon acoustic oscillations, cosmic microwave background, X-ray cluster baryon mass fraction, and gamma-ray bursts). We analyze different combinations and make a comparison for two types of combinations based on two types of basic combinations, which are often adopted in the literature. We find two sets of combinations that have a strong ability to constrain the dark energy parameters: one has the largest FoM, and the other contains less observational data with a relatively large FoM and a simple fitting procedure.
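
    As a pointer to how such a figure of merit is computed, the snippet below evaluates the common DETF-style definition, with the FoM proportional to the inverse area of the w0-wa error ellipse. The covariance matrix is illustrative, not derived from the paper's data combinations.

    ```python
    import numpy as np

    # Minimal sketch: a common DETF-style figure of merit is the inverse of the
    # area of the w0-wa error ellipse, proportional to 1 / sqrt(det Cov(w0, wa)).
    # This covariance matrix is illustrative, not taken from the paper.
    cov_w0_wa = np.array([[0.04, -0.015],
                          [-0.015, 0.25]])
    fom = 1.0 / np.sqrt(np.linalg.det(cov_w0_wa))
    print(f"FoM = {fom:.1f}")  # a larger FoM means tighter dark energy constraints
    ```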

  14. [Comparison of simple pooling and bivariate model used in meta-analyses of diagnostic test accuracy published in Chinese journals].

    PubMed

    Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan

    2015-06-18

    To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and data for the fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by the between-study variability of sensitivity and specificity, expressed by I2, was explored. In total, 55 systematic reviews containing 58 DTA meta-analyses were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences could be found as the I2 of sensitivity or specificity became larger, especially when I2>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. The Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effect model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance is, the more likely simple pooling is to show a larger deviation. It is necessary to increase the knowledge level of statistical methods and software for meta-analyses of DTA data.
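
    As background for readers comparing the two approaches, the sketch below shows fixed-effect ("simple") pooling of sensitivity with inverse-variance weights, plus Cochran's Q and I2 as the heterogeneity measure discussed above; the study counts are invented. A bivariate model would instead jointly fit logit-sensitivity and logit-specificity with between-study random effects and is not reproduced here.

    ```python
    import numpy as np

    # Illustrative fixed-effect ("simple") pooling of sensitivity across studies,
    # with Cochran's Q and I^2 as the heterogeneity measure; the 2x2-table
    # counts are invented, not taken from the included meta-analyses.
    tp = np.array([45, 30, 60, 12])   # true positives per study
    fn = np.array([5, 10, 15, 8])     # false negatives per study

    sens = tp / (tp + fn)
    var = sens * (1 - sens) / (tp + fn)        # within-study binomial variance
    w = 1.0 / var                              # inverse-variance weights
    pooled = np.sum(w * sens) / np.sum(w)      # simple (fixed-effect) pooled value

    Q = np.sum(w * (sens - pooled) ** 2)       # Cochran's Q
    I2 = max(0.0, (Q - (len(sens) - 1)) / Q)   # fraction of variability from heterogeneity
    print(f"pooled sensitivity = {pooled:.3f}, I^2 = {100 * I2:.0f}%")
    ```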

  15. Applied and engineering versions of the theory of elastoplastic processes of active complex loading part 2: Identification and verification

    NASA Astrophysics Data System (ADS)

    Peleshko, V. A.

    2016-06-01

    The deviator constitutive relation of the proposed theory of plasticity has a three-term form (the stress, stress rate, and strain rate vectors formed from the deviators are collinear) and, in the specialized (applied) version, in addition to the simple loading function, contains four dimensionless constants of the material determined from experiments along a two-link strain trajectory with an orthogonal break. The proposed simple mechanism is used to calculate the constants of the model for four metallic materials that significantly differ in composition and mechanical properties; the obtained constants do not deviate much from their average values (over the four materials). The latter are taken as universal constants in the engineering version of the model, which thus requires only one basic experiment, i.e., a simple loading test. If the material exhibits the strengthening property in cyclic circular deformation, then the model contains an additional constant determined from the experiment along a strain trajectory of this type. (In the engineering version of the model, the cyclic strengthening effect is not taken into account, which imposes a certain upper bound on the difference between the length of the strain trajectory arc and the modulus of the strain vector.) We present the results of model verification using the experimental data available in the literature on combined loading along two- and multi-link strain trajectories with various lengths of links and angles of breaks, with plane curvilinear segments of various constant and variable curvature, and with three-dimensional helical segments of various curvature and twist. (All in all, we use more than 80 strain programs; the materials are low- and medium-carbon steels, brass, and stainless steel.) These results prove that the model can be used to describe the process of arbitrary active (in the sense of nonnegative capacity of the shear) combined loading and final unloading of originally quasi-isotropic elastoplastic materials. In practical calculations, in the absence of experimental data about the properties of a material under combined loading, the use of the engineering version of the model is quite acceptable. The simple identification, wide verifiability, and the availability of a software implementation of the method for solving initial-boundary value problems permit treating the proposed theory as an applied theory.

  16. General model and control of an n rotor helicopter

    NASA Astrophysics Data System (ADS)

    Sidea, A. G.; Yding Brogaard, R.; Andersen, N. A.; Ravn, O.

    2014-12-01

    The purpose of this study was to create a dynamic, nonlinear mathematical model of a multirotor that would be valid for different numbers of rotors. Furthermore, a set of Single Input Single Output (SISO) controllers was implemented for attitude control. Both the model and the controllers were tested experimentally on a quadcopter. Using the combined model and controllers, simple system simulation and control are possible by replacing the physical parameter values of the individual system.

  17. Combining tower mixing ratio and community model data to estimate regional-scale net ecosystem carbon exchange by boundary layer inversion over four flux towers in the United States

    Treesearch

    Xueri Dang; Chun-Ta Lai; David Y. Hollinger; Andrew J. Schauer; Jingfeng Xiao; J. William Munger; Clenton Owensby; James R. Ehleringer

    2011-01-01

    We evaluated an idealized boundary layer (BL) model with simple parameterizations using vertical transport information from community model outputs (NCAR/NCEP Reanalysis and ECMWF Interim Analysis) to estimate regional-scale net CO2 fluxes from 2002 to 2007 at three forest flux sites and one grassland flux site in the United States. The BL modeling...

  18. Comparing convective heat fluxes derived from thermodynamics to a radiative-convective model and GCMs

    NASA Astrophysics Data System (ADS)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2015-04-01

    The convective transport of heat and moisture plays a key role in the climate system, but the transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system that describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a method for improving the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as does their sensitivity to surface warming. Our comparison suggests that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.

  19. Mining peripheral arterial disease cases from narrative clinical notes using natural language processing.

    PubMed

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J; Arruda-Olson, Adelaide M

    2017-06-01

    Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm with billing code algorithms, using ankle-brachial index test results as the gold standard. Specifically, we compared the NLP algorithm to (1) results of the gold standard ankle-brachial index; (2) previously validated algorithms based on relevant International Classification of Diseases, Ninth Revision diagnostic codes (simple model); and (3) a combination of International Classification of Diseases, Ninth Revision codes with procedural codes (full model). A dataset of 1569 patients with PAD and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. We iteratively refined the NLP algorithm in the training set, including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP, 91.8%; full model, 81.8%; simple model, 83%; P < .001), positive predictive value (NLP, 92.9%; full model, 74.3%; simple model, 79.9%; P < .001), and specificity (NLP, 92.5%; full model, 64.2%; simple model, 75.9%; P < .001). A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support.

  20. Modeling Impact of Urbanization in US Cities Using Simple Biosphere Model SiB2

    NASA Technical Reports Server (NTRS)

    Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert

    2016-01-01

    We combine Landsat- and Moderate Resolution Imaging Spectroradiometer (MODIS)-based products, as well as climate drivers from Phase 2 of the North American Land Data Assimilation System (NLDAS-2), in a Simple Biosphere land surface model (SiB2) to assess the impact of urbanization in the continental USA (excluding Alaska and Hawaii). More than 300 cities and their surrounding suburban and rural areas are defined in this study to characterize the impact of urbanization on surface climate, including surface energy, carbon budget, and water balance. These analyses reveal an uneven impact of urbanization across the continent that should inform policy options for improving urban growth, including heat mitigation and energy use, carbon sequestration, and flood prevention.

  1. New generation of elastic network models.

    PubMed

    López-Blanco, José Ramón; Chacón, Pablo

    2016-04-01

    The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation.

  2. Model reductions using a projection formulation

    NASA Technical Reports Server (NTRS)

    De Villemagne, Christian; Skelton, Robert E.

    1987-01-01

    A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space and the orthogonal complement of whose null space are chosen among the ranges of generalized controllability and observability matrices. The reduced-order models match various combinations (chosen by the designer) of four types of parameters of the full-order system associated with (1) low-frequency response, (2) high-frequency response, (3) low-frequency power spectral density, and (4) high-frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, offers extreme flexibility to embrace combinations of existing methods, and provides some new features.

  3. Predicting the cumulative risk of death during hospitalization by modeling weekend, weekday and diurnal mortality risks.

    PubMed

    Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William

    2014-05-21

    Current prognostic models factor in patient- and disease-specific variables but do not consider cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as the day of the week. Model performance was evaluated alone, and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 from 501 public and private hospitals in NSW, Australia, were used for training, and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and/or times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7 days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); (3) disease-specific models (Charlson comorbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' where mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79, respectively). The combined model, which included time-varying risk, however yielded an average AUC of 0.92. This model performed best for stays up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to result in robust predictive performance. Such risk exposure models should find utility both in enhancing standard prognostic models and in estimating the risk of continuation of hospitalization.

  4. Linear and Non-Linear Visual Feature Learning in Rat and Humans

    PubMed Central

    Bossens, Christophe; Op de Beeck, Hans P.

    2016-01-01

    The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201

  5. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
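
    The closure idea is easy to illustrate. The sketch below uses invented numbers and a hypothetical one-step Arrhenius rate law (it does not reproduce FDS internals): the subgrid PDF of a cell is represented by two Dirac delta functions whose weights satisfy the cell-mean constraint, and the mean source term is closed as a weighted sum of the rate law evaluated at the two states.

    ```python
    import numpy as np

    def rate(Y_fuel, T):
        """Hypothetical one-step Arrhenius rate law (illustrative constants)."""
        A, Ta = 1.0e6, 15_000.0
        return A * Y_fuel * np.exp(-Ta / T)

    # Two-environment subgrid PDF: an unmixed (cold, fuel-rich) state and a
    # mixed (hot, lean) state, as two delta functions with quadrature weights.
    nodes_Y = np.array([0.20, 0.02])     # fuel mass fraction at each node
    nodes_T = np.array([600.0, 1800.0])  # temperature at each node (K)
    weights = np.array([0.7, 0.3])       # sum to 1; chosen to match cell means

    mean_Y = np.sum(weights * nodes_Y)   # integral (cell-mean) constraints
    mean_T = np.sum(weights * nodes_T)

    # Closure: mean source term = weighted sum of the rate law over the nodes.
    mean_source = np.sum(weights * rate(nodes_Y, nodes_T))

    # Evaluating the rate at the mean state instead ignores subgrid
    # fluctuations and gives a very different (here much smaller) answer:
    print(mean_source, rate(mean_Y, mean_T))
    ```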

  6. The fluid trampoline: droplets bouncing on a soap film

    NASA Astrophysics Data System (ADS)

    Bush, John; Gilet, Tristan

    2008-11-01

    We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.

  7. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy aims at combining these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site, for all these initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.
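
    A compact sketch of this fusion pipeline is given below; the choice of color spaces, number of classes, and window size are assumptions for illustration, and `fuse_segmentations` is a hypothetical name rather than the author's code.

    ```python
    import numpy as np
    from skimage import color
    from sklearn.cluster import KMeans

    def fuse_segmentations(rgb, n_classes=6, win=5, seed=0):
        """Fusion-of-clusterings sketch. `rgb` is a float image in [0, 1],
        shape (h, w, 3); parameter choices here are arbitrary."""
        h, w, _ = rgb.shape
        spaces = [rgb, color.rgb2hsv(rgb), color.rgb2lab(rgb)]

        # Step 1: one K-means label field per color space
        label_maps = []
        for img in spaces:
            labels = KMeans(n_classes, n_init=4, random_state=seed).fit_predict(
                img.reshape(-1, 3))
            label_maps.append(labels.reshape(h, w))

        # Step 2: per-pixel features = concatenated local label histograms
        pad = win // 2
        feats = []
        for lab in label_maps:
            padded = np.pad(lab, pad, mode="edge")
            hist = np.zeros((h, w, n_classes))
            for dy in range(win):
                for dx in range(win):
                    patch = padded[dy:dy + h, dx:dx + w]
                    for k in range(n_classes):
                        hist[..., k] += (patch == k)
            feats.append(hist.reshape(-1, n_classes) / win**2)

        # Step 3: a final K-means on the histogram features fuses the maps
        fused = KMeans(n_classes, n_init=4, random_state=seed).fit_predict(
            np.hstack(feats))
        return fused.reshape(h, w)
    ```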

  8. Generalized estimators of avian abundance from count survey data

    USGS Publications Warehouse

    Royle, J. Andrew

    2004-01-01

    I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture–recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be considered, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
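
    The hierarchical structure can be made concrete with a small simulate-and-fit sketch for the simple point-count case (an N-mixture-type integrated likelihood; all numbers below are invented for illustration).

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom, poisson

    # Illustrative hierarchical model for simple point counts: local abundance
    # N_i ~ Poisson(lambda); repeated counts y_ij ~ Binomial(N_i, p). The
    # integrated likelihood sums over the unobserved N_i.
    rng = np.random.default_rng(1)
    R, J = 50, 3                                # sites, repeat visits
    lam_true, p_true = 4.0, 0.5
    N = rng.poisson(lam_true, R)
    y = rng.binomial(N[:, None], p_true, (R, J))

    def neg_loglik(theta, y, n_max=100):
        lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
        n = np.arange(n_max + 1)
        ll = 0.0
        for counts in y:                        # one site at a time
            like_n = poisson.pmf(n, lam)        # P(N = n | lambda)
            for c in counts:
                like_n = like_n * binom.pmf(c, n, p)   # x P(y | N = n, p)
            ll += np.log(like_n.sum())          # marginalize over n
        return -ll

    fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
    print(f"lambda_hat = {np.exp(fit.x[0]):.2f}, "
          f"p_hat = {1.0 / (1.0 + np.exp(-fit.x[1])):.2f}")
    ```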

  9. Influence of flooding duration on the biomass growth of alder and willow.

    Treesearch

    Lewis F. Ohmann; M. Dean Knighton; Ronald McRoberts

    1990-01-01

    Simple second-order (quadratic) polynomials were used to model the relationship between 3-year biomass increase (net ovendry weight in grams) and flooding duration (days) for four combinations of shrub type (alder, willow) and soils type (fine-sand, clay-loam).

  10. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
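
    Since FORM is central to the proposed method, a minimal sketch of the standard Hasofer-Lind-Rackwitz-Fiessler iteration on a toy limit-state function is given below; the hydrological specifics of the Probabilistic Design Storm method (the rainfall-runoff model and storm parameters) are not reproduced here.

    ```python
    import numpy as np

    def form_hlrf(g, u0, tol=1e-8, max_iter=100, h=1e-6):
        """HL-RF iteration for the First-Order Reliability Method in standard
        normal space; returns the reliability index beta and the design point."""
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            grad = np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                             for e in np.eye(len(u))])    # numerical gradient
            u_new = (grad @ u - g(u)) / (grad @ grad) * grad   # HL-RF update
            if np.linalg.norm(u_new - u) < tol:
                return np.linalg.norm(u_new), u_new
            u = u_new
        return np.linalg.norm(u), u

    # Toy limit state: failure when g(u) < 0 (both variables standard normal)
    g = lambda u: 3.0 - u[0] - 0.5 * u[1] ** 2
    beta, u_star = form_hlrf(g, u0=[0.0, 0.0])
    print(f"beta = {beta:.3f}")   # failure probability ~ Phi(-beta)
    ```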

  11. A Computational Study of How Orientation Bias in the Lateral Geniculate Nucleus Can Give Rise to Orientation Selectivity in Primary Visual Cortex

    PubMed Central

    Kuhlmann, Levin; Vidyasagar, Trichur R.

    2011-01-01

    Controversy remains about how orientation selectivity emerges in simple cells of the mammalian primary visual cortex. In this paper, we present a computational model of how the orientation-biased responses of cells in lateral geniculate nucleus (LGN) can contribute to the orientation selectivity in simple cells in cats. We propose that simple cells are excited by lateral geniculate fields with an orientation-bias and disynaptically inhibited by unoriented lateral geniculate fields (or biased fields pooled across orientations), both at approximately the same retinotopic co-ordinates. This interaction, combined with recurrent cortical excitation and inhibition, helps to create the sharp orientation tuning seen in simple cell responses. Along with describing orientation selectivity, the model also accounts for the spatial frequency and length–response functions in simple cells, in normal conditions as well as under the influence of the GABAA antagonist, bicuculline. In addition, the model captures the response properties of LGN and simple cells to simultaneous visual stimulation and electrical stimulation of the LGN. We show that the sharp selectivity for stimulus orientation seen in primary visual cortical cells can be achieved without the excitatory convergence of the LGN input cells with receptive fields along a line in visual space, which has been a core assumption in classical models of visual cortex. We have also simulated how the full range of orientations seen in the cortex can emerge from the activity among broadly tuned channels tuned to a limited number of optimum orientations, just as in the classical case of coding for color in trichromatic primates. PMID:22013414

  12. Modelling of backscatter from vegetation layers

    NASA Technical Reports Server (NTRS)

    Van Zyl, J. J.; Engheta, N.; Papas, C. H.; Elachi, C.; Zebker, H.

    1985-01-01

    A simple way to build up a library of models which may be used to distinguish between the different types of vegetation and ground surfaces by means of their backscatter properties is presented. The curve of constant power received by the antenna (Gamma sphere) is calculated for the given Stokes Scattering Operator, and the model parameters of the most similar library-model Gamma sphere are adopted. Results calculated for a single scattering model resembling coniferous trees are compared with the Gamma spheres of a model resembling tropical-region trees. The polarization which would minimize the effect of either the ground surface or the vegetation layer can be calculated and used to analyze the backscatter from the ground surface/vegetation layer combination, and to enhance the power received from the desired part of the combination.

  13. A Simple, Analytical Model of Collisionless Magnetic Reconnection in a Pair Plasma

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha; Klimas, Alex

    2011-01-01

    A set of conservation equations is utilized to derive balance equations in the reconnection diffusion region of a symmetric pair plasma. The reconnection electric field is assumed to have the function to maintain the current density in the diffusion region, and to impart thermal energy to the plasma by means of quasi-viscous dissipation. Using these assumptions it is possible to derive a simple set of equations for diffusion region parameters in dependence on inflow conditions and on plasma compressibility. These equations are solved by means of a simple, iterative procedure. The solutions show expected features such as dominance of enthalpy flux in the reconnection outflow, as well as a combination of adiabatic and quasi-viscous heating. Furthermore, the model predicts a maximum reconnection electric field of E* = 0.4, normalized to the parameters at the inflow edge of the diffusion region.

  14. An integrated Gaussian process regression for prediction of remaining useful life of slow speed bearings based on acoustic emission

    NASA Astrophysics Data System (ADS)

    Aye, S. A.; Heyns, P. S.

    2017-02-01

    This paper proposes an optimal Gaussian process regression (GPR) for the prediction of remaining useful life (RUL) of slow speed bearings based on a novel degradation assessment index obtained from the acoustic emission signal. The optimal GPR is obtained from an integration or combination of existing simple mean and covariance functions in order to capture the observed trend of the bearing degradation as well as the irregularities in the data. The resulting integrated GPR model provides an excellent fit to the data and improves over the simple GPR models that are based on simple mean and covariance functions. In addition, it achieves a low percentage error in predicting the remaining useful life of slow speed bearings. These findings are robust under varying operating conditions such as loading and speed and can be applied to nonlinear and nonstationary machine response signals useful for effective preventive machine maintenance purposes.
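
    The kernel-combination idea can be sketched with a generic GP library. The example below uses illustrative data, kernel choices, and a hypothetical failure threshold, not the authors' exact formulation: a linear trend term, a smooth RBF term, and a noise term are summed, and an RUL estimate is read off the extrapolated mean.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

    # Illustrative data: a drifting degradation index with smooth irregularities
    rng = np.random.default_rng(0)
    t = np.linspace(0, 100, 60)[:, None]       # operating hours
    index = 0.02 * t.ravel() + 0.3 * np.sin(t.ravel() / 8) + rng.normal(0, 0.1, 60)

    # Combined kernel: linear trend + smooth irregularities + measurement noise
    kernel = DotProduct() + RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, index)

    # RUL estimate: first future time the predicted mean crosses a failure
    # threshold on the index (threshold value hypothetical).
    t_future = np.linspace(100, 300, 400)[:, None]
    mean, std = gpr.predict(t_future, return_std=True)
    crossing = t_future[mean >= 4.0]
    if crossing.size:
        print(f"predicted RUL ~ {crossing[0, 0] - t[-1, 0]:.0f} hours")
    ```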

  15. A simple, analytical model of collisionless magnetic reconnection in a pair plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha

    2009-10-15

    A set of conservation equations is utilized to derive balance equations in the reconnection diffusion region of a symmetric pair plasma. The reconnection electric field is assumed to have the function to maintain the current density in the diffusion region and to impart thermal energy to the plasma by means of quasiviscous dissipation. Using these assumptions it is possible to derive a simple set of equations for diffusion region parameters in dependence on inflow conditions and on plasma compressibility. These equations are solved by means of a simple, iterative procedure. The solutions show expected features such as dominance of enthalpy flux in the reconnection outflow, as well as a combination of adiabatic and quasiviscous heating. Furthermore, the model predicts a maximum reconnection electric field of E* = 0.4, normalized to the parameters at the inflow edge of the diffusion region.

  16. Experimental evaluation of expendable supersonic nozzle concepts

    NASA Technical Reports Server (NTRS)

    Baker, V.; Kwon, O.; Vittal, B.; Berrier, B.; Re, R.

    1990-01-01

    Exhaust nozzles for expendable supersonic turbojet engine missile propulsion systems are required to be simple, short and compact, in addition to having good broad-range thrust-minus-drag performance. A series of convergent-divergent nozzle scale model configurations were designed and wind tunnel tested for a wide range of free stream Mach numbers and nozzle pressure ratios. The models included fixed geometry and simple variable exit area concepts. The experimental and analytical results show that the fixed geometry configurations tested have inferior off-design thrust-minus-drag performance in the transonic Mach range. A simple variable exit area configuration called the Axi-Quad nozzle, combining features of both axisymmetric and two-dimensional convergent-divergent nozzles, performed well over a broad range of operating conditions. Analytical predictions of the flow pattern as well as overall performance of the nozzles, using a fully viscous, compressible CFD code, compared very well with the test data.

  17. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computational intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  18. A simple generative model of collective online behavior.

    PubMed

    Gleeson, James P; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A; Reed-Tsochas, Felix

    2014-07-22

    Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates--even when using purely observational data without experimental design--that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior.
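
    The two competing mechanisms can be captured in a toy simulation. In the sketch below (all parameters made up), each new adoption is drawn from a mixture of recent-window popularity and cumulative popularity, with the mixing weight playing the role that the paper's data-driven fits assign to recency.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_apps, n_users, window = 20, 20_000, 100
    w_recent = 0.9                # mixing weight on recency; the paper's finding
                                  # is that recent popularity must dominate

    cumulative = np.ones(n_apps)  # seed each app with one adoption
    recent = []                   # sliding window of the latest choices

    for _ in range(n_users):
        recent_counts = np.bincount(
            np.asarray(recent, dtype=int), minlength=n_apps) + 1e-9
        probs = (w_recent * recent_counts / recent_counts.sum()
                 + (1 - w_recent) * cumulative / cumulative.sum())
        choice = rng.choice(n_apps, p=probs / probs.sum())
        cumulative[choice] += 1
        recent = (recent + [choice])[-window:]

    print("largest app's share of all adoptions:",
          cumulative.max() / cumulative.sum())
    ```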

  19. Metallurgical Plant Optimization Through the use of Flowsheet Simulation Modelling

    NASA Astrophysics Data System (ADS)

    Kennedy, Mark William

    Modern metallurgical plants typically have complex flowsheets and operate on a continuous basis. Real-time interactions within such processes can be complex, and the impacts of streams such as recycles on process efficiency and stability can be highly unexpected prior to actual operation. Current desktop computing power, combined with state-of-the-art flowsheet simulation software like Metsim, allows for thorough analysis of designs to explore the interaction between operating rate, heat and mass balances and, in particular, the potential negative impact of recycles. Using plant information systems, it is possible to combine real plant data with simple steady-state models, using dynamic data exchange links to allow for near-real-time de-bottlenecking of operations. Accurate analytical results can also be combined with detailed unit operations models to allow for feed-forward model-based control. This paper will explore some examples of the application of Metsim to real-world engineering and plant operational issues.

  1. Stabilization and tracking control of X-Z inverted pendulum with sliding-mode control.

    PubMed

    Wang, Jia-Jun

    2012-11-01

    X-Z inverted pendulum is a new kind of inverted pendulum which can move with the combination of vertical and horizontal forces. Through a new transformation, the X-Z inverted pendulum is decomposed into three simple models. Based on the simple models, sliding-mode control is applied to stabilization and tracking control of the inverted pendulum. The performance of the sliding-mode control is compared with that of PID control. Simulation results show that the design scheme of sliding-mode control is effective for the stabilization and tracking control of the X-Z inverted pendulum.

  2. Improved modeling of turbulent forced convection heat transfer in straight ducts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rokni, M.; Sunden, B.

    1999-08-01

    This investigation concerns numerical calculation of turbulent forced convective heat transfer and fluid flow in their fully developed state at low Reynolds number. The authors have developed a low Reynolds number version of the nonlinear κ-ε model combined with the heat flux models of simple eddy diffusivity (SED), a low Reynolds number version of the generalized gradient diffusion hypothesis (GGDH), and wealth ∝ earning × time (WET) in general three-dimensional geometries. The numerical approach is based on the finite volume technique with a nonstaggered grid arrangement and the SIMPLEC algorithm. Results have been obtained with the nonlinear κ-ε model, combined with the Lam-Bremhorst and the Abe-Kondoh-Nagano damping functions for low Reynolds numbers.

  3. Funding Higher Education and Wage Uncertainty: Income Contingent Loan versus Mortgage Loan

    ERIC Educational Resources Information Center

    Migali, Giuseppe

    2012-01-01

    We propose a simple theoretical model which shows how the combined effect of wage uncertainty and risk aversion can modify the individual willingness to pay for a HE system financed by an ICL or a ML. We calibrate our model using real data from the 1970 British Cohort Survey together with the features of the English HE financing system. We allow…

  4. Simulation of green roof runoff under different substrate depths and vegetation covers by coupling a simple conceptual and a physically based hydrological model.

    PubMed

    Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A

    2017-09-15

    In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing the HYDRUS-1D software. The use of such an approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and the ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by the Nash-Sutcliffe Efficiency index, which was generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be jointly used to extend the simple conceptual model to a wider set of conditions than those covered by the available experimental data, and to support green roof design.
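
    For reference, the Nash-Sutcliffe Efficiency used above to judge both models is straightforward to compute; the runoff ordinates in this sketch are hypothetical.

    ```python
    import numpy as np

    def nse(observed, simulated):
        """Nash-Sutcliffe Efficiency: 1 is a perfect fit; values above 0.70
        were the typical level reported for both models above."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - (np.sum((observed - simulated) ** 2)
                      / np.sum((observed - observed.mean()) ** 2))

    # Hypothetical runoff ordinates (mm) for one storm event
    obs = [0.0, 0.4, 1.6, 2.9, 2.1, 1.0, 0.5, 0.2]
    sim = [0.0, 0.3, 1.4, 3.1, 2.3, 0.9, 0.4, 0.2]
    print(f"NSE = {nse(obs, sim):.2f}")
    ```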

  5. Simple model of inhibition of chain-branching combustion processes

    NASA Astrophysics Data System (ADS)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical for hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction, combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and leads to a change of reaction mode from the chain-branching reaction to a thermal mode of flame propagation. As the inhibitor concentration increases, a transition from the chain-branching reaction mode to a straight-chain (non-branching) reaction is observed. The inhibition part of the model includes a block of three reactions to describe the influence of the inhibitor. The heat losses are incorporated into the model via Newton cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to the thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with the addition of the inhibitor observed using detailed kinetic models.

  6. Determination of recharge fraction of injection water in combined abstraction-injection wells using continuous radon monitoring.

    PubMed

    Lee, Kil Yong; Kim, Yong-Chul; Cho, Soo Young; Kim, Seong Yun; Yoon, Yoon Yeol; Koh, Dong Chan; Ha, Kyucheol; Ko, Kyung-Seok

    2016-12-01

    The recharge fractions of injection water in combined abstraction-injection wells (AIW) were determined using continuous radon monitoring and a radon mass balance model. The recharge system consists of three combined abstraction-injection wells, an observation well, a collection tank, an injection tank, and tubing for heating and transferring used groundwater. Groundwater was abstracted from an AIW and sprayed on the water-curtain heating facility, and the used groundwater was then injected into the same AIW by the recharge system. Radon concentrations of fresh groundwater in the AIWs and of used groundwater in the injection tank were measured continuously using a continuous radon monitoring system. Radon concentrations of fresh groundwater in the AIWs and used groundwater in the injection tank were in the ranges of 10,830-13,530 Bq/m³ and 1500-5600 Bq/m³, respectively. A simple radon mass balance model was developed to estimate the recharge fraction of used groundwater in the AIWs. The recharge fraction in the three AIWs was in the range of 0.595-0.798. The time series recharge fraction could be obtained using the continuous radon monitoring system with a simple radon mass balance model. The results revealed that the radon mass balance model using continuous radon monitoring was effective for determining time series recharge fractions in AIWs as well as for characterizing the recharge system.
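
    One simple reading of such a radon mass balance is a two-endmember mixing model; the sketch below rests on that assumption, with illustrative concentrations inside the reported ranges. The paper's full model may include additional terms (e.g. radon in-growth and decay) not captured here.

    ```python
    # Two-endmember radon mass balance (assumed form): water pumped from the AIW
    # is a mix of fresh groundwater and recharged, radon-depleted injection
    # water,
    #     C_obs = f * C_injected + (1 - f) * C_groundwater,
    # so the recharge fraction f follows directly. Concentrations below are
    # illustrative values inside the ranges reported above (Bq/m^3).
    def recharge_fraction(c_obs, c_groundwater, c_injected):
        return (c_groundwater - c_obs) / (c_groundwater - c_injected)

    f = recharge_fraction(c_obs=6500.0, c_groundwater=12000.0, c_injected=3000.0)
    print(f"recharge fraction = {f:.2f}")  # ~0.61, within the reported 0.595-0.798
    ```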

  7. Thermal Indices and Thermophysiological Modeling for Heat Stress.

    PubMed

    Havenith, George; Fiala, Dusan

    2015-12-15

    The assessment of the risk of human exposure to heat is a topic as relevant today as a century ago. The introduction and use of heat stress indices and models to predict and quantify heat stress and heat strain has helped to reduce morbidity and mortality in industrial, military, sports, and leisure activities dramatically. Models used range from simple instruments that attempt to mimic the human-environment heat exchange to complex thermophysiological models that simulate both internal and external heat and mass transfer, including related processes through (protective) clothing. This article discusses the most commonly used indices and models and looks at how these are deployed in the different contexts of industrial, military, and biometeorological applications, with a focus on their use to predict related thermal sensations, acute risk of heat illness, and epidemiological analysis of morbidity and mortality. A critical assessment is made of tendencies to use simple indices such as WBGT in more complex conditions (e.g., while wearing protective clothing), or when employed in conjunction with inappropriate sensors. Regarding the more complex thermophysiological models, the article discusses more recent developments, including model individualization approaches and advanced systems that combine simulation models with (body-worn) sensors to provide real-time risk assessment. The models discussed in the article range from historical indices to recent developments in using thermophysiological models in (bio)meteorological applications as an indicator of the combined effect of outdoor weather settings on humans.

  8. A simple analytical model for signal amplification by reversible exchange (SABRE) process.

    PubMed

    Barskiy, Danila A; Pravdivtsev, Andrey N; Ivanov, Konstantin L; Kovtunov, Kirill V; Koptyug, Igor V

    2016-01-07

    We demonstrate an analytical model for the description of the signal amplification by reversible exchange (SABRE) process. The model relies on a combined analysis of chemical kinetics and the evolution of the nuclear spin system during the hyperpolarization process. The presented model provides, for the first time, a rationale for deciding which system parameters (i.e. J-couplings, relaxation rates, reaction rate constants) have to be optimized in order to achieve higher signal enhancement for a substrate of interest in SABRE experiments.

  9. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters have been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate up to the theoretical limit.
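
    The averaging half of the combination is easy to illustrate. The sketch below applies textbook state-space averaging to an ideal buck converter with made-up component values; it is a generic example, not the improved combined model of the paper.

    ```python
    import numpy as np

    # State-space averaging for an ideal buck converter (illustrative values):
    # the averaged model weights the matrices of the two switch states by the
    # duty cycle d. A discrete-sampling model would instead propagate exact
    # matrix exponentials over each subinterval, which is what stays accurate
    # near one-half the switching frequency.
    L, C, R = 100e-6, 100e-6, 5.0
    Vin, d = 12.0, 0.5

    # state x = [inductor current, capacitor voltage]
    A = np.array([[0.0, -1.0 / L],
                  [1.0 / C, -1.0 / (R * C)]])  # same A in both switch states
    B_on = np.array([[1.0 / L], [0.0]])        # switch closed: input connected
    B_off = np.zeros((2, 1))                   # switch open

    A_avg = A                                  # here A_on == A_off
    B_avg = d * B_on + (1 - d) * B_off         # duty-cycle-weighted input matrix

    x_ss = -np.linalg.solve(A_avg, B_avg * Vin)  # steady state: A x + B Vin = 0
    print(f"output voltage ~ {x_ss[1, 0]:.2f} V (expected {d * Vin:.1f} V)")
    ```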

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hongyi; Sivapalan, Murugesu; Tian, Fuqiang

    Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built that is sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, could be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be ‘‘behavioral,’’ in terms of falling sufficiently close to the Budyko curve, describing mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships amongst the ‘‘behavioral’’ climate, soil, and topography parameter combinations, implying these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.

  11. Neuromorphic Computing: A Post-Moore's Law Complementary Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D; Birdwell, John Douglas; Dean, Mark

    2016-01-01

    We describe our approach to post-Moore's law computing with three neuromorphic computing models that share a RISC philosophy, featuring simple components combined with a flexible and programmable structure. We envision these to be leveraged as co-processors, or as data filters to provide in situ data analysis in supercomputing environments.

  12. Is the Water Heating Curve as Described?

    ERIC Educational Resources Information Center

    Riveros, H. G.; Oliva, A. I.

    2008-01-01

    We analysed the heating curve of water which is described in textbooks. An experiment combined with some simple heat transfer calculations is discussed. The theoretical behaviour can be altered by changing the conditions under which the experiment is modelled. By identifying and controlling the different parameters involved during the heating…

  13. Demography and the Evolution of Educational Inequality.

    ERIC Educational Resources Information Center

    Mare, Robert D.

    The combined effects of differential fertility, differential mortality, and intergenerational educational mobility on the distribution of educational attainment in the United States were studied for women in the past half century. A simple model for the reproduction of educational hierarchies was used that takes these factors, plus age structure…

  14. Expectations for inflationary observables: simple or natural?

    NASA Astrophysics Data System (ADS)

    Musoke, Nathan; Easther, Richard

    2017-12-01

    We describe the general inflationary dynamics that can arise with a single, canonically coupled field where the inflaton potential is a 4th-order polynomial. This scenario yields a wide range of combinations of the empirical spectral observables n_s, r, and α_s. However, not all combinations are possible and next-generation cosmological experiments have the ability to rule out all inflationary scenarios based on this potential. Further, we construct inflationary priors for this potential based on physically motivated choices for its free parameters. These can be used to determine the degree of tuning associated with different combinations of n_s, r, and α_s and will facilitate treatments of the inflationary model selection problem. Finally, we comment on the implications of these results for the naturalness of the overall inflationary paradigm. We argue that ruling out all simple, renormalizable potentials would not necessarily imply that the inflationary paradigm itself was unnatural, but that this eventuality would increase the importance of building inflationary scenarios in the context of broader paradigms of ultra-high energy physics.

  15. Non-planar vibrations of slightly curved pipes conveying fluid in simple and combination parametric resonances

    NASA Astrophysics Data System (ADS)

    Czerwiński, Andrzej; Łuczko, Jan

    2018-01-01

    The paper summarises the experimental investigations and numerical simulations of non-planar parametric vibrations of a statically deformed pipe. Underpinning the theoretical analysis is a 3D dynamic model of a curved pipe. The pipe motion is governed by four non-linear partial differential equations with periodically varying coefficients. The Galerkin method was applied, the shape functions being those governing the beam's natural vibrations. Experiments were conducted in the range of simple and combination parametric resonances, evidencing the possibility of in-plane and out-of-plane vibrations as well as fully non-planar vibrations in the combination resonance range. It is demonstrated that sub-harmonic and quasi-periodic vibrations are likely to be excited. The suggested method allows the spatial modes to be determined based on results recorded at selected points on the pipe. Results are summarised in the form of time histories, phase trajectory plots and spectral diagrams. Dedicated video materials provide better insight into the investigated phenomena.

  16. An Information-theoretic Approach to Optimize JWST Observations and Retrievals of Transiting Exoplanet Atmospheres

    NASA Astrophysics Data System (ADS)

    Howe, Alex R.; Burrows, Adam; Deming, Drake

    2017-01-01

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.

  17. An executable specification for the message processor in a simple combining network

    NASA Technical Reports Server (NTRS)

    Middleton, David

    1995-01-01

    While the primary function of the network in a parallel computer is to communicate data between processors, it is often useful if the network can also perform rudimentary calculations. That is, some simple processing ability in the network itself, particularly for performing parallel prefix computations, can reduce both the volume of data being communicated and the computational load on the processors proper. Unfortunately, typical implementations of such networks require a large fraction of the hardware budget, and so combining networks are viewed as being impractical. The FFP Machine has such a combining network, and various characteristics of the machine allow a good deal of simplification in the network design. Despite its simple construction, however, the network relies on many subtle details to work correctly. This paper describes an executable model of the network which will serve several purposes. It provides a complete and detailed description of the network which can substantiate its ability to support necessary functions. It provides an environment in which algorithms to be run on the network can be designed and debugged more easily than they would be on physical hardware. Finally, it provides the foundation for exploring the design of the message receiving facility which connects the network to the individual processors.

  18. The combination of circle topology and leaky integrator neurons remarkably improves the performance of echo state network on time series prediction.

    PubMed

    Xue, Fangzheng; Li, Qian; Li, Xiumin

    2017-01-01

    Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning performance. Compared with the traditional random structure and classical sigmoid units, simple circle topology and leaky integrator neurons have more advantages for the reservoir computing of an ESN. In this paper, we propose a new ESN model with both a circle reservoir structure and leaky integrator units. By comparing the prediction capability on the Mackey-Glass chaotic time series of four ESN models (classical ESN, circle ESN, traditional leaky integrator ESN, and circle leaky integrator ESN), we find that our circle leaky integrator ESN performs significantly better than the other ESNs, with roughly a two-order-of-magnitude reduction in predictive error. Moreover, this model has a stronger ability to approximate nonlinear dynamics and resist noise than a conventional ESN or an ESN with only a simple circle structure or leaky integrator neurons. Our results show that the combination of circle topology and leaky integrator neurons can remarkably increase dynamical diversity and meanwhile decrease the correlation of reservoir states, which contributes to the significant improvement of the computational performance of echo state networks on time series prediction.
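
    A minimal numpy sketch of the two ingredients the abstract combines (this is not the authors' code; the reservoir size, leak rate and input signal are illustrative): a ring-shaped reservoir in which each unit feeds only its neighbour, a leaky-integrator state update, and a ridge-regression readout:

      import numpy as np

      rng = np.random.default_rng(0)
      n_res, leak, rho = 200, 0.3, 0.9   # reservoir size, leak rate, spectral radius

      # Circle topology: unit i connects only to unit i+1 around a ring.
      W = np.zeros((n_res, n_res))
      for i in range(n_res):
          W[(i + 1) % n_res, i] = rho    # spectral radius of a ring equals the weight
      W_in = rng.uniform(-0.5, 0.5, n_res)

      def run_reservoir(u):
          """Leaky-integrator update: x <- (1-a)x + a*tanh(W x + W_in u)."""
          x, states = np.zeros(n_res), []
          for u_t in u:
              x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_t)
              states.append(x.copy())
          return np.array(states)

      # Train a linear readout by ridge regression to predict u(t+1).
      u = np.sin(np.arange(500) * 0.2)
      X, y = run_reservoir(u)[:-1], u[1:]
      w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
      print("training MSE: %.2e" % np.mean((X @ w_out - y) ** 2))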

  19. Bet-hedging as a complex interaction among developmental instability, environmental heterogeneity, dispersal, and life-history strategy.

    PubMed

    Scheiner, Samuel M

    2014-02-01

    One potential evolutionary response to environmental heterogeneity is the production of randomly variable offspring through developmental instability, a type of bet-hedging. I used an individual-based, genetically explicit model to examine the evolution of developmental instability. The model considered both temporal and spatial heterogeneity alone and in combination, the effect of migration pattern (stepping stone vs. island), and life-history strategy. I confirmed that temporal heterogeneity alone requires a threshold amount of variation to select for a substantial amount of developmental instability. For spatial heterogeneity only, the response to selection on developmental instability depended on the life-history strategy and the form and pattern of dispersal, with the greatest response for island migration when selection occurred before dispersal. Both spatial and temporal variation alone select for similar amounts of instability, but in combination they result in substantially more instability than either alone. Local adaptation traded off against bet-hedging, but not in a simple linear fashion. I found higher-order interactions between life-history patterns, dispersal rates, dispersal patterns, and environmental heterogeneity that are not explainable by simple intuition. We need additional modeling efforts to understand these interactions and empirical tests that explicitly account for all of these factors.

  20. Simple scaling of catastrophic landslide dynamics.

    PubMed

    Ekström, Göran; Stark, Colin P

    2013-03-22

    Catastrophic landslides involve the acceleration and deceleration of millions of tons of rock and debris in response to the forces of gravity and dissipation. Their unpredictability and frequent location in remote areas have made observations of their dynamics rare. Through real-time detection and inverse modeling of teleseismic data, we show that landslide dynamics are primarily determined by the length scale of the source mass. When combined with geometric constraints from satellite imagery, the seismically determined landslide force histories yield estimates of landslide duration, momenta, potential energy loss, mass, and runout trajectory. Measurements of these dynamical properties for 29 teleseismogenic landslides are consistent with a simple acceleration model in which height drop and rupture depth scale with the length of the failing slope.

  1. Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.

    PubMed

    Stierstorfer, Karl

    2018-01-01

    To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low flux limit, formulas for the spatial cross-talk, the noise power spectrum, and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities of event types such as a single count in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material, and a simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact, which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
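
    A toy 1D Monte Carlo in the spirit of the charge-sharing stage described above (the pixel pitch, cloud width and threshold are made-up values, and the geometry is reduced to one dimension): randomize the impact position, split a Gaussian charge cloud between the central pixel and its neighbours, and tally the event-type probabilities that feed the DQE formulas:

      import numpy as np
      from math import erf, sqrt

      rng = np.random.default_rng(1)
      pixel, sigma, thresh = 0.3, 0.05, 0.4  # pitch (mm), cloud width (mm), threshold
                                             # (as a fraction of total charge)
      n = 20000
      x = rng.uniform(0, pixel, n)           # impact position inside the central pixel

      # Fraction of a Gaussian charge cloud centred at xi collected by [0, pixel].
      frac = np.array([0.5 * (erf((pixel - xi) / (sqrt(2) * sigma))
                              + erf(xi / (sqrt(2) * sigma))) for xi in x])

      central = frac > thresh                # central pixel fires
      neighbor = (1 - frac) > thresh         # the rest of the cloud fires a neighbour
      print("single (central only):", np.mean(central & ~neighbor))
      print("double (central + neighbour):", np.mean(central & neighbor))
      print("neighbour only:", np.mean(~central & neighbor))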

  2. Neuroendocrine control of seasonal plasticity in the auditory and vocal systems of fish

    PubMed Central

    Forlano, Paul M.; Sisneros, Joseph A.; Rohmann, Kevin N.; Bass, Andrew H.

    2014-01-01

    Seasonal changes in reproductive-related vocal behavior are widespread among fishes. This review highlights recent studies of the vocal plainfin midshipman fish, Porichthys notatus, a neuroethological model system used for the past two decades to explore neural and endocrine mechanisms of vocal-acoustic social behaviors shared with tetrapods. Integrative approaches combining behavior, neurophysiology, neuropharmacology, neuroanatomy, and gene expression methodologies have taken advantage of simple, stereotyped and easily quantifiable behaviors controlled by discrete neural networks in this model system to enable discoveries such as the first demonstration of adaptive seasonal plasticity in the auditory periphery of a vertebrate as well as rapid steroid and neuropeptide effects on vocal physiology and behavior. This simple model system has now revealed cellular and molecular mechanisms underlying seasonal and steroid-driven auditory and vocal plasticity in the vertebrate brain. PMID:25168757

  3. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to control integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decrease in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations as a function of the characteristic parameters of the model. For implementation of our proposed approach, computer based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with the Matausek-Micić scheme for linear systems using simulations.

  4. Correlators in tensor models from character calculus

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2017-11-01

    We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.

  5. A Simple Probabilistic Model for Estimating the Risk of Standard Air Dives

    DTIC Science & Technology

    2004-12-01

    Decompression Models. Table A1. Decompression Table Based on the StandAir Model and Comparison with the VVal-18 Algorithm (A-1 to A-4). Table A2. The VVal-18...cannot be as strong as might be desired - especially for dives with long TDTs. Comparisons of the positions of the dive-outcome symbols with the... comparisons for several depth/bottom-time combinations. The three left-hand panels, for dives with short bottom times, show that the crossover point

  6. Combining a Standard Fischer Esterification Experiment with Stereochemical and Molecular-Modeling Concepts

    ERIC Educational Resources Information Center

    Clausen, Thomas P.

    2011-01-01

    The Fischer esterification reaction is ideally suited for the undergraduate organic laboratory because it is easy to carry out and often involves a suitable introduction to basic laboratory techniques including extraction, distillation, and simple spectroscopic (IR and NMR) analyses. Here, a Fischer esterification reaction is described in which the…

  7. Fostering Recursive Thinking in Combinatorics through the Use of Manipulatives and Computing Technology.

    ERIC Educational Resources Information Center

    Abramovich, Sergei; Pieper, Anne

    1996-01-01

    Describes the use of manipulatives for solving simple combinatorial problems which can lead to the discovery of recurrence relations for permutations and combinations. Numerical evidence and visual imagery generated by a computer spreadsheet through modeling these relations can enable students to experience the ease and power of combinatorial…

  8. Semantics of Context-Free Fragments of Natural Languages.

    ERIC Educational Resources Information Center

    Suppes, Patrick

    The objective of this paper is to combine the viewpoint of model-theoretic semantics and generative grammar, to define semantics for context-free languages, and to apply the results to some fragments of natural language. Following the introduction in the first section, Section 2 describes a simple artificial example to illustrate how a semantic…

  9. EVALUATING EFFECTS OF LOW QUALITY HABITATS ON REGIONAL GROWTH IN PEROMYSCUS LEUCOPUS: INSIGHTS FROM FIELD-PARAMETERIZED SPATIAL MATRIX MODELS.

    EPA Science Inventory

    Due to complex population dynamics and source-sink metapopulation processes, animal fitness sometimes varies across landscapes in ways that cannot be deduced from simple density patterns. In this study, we examine spatial patterns in fitness using a combination of intensive fiel...

  10. Difference-Equation/Flow-Graph Circuit Analysis

    NASA Technical Reports Server (NTRS)

    Mcvey, I. M.

    1988-01-01

    Numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. Practiced in a variety of computer languages on large and small computers; for sufficiently simple circuits, programmable hand calculators can be used. Although some combinations of circuit elements make numerical solutions diverge, the technique enables quick identification of divergence and correction of circuit models to make solutions converge.
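
    The flavour of the technique on the simplest possible circuit (the component values are invented): replace the RC network's differential equation with a difference equation and iterate. A time step larger than 2RC makes the iteration diverge, the failure mode the abstract mentions:

      # Difference-equation analysis of a series RC circuit driven by a step V.
      # Forward-Euler update: v[k+1] = v[k] + dt*(V - v[k])/(R*C).
      R, C, V, dt = 1e3, 1e-6, 5.0, 5e-5   # ohms, farads, volts, seconds (illustrative)
      v, t = 0.0, 0.0
      while t < 5 * R * C:
          v += dt * (V - v) / (R * C)
          t += dt
      exact = V * (1 - 2.718281828459045 ** -5)
      print("voltage after 5 time constants: %.3f V (exact %.3f V)" % (v, exact))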

  11. Used Jmol to Help Students Better Understand Fluxional Processes

    ERIC Educational Resources Information Center

    Coleman, William F.; Fedosky, Edward W.

    2006-01-01

    This new WebWare combines instructional text and Jmol interactive, animated illustrations that help students visualize the mechanism. It is concluded that by animating the fluxional behavior of a simple model for chiral metal catalyst Sn(amidinate)[subscript 2], in which axial/equatorial exchange within the amidinate rings occurs through a Berry…

  12. Functional approach to exploring climatic and landscape controls of runoff generation: 1. Behavioral constraints on runoff volume

    NASA Astrophysics Data System (ADS)

    Li, Hong-Yi; Sivapalan, Murugesu; Tian, Fuqiang; Harman, Ciaran

    2014-12-01

    Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, could be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way, provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be "behavioral," in terms of falling sufficiently close to the Budyko curve, describing mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships amongst the "behavioral" climate, soil, and topography parameter combinations, implying these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.
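
    A schematic version of the "behavioral" screening described above, assuming Fu's one-parameter form of the Budyko curve (a common choice; the curve parameter, tolerance and synthetic runoff ratios are invented):

      import numpy as np

      def budyko_fu(aridity, omega=2.6):
          """Fu's curve: evaporative fraction E/P as a function of aridity PET/P."""
          return 1 + aridity - (1 + aridity ** omega) ** (1 / omega)

      rng = np.random.default_rng(2)
      aridity = rng.uniform(0.2, 3.0, 1000)       # hypothetical climate combinations
      runoff_ratio = rng.uniform(0.0, 1.0, 1000)  # simulated Q/P for each combination

      predicted = 1 - budyko_fu(aridity)          # water balance: Q/P = 1 - E/P
      behavioral = np.abs(runoff_ratio - predicted) < 0.1   # 'close enough' tolerance
      print("behavioral fraction: %.2f" % behavioral.mean())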

  13. Combining sprinkling experiments and superconducting gravimetry in the field: a qualitative approach to identify dominant infiltration patterns

    NASA Astrophysics Data System (ADS)

    Reich, Marvin; Mikolaj, Michal; Blume, Theresa; Güntner, Andreas

    2017-04-01

    Hydrological process research at the plot to catchment scale commonly involves invasive field methods, leading to a large amount of point data. A promising alternative, which has gained increasing interest in the hydrological community in recent years, is gravimetry. The combination of its non-invasive and integrative nature opens up new possibilities for hydrological process research. In this study we combine a field-scale sprinkling experiment with continuous superconducting gravity (SG) measurements. The experimental design consists of 8 sprinkler units, arranged symmetrically within a radius of about ten meters around an iGrav (SG) in a field enclosure. The gravity signal of the infiltrating sprinkling water is analyzed using a simple 3D water mass distribution model. We first conducted a number of virtual sprinkling experiments resulting in different idealized infiltration patterns and determined the pattern-specific gravity response. In a next step we determined which combination of idealized infiltration patterns was able to reproduce the gravity response of our real-world experiment at the Wettzell Observatory (Germany). This process hypothesis is then evaluated with measured point-scale soil moisture responses and the results of the time-lapse electrical resistivity survey which was carried out during the sprinkling experiment. This study demonstrates that a controlled sprinkling experiment around a gravimeter in combination with a simple infiltration model is sufficient to identify subsurface flow patterns and thus the dominant infiltration processes. As gravimeters become more portable and can actually be deployed in the field, their combination with sprinkling experiments as shown here constitutes a promising possibility to investigate hydrological processes in a non-invasive way.
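
    A sketch of the kind of simple 3D water mass distribution forward model mentioned above (the geometry and numbers are invented): discretize the infiltrated water into point masses and sum their vertical Newtonian attraction at the gravimeter:

      import numpy as np

      G = 6.674e-11                                   # m^3 kg^-1 s^-2

      def delta_g(cells, masses, sensor=np.zeros(3)):
          """Vertical gravity change at `sensor` from point masses at `cells` (N x 3, m)."""
          d = cells - sensor
          r = np.linalg.norm(d, axis=1)
          return np.sum(G * masses * d[:, 2] / r ** 3)   # z component of the attraction

      # Hypothetical infiltration pattern: 10 mm of water spread over a 10 m radius
      # disc, sitting 0.5 m below a sensor at the origin.
      xs, ys = np.meshgrid(np.linspace(-10, 10, 81), np.linspace(-10, 10, 81))
      inside = xs ** 2 + ys ** 2 <= 10.0 ** 2
      cells = np.column_stack([xs[inside], ys[inside], np.full(inside.sum(), -0.5)])
      cell_area = 0.25 ** 2                           # grid spacing squared (m^2)
      masses = np.full(len(cells), 1000.0 * 0.010 * cell_area)  # rho * depth * area (kg)

      # Mass below the sensor pulls downward, i.e. measured gravity increases.
      print("gravity increase: %.2f nm/s^2" % (abs(delta_g(cells, masses)) * 1e9))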

  14. Active disturbance rejection controller for chemical reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Both, Roxana; Dulf, Eva H.; Muresan, Cristina I., E-mail: roxana.both@aut.utcluj.ro

    2015-03-10

    In the petrochemical industry, the synthesis of 2-ethyl-hexanol oxo-alcohols (plasticizer alcohols) is of high importance, being achieved through hydrogenation of 2-ethyl-hexenal inside catalytic trickle-bed three-phase reactors. For this type of process the use of advanced control strategies is suitable due to the nonlinear behavior and extreme sensitivity to load changes and other disturbances. Due to the complexity of the mathematical model, one approach is to use a simple linear model of the process in combination with an advanced control algorithm, such as robust control, which takes into account the model uncertainties, the disturbances and command signal limitations. However, the resulting controller is complex, involving costly hardware. This paper proposes a simple integer-order control scheme using a linear model of the process, based on the active disturbance rejection method. By treating the model dynamics as a common disturbance and actively rejecting it, active disturbance rejection control (ADRC) can achieve the desired response. Simulation results are provided to demonstrate the effectiveness of the proposed method.

  15. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
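
    A one-variable sketch of the idea (the two response functions are invented stand-ins for a refined and a crude structural model): scale the crude model by the ratio of refined to crude responses at the design point, and let that scaling factor vary linearly using its derivative:

      import numpy as np

      # Invented stand-ins for a refined (expensive) and a crude (cheap) response.
      f_refined = lambda x: np.exp(-x) / (1 + x)
      f_crude = lambda x: 1 - x

      x0, h = 0.0, 1e-6
      beta0 = f_refined(x0) / f_crude(x0)          # conventional constant scaling factor
      dbeta = (f_refined(x0 + h) / f_crude(x0 + h) - beta0) / h   # its slope at x0

      def gla(x):
          """Global-local approximation: linearly varying scale on the crude model."""
          return (beta0 + dbeta * (x - x0)) * f_crude(x)

      for x in (0.2, 0.5):
          print("x=%.1f  GLA %.3f  refined %.3f  constant-scale %.3f"
                % (x, gla(x), f_refined(x), beta0 * f_crude(x)))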

  16. Action at a Distance in the Cell's Nucleus

    NASA Astrophysics Data System (ADS)

    Kondev, Jane

    Various functions performed by chromosomes involve long-range communication between DNA sequences that are tens of thousands of bases apart along the genome, and microns apart in the nucleus. In this talk I will discuss experiments and theory relating to two distinct modes of long-range communication in the nucleus, chromosome looping and protein hopping along the chromosome, both in the context of DNA-break repair in yeast. Yeast is an excellent model system for studies that link chromosome conformations to their function as there is ample experimental evidence that yeast chromosome conformations are well described by a simple, random-walk polymer model. Using a combination of polymer physics theory and experiments on yeast cells, I will demonstrate that loss of polymer entropy due to chromosome looping is the driving force for homology search during repair of broken DNA by homologous recombination. I will also discuss the spread of histone modifications along the chromosome and away from the DNA break point in the context of simple physics models based on chromosome looping and kinase hopping, and show how combining physics theory and cell-biology experiment can be used to dissect the molecular mechanism of the spreading process. These examples demonstrate how combined theoretical and experimental studies can reveal physical principles of long-range communication in the nucleus, which play important roles in regulation of gene expression, DNA recombination, and chromatin modification. This work was supported by the NSF DMR-1206146.

  17. Testing particle filters on convective scale dynamics

    NASA Astrophysics Data System (ADS)

    Haslehner, Mylene; Craig, George. C.; Janjic, Tijana

    2014-05-01

    Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and Würsch, M., 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and Craig, G. C., 2013: A simple dynamical model of cumulus convection for data assimilation research. Submitted to Met. Zeitschrift.
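
    A compact sketch of the SIR baseline with the nudging step bolted on, applied to a generic 1D stochastic model rather than the cloud or shallow-water models above (the weight correction that would make the nudged scheme strictly Bayesian is omitted for brevity):

      import numpy as np

      rng = np.random.default_rng(3)
      n_part, n_steps, obs_err, nudge = 100, 50, 0.5, 0.1

      truth, particles = 0.0, np.zeros(n_part)
      for _ in range(n_steps):
          truth = 0.9 * truth + rng.normal(0, 1)                 # 'true' stochastic model
          y = truth + rng.normal(0, obs_err)                     # noisy observation
          particles = 0.9 * particles + rng.normal(0, 1, n_part) # forecast each particle
          particles += nudge * (y - particles)                   # nudge toward observation
          w = np.exp(-0.5 * ((y - particles) / obs_err) ** 2)    # likelihood weights
          w /= w.sum()
          particles = particles[rng.choice(n_part, n_part, p=w)] # SIR resampling
      print("truth %.2f, ensemble mean %.2f" % (truth, particles.mean()))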

  18. A simple electrical-mechanical model of the heart applied to the study of electrical-mechanical alternans

    NASA Technical Reports Server (NTRS)

    Clancy, Edward A.; Smith, Joseph M.; Cohen, Richard J.

    1991-01-01

    Recent evidence has shown that a subtle alternation in the surface ECG (electrical alternans) may be correlated with susceptibility to ventricular fibrillation. In the present work, the authors present evidence that a mechanical alternation in the heartbeat (mechanical alternans) generally accompanies electrical alternans. A simple finite-element computer model which emulates both the electrical and the mechanical activity of the heart is presented. A pilot animal study is also reported. The computer model and the animal study both found that (1) there exists a regime of combined electrical-mechanical alternans during the transition from a normal rhythm towards a fibrillatory rhythm, (2) the detected degree of alternation is correlated with the relative instability of the rhythm, and (3) the electrical and mechanical alternans may result from a dispersion in local electrical properties leading to a spatial-temporal alternation in the electrical conduction process.

  19. Ultimate Longitudinal Strength of Composite Ship Hulls

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen

    2017-01-01

    A simple analytical model to estimate the longitudinal strength of ship hulls in composite materials under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of the plate-stiffener combination is predicted under buckling or material failure with composite beam-column theory. The effects of initial imperfection of the ship hull and eccentricity of load are included. Corresponding longitudinal strengths of the ship hull are derived in a straightforward method. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical results agree well with the FEM method. The initial deflection of the ship hull and eccentricity of load can dramatically reduce the bending capacity of the ship hull. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.

  20. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

    The objective is to develop analytical tools capable of economically evaluating the cyclic time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures, with all material properties and constitutive models being temperature dependent.

  1. The Evolution of Transition Region Loops Using IRIS and AIA

    NASA Technical Reports Server (NTRS)

    Winebarger, Amy R.; DePontieu, Bart

    2014-01-01

    Over the past 50 years, the model for the structure of the solar transition region has evolved from a simple transition layer between the cooler chromosphere and the hotter corona to a complex and diverse region that is dominated by complete loops that never reach coronal temperatures. The IRIS slitjaw images show many complete transition region loops. Several of the "coronal" channels in the SDO AIA instrument include contributions from weak transition region lines. In this work, we combine slitjaw images from IRIS with these channels to determine the evolution of the loops. We develop a simple model for the temperature and density evolution of the loops that can explain the simultaneous observations. Finally, we estimate the percentage of AIA emission that originates in the transition region.

  2. Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters

    NASA Astrophysics Data System (ADS)

    Wetterer, C.; Sheppard, D.; Hunt, B.

    The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is to have a measurement function that accurately predicts a space object's brightness for variations in the parameters of interest. These parameters include changes in attitude and articulations of particular components (e.g. solar panel east-west offsets to direct sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light curve modeling capability. To achieve precise orbit predictions with the inclusion of shape/surface-dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all those parameters simultaneously because changes in light curve features can now be explained by variations in a number of different properties. The classic example of this is the connection between the albedo and the area of a surface. If, however, the desire is to extract information about a single and specific parameter or feature from the light curve, a simple shape/surface model can be used. This paper details an example of this where a complex model is used to create simulated light curves, and then a simple model is used in an estimation filter to extract a particular feature of interest. For this to be successful, however, the simple model must first be constructed using training data where the feature of interest is known or at least known to be constant.
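
    The flavour of such a simple measurement function (the geometry, reflectance model and angles are deliberately crude inventions): a diffuse flat plate whose brightness depends on a single attitude parameter, which a filter can then estimate by matching predicted to observed flux:

      import numpy as np

      def brightness(theta, albedo=0.3, area=5.0):
          """Diffuse flat plate: flux ~ albedo * area * cos(sun angle) * cos(view angle).
          theta is the single attitude parameter (plate tilt, radians)."""
          sun, obs = 0.3, -0.2                  # fixed illumination/view angles (rad)
          return (albedo * area * max(0.0, np.cos(theta - sun))
                  * max(0.0, np.cos(theta - obs)))

      # A filter's measurement update compares predicted and observed brightness;
      # a brute-force grid search stands in for the estimator here.
      observed = 1.31
      grid = np.linspace(-1.5, 1.5, 301)
      best = grid[np.argmin([(brightness(t) - observed) ** 2 for t in grid])]
      print("attitude estimate: %.2f rad" % best)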

  3. A Simple Model of Pulsed Ejector Thrust Augmentation

    NASA Technical Reports Server (NTRS)

    Wilson, Jack; Deloof, Richard L. (Technical Monitor)

    2003-01-01

    A simple model of thrust augmentation from a pulsed source is described. In the model it is assumed that the flow into the ejector is quasi-steady, and can be calculated using potential flow techniques. The velocity of the flow is related to the speed of the starting vortex ring formed by the jet. The vortex ring properties are obtained from the slug model, knowing the jet diameter, speed and slug length. The model, when combined with experimental results, predicts an optimum ejector radius for thrust augmentation. Data on pulsed ejector performance for comparison with the model was obtained using a shrouded Hartmann-Sprenger tube as the pulsed jet source. A statistical experiment, in which ejector length, diameter, and nose radius were independent parameters, was performed at four different frequencies. These frequencies corresponded to four different slug length to diameter ratios, two below cut-off, and two above. Comparison of the model with the experimental data showed reasonable agreement. Maximum pulsed thrust augmentation is shown to occur for a pulsed source with slug length to diameter ratio equal to the cut-off value.

  4. Autoradiography and Immunofluorescence Combined for Autecological Study of Single Cell Activity with Nitrobacter as a Model System1

    PubMed Central

    Fliermans, C. B.; Schmidt, E. L.

    1975-01-01

    Specific detection of a particular bacterium by immunofluorescence was combined with estimation of its metabolic activity by autoradiography. The nitrifying bacteria Nitrobacter agilis and N. winogradskyi were used as a model system. Nitrobacter were incubated with NaH14CO3 and 14CO2 prior to study. The same preparations made for autoradiograms were stained with fluorescent antibodies specific for the Nitrobacter species. Examination by epifluorescence and transmitted dark-field microscopy revealed Nitrobacter cells with and without associated silver grains. Direct detection and simultaneous evaluation of metabolic activity of Nitrobacter was demonstrated in pure cultures, in a simple mixed culture, and in a natural soil. PMID:1103733

  5. A Behavior-Based Circuit Model of How Outcome Expectations Organize Learned Behavior in Larval "Drosophila"

    ERIC Educational Resources Information Center

    Schleyer, Michael; Saumweber, Timo; Nahrendorf, Wiebke; Fischer, Benjamin; von Alpen, Desiree; Pauls, Dennis; Thum, Andreas; Gerber, Bertram

    2011-01-01

    Drosophila larvae combine a numerically simple brain, a correspondingly moderate behavioral complexity, and the availability of a rich toolbox for transgenic manipulation. This makes them attractive as a study case when trying to achieve a circuit-level understanding of behavior organization. From a series of behavioral experiments, we suggest a…

  6. Simple additive effects are rare: A quantitative review of plant biomass and soil process responses to combined manipulations of CO2 and temperature

    USDA-ARS?s Scientific Manuscript database

    In recent years, increased awareness of the potential interactions between rising atmospheric CO2 concentrations ([CO2]) and temperature has illustrated the importance of multi-factorial ecosystem manipulation experiments for validating Earth System models. To address the urgent need for increased u...

  7. Development of feedforward receptive field structure of a simple cell and its contribution to the orientation selectivity: a modeling study.

    PubMed

    Garg, Akhil R; Obermayer, Klaus; Bhaumik, Basabi

    2005-01-01

    Recent experimental studies of hetero-synaptic interactions in various systems have shown the role of signaling in plasticity, challenging the conventional understanding of Hebb's rule. It has also been found that activity plays a major role in plasticity, with neurotrophins acting as molecular signals translating activity into structural changes. Furthermore, the role of synaptic efficacy in biasing the outcome of competition has also been revealed recently. Motivated by these experimental findings, we present a model for the development of simple cell receptive field structure based on competitive hetero-synaptic interactions for neurotrophins combined with cooperative hetero-synaptic interactions in the spatial domain. We find that with a proper balance of competition and cooperation, the inputs from the two populations (ON/OFF) of LGN cells segregate starting from the homogeneous state. We obtain segregated ON and OFF regions in the simple cell receptive field. Our modeling study supports the experimental findings, suggesting the role of synaptic efficacy and the role of spatial signaling. We find that using this model we obtain simple cell RFs even for positively correlated activity of ON/OFF cells. We also compare different mechanisms of computing the response of a cortical cell and study their possible roles in the sharpening of orientation selectivity. We find that the degree of selectivity improvement in individual cells varies from case to case depending upon the structure of the RF and the type of sharpening mechanism.

  8. Understanding the complex dynamics of stock markets through cellular automata

    NASA Astrophysics Data System (ADS)

    Qiu, G.; Kandhai, D.; Sloot, P. M. A.

    2007-04-01

    We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
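
    A small sketch of such a lattice of fundamentalists and imitators with a simple price-update rule (the rules and constants are invented for illustration, not the paper's exact specification):

      import numpy as np

      rng = np.random.default_rng(4)
      n, steps = 50, 200                       # lattice size, time steps
      imitator = rng.random((n, n)) < 0.7      # 70% imitators, 30% fundamentalists
      state = rng.choice([-1, 1], (n, n))      # -1 sell, +1 buy
      price, fundamental = 100.0, 100.0

      returns = []
      for _ in range(steps):
          neigh = sum(np.roll(state, s, axis=a) for s in (-1, 1) for a in (0, 1))
          # Imitators copy the local majority; fundamentalists trade against mispricing.
          state = np.where(imitator, np.sign(neigh + 0.01),
                           np.where(price > fundamental, -1, 1))
          r = 0.001 * state.sum() / n ** 2 + rng.normal(0, 1e-4)  # excess demand -> return
          price *= np.exp(r)
          returns.append(r)
      print("return std: %.5f" % np.std(returns))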

  9. Adequate model complexity for scenario analysis of VOC stripping in a trickling filter.

    PubMed

    Vanhooren, H; Verbrugge, T; Boeije, G; Demey, D; Vanrolleghem, P A

    2001-01-01

    Two models describing the stripping of volatile organic contaminants (VOCs) in an industrial trickling filter system are developed. The aim of the models is to investigate the effect of different operating conditions (VOC loads and air flow rates) on the efficiency of VOC stripping and the resulting concentrations in the gas and liquid phases. The first model uses the same principles as the steady-state non-equilibrium activated sludge model Simple Treat, in combination with an existing biofilm model. The second model is a simple mass balance based model only incorporating air and liquid and thus neglecting biofilm effects. In a first approach, the first model was incorporated in a five-layer hydrodynamic model of the trickling filter, using the carrier material design specifications for porosity, water hold-up and specific surface area. A tracer test with lithium was used to validate this approach, and the gas mixing in the filters was studied using continuous CO2 and O2 measurements. With the tracer test results, the biodegradation model was adapted, and it became clear that biodegradation and adsorption to solids can be neglected. On this basis, a simple dynamic mass balance model was built. Simulations with this model reveal that changing the air flow rate in the trickling filter system has little effect on the VOC stripping efficiency at steady state. However, immediately after an air flow rate change, quite high flux and concentration peaks of VOCs can be expected. These phenomena are of major importance for the design of an off-gas treatment facility.
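
    The skeleton of the second, air-and-liquid-only model (a single well-mixed two-compartment mass balance with Henry's-law stripping; all flows, volumes and coefficients below are invented):

      # Two-compartment (liquid + gas) VOC mass balance, integrated to steady state.
      Q_l, Q_g, V_l, V_g = 10.0, 100.0, 5.0, 20.0   # m3/h liquid & air flows, volumes
      H, kLa = 0.2, 5.0                             # Henry coefficient (-), transfer (1/h)
      c_in, c_l, c_g, dt = 1.0, 0.0, 0.0, 0.001     # feed conc., states, time step (h)

      for _ in range(20000):                        # 20 h of simulated operation
          transfer = kLa * (c_l - c_g / H) * V_l    # liquid -> gas flux
          c_l += dt * (Q_l * (c_in - c_l) - transfer) / V_l
          c_g += dt * (transfer - Q_g * c_g) / V_g
      print("fraction stripped at steady state: %.2f" % (1 - c_l / c_in))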

  10. Plate and butt-weld stresses beyond elastic limit, material and structural modeling

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1991-01-01

    Ultimate safety factors of high performance structures depend on stress behavior beyond the elastic limit, a region not too well understood. An analytical modeling approach was developed to gain fundamental insights into inelastic responses of simple structural elements. Nonlinear material properties were expressed in engineering stresses and strains variables and combined with strength of material stress and strain equations similar to numerical piece-wise linear method. Integrations are continuous which allows for more detailed solutions. Included with interesting results are the classical combined axial tension and bending load model and the strain gauge conversion to stress beyond the elastic limit. Material discontinuity stress factors in butt-welds were derived. This is a working-type document with analytical methods and results applicable to all industries of high reliability structures.

  11. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  12. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
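
    The shape of such a sub-model chain, plane-of-array irradiance to cell temperature to DC power to AC power, in a few lines (the coefficients are generic illustrative values, not the documented PVWatts parameters):

      def pv_power(poa, t_amb, p_dc0=4000.0, gamma=-0.005, inv_eff=0.92, derate=0.9):
          """Chain of simple sub-models: irradiance -> cell temp -> DC -> AC (sketch)."""
          t_cell = t_amb + 0.03 * poa               # crude cell-temperature model (deg C)
          p_dc = derate * p_dc0 * (poa / 1000.0) * (1 + gamma * (t_cell - 25.0))
          return max(0.0, p_dc * inv_eff)           # fixed inverter efficiency

      print("AC watts at 800 W/m2, 20 C ambient: %.0f" % pv_power(800.0, 20.0))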

  13. Numerical simulation model of hyperacute/acute stage white matter infarction.

    PubMed

    Sakai, Koji; Yamada, Kei; Oouchi, Hiroyuki; Nishimura, Tsunehiko

    2008-01-01

    Although previous studies have revealed the mechanisms of changes in diffusivity (apparent diffusion coefficient [ADC]) in acute brain infarction, changes in diffusion anisotropy (fractional anisotropy [FA]) in white matter have not been examined. We hypothesized that membrane permeability as well as axonal swelling play important roles, and we therefore constructed a simulation model using random walk simulation to replicate the diffusion of water molecules. We implemented a numerical diffusion simulation model of normal and infarcted human brains in C++. We constructed this 2-pool model using simple tubes aligned in a single direction. Random walk simulation diffused the water. Axon diameters and membrane permeability were then altered in step-wise fashion. To estimate the effects of axonal swelling, axon diameters were changed from 6 to 10 μm. Membrane permeability was altered from 0% to 40%. Finally, both elements were combined to explain the increasing FA in the hyperacute stage of white matter infarction. The simulation demonstrated that a simple water shift into the intracellular space reduces ADC and increases FA, but not to the extent expected from actual human cases (ADC approximately 50%; FA approximately +20%). Similarly, membrane permeability alone was insufficient to explain this phenomenon. However, a combination of both factors successfully replicated the changes in diffusivity indices. Both axonal swelling and reduced membrane permeability appear important in explaining changes in ADC and FA based on eigenvalues in hyperacute-stage white matter infarction.
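
    A 1D skeleton of the random-walk ingredient (walker counts, step size and permeability are invented): walkers inside an 'axon' attempt fixed-length steps, and a step that would cross the membrane succeeds only with a small probability; restriction and low permeability both depress the apparent diffusivity:

      import numpy as np

      rng = np.random.default_rng(6)
      n, steps, dx = 20000, 400, 0.2          # walkers, time steps, step size (um)
      radius, p_cross = 4.0, 0.1              # axon half-width (um), membrane permeability

      x = rng.uniform(-radius, radius, n)     # start inside the 'axon'
      pos = x.copy()
      for _ in range(steps):
          trial = pos + rng.choice([-dx, dx], n)
          crossing = np.abs(trial) > radius
          allowed = ~crossing | (rng.random(n) < p_cross)   # membrane lets 10% through
          pos = np.where(allowed, trial, pos)

      # ADC ~ mean squared displacement / (2 t); lower permeability lowers it.
      print("apparent diffusivity (arb. units): %.4f"
            % (np.mean((pos - x) ** 2) / (2 * steps)))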

  14. A stacking ensemble learning framework for annual river ice breakup dates

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Trevor, Bernard

    2018-06-01

    River ice breakup dates (BDs) are not merely a proxy indicator of climate variability and change, but a direct concern in the management of local ice-caused flooding. A stacking ensemble learning framework for annual river ice BDs was developed, with two levels of components: member and combining models. The member models described the relations between BDs and their affecting indicators; the combining models linked the BDs predicted by each member model with the observed BDs. Specifically, Bayesian regularization back-propagation artificial neural networks (BRANN) and adaptive neuro-fuzzy inference systems (ANFIS) were employed as both member and combining models. The candidate combining models also included the simple average method (SAM). The input variables for the member models were selected by a hybrid filter and wrapper method. The performances of these models were examined using leave-one-out cross validation. As the largest unregulated river in Alberta, Canada, with ice jams frequently occurring in the vicinity of Fort McMurray, the Athabasca River at Fort McMurray was selected as the study area. Breakup dates and candidate affecting indicators for 1980-2015 were collected. The results showed that the BRANN member models generally outperformed the ANFIS member models, with better performance and simpler structures. The difference between the R and MI rankings of inputs in the optimal member models may imply that the linear-correlation-based filter method is feasible for generating a range of candidate inputs for further screening through other wrapper or embedded input variable selection methods. The SAM and BRANN combining models generally outperformed all member models. The optimal SAM combining model combined two BRANN member models and improved upon them in terms of average squared errors by 14.6% and 18.1%, respectively. In this study, for the first time, stacking ensemble learning was applied to forecasting of river ice breakup dates, and it appears promising for other river ice forecasting problems.
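
    The two-level structure in miniature, on synthetic data (least-squares linear members stand in for the BRANN/ANFIS models; the simple-average combiner corresponds to the SAM above):

      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.normal(size=(36, 3))                  # 36 'years' of candidate indicators
      y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.3, 36)  # breakup-date proxy

      def fit_lin(X, y):                            # least-squares member model
          A = np.column_stack([X, np.ones(len(X))])
          return np.linalg.lstsq(A, y, rcond=None)[0]

      def predict(w, X):
          return np.column_stack([X, np.ones(len(X))]) @ w

      # Level 1: two member models trained on different indicator subsets.
      w1, w2 = fit_lin(X[:, :2], y), fit_lin(X[:, 1:], y)
      P = np.column_stack([predict(w1, X[:, :2]), predict(w2, X[:, 1:])])

      # Level 2: combining model links member predictions to the observed dates.
      # (Simple-average combiner shown; a regression combiner would fit weights.)
      combined = P.mean(axis=1)
      for name, pred in (("member 1", P[:, 0]), ("member 2", P[:, 1]), ("SAM", combined)):
          print(name, "MSE %.3f" % np.mean((pred - y) ** 2))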

  15. AN INFORMATION-THEORETIC APPROACH TO OPTIMIZE JWST OBSERVATIONS AND RETRIEVALS OF TRANSITING EXOPLANET ATMOSPHERES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howe, Alex R.; Burrows, Adam; Deming, Drake, E-mail: arhowe@umich.edu, E-mail: burrows@astro.princeton.edu, E-mail: ddeming@astro.umd.edu

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.

  16. Modelling fluid accumulation in the neck using simple baseline fluid metrics: implications for sleep apnea.

    PubMed

    Vena, Daniel; Yadollahi, A; Bradley, T Douglas

    2014-01-01

    Obstructive sleep apnea (OSA) is a common respiratory disorder among adults. Recently we have shown that a sedentary lifestyle causes an increase in diurnal leg fluid volume (LFV), which can shift into the neck at night when lying down to sleep and increase OSA severity. The purpose of this work was to investigate various metrics that represent baseline fluid retention in the legs, examine their correlation with neck fluid volume (NFV), and develop a robust model for predicting fluid accumulation in the neck. In 13 healthy, awake, non-obese men, LFV and NFV were recorded continuously and simultaneously while standing for 5 minutes and then lying supine for 90 minutes. Simple regression was used to examine correlations between baseline LFV, baseline neck circumference (NC) and change in LFV, and the outcome variables: change in NC (ΔNC) and in NFV (ΔNFV90) after lying supine for 90 minutes. An exhaustive grid search was implemented to find the combinations of input variables which best modeled the outcomes. We found strong positive correlations between baseline LFV (supine and standing) and ΔNFV90. Models developed for predicting ΔNFV90 included baseline standing LFV and baseline NC combined with the change in LFV after lying supine for 90 minutes. These correlations and the developed models suggest that a greater baseline LFV might contribute to increased fluid accumulation in the neck. These results provide further evidence that a sedentary lifestyle might play a role in the pathogenesis of OSA by increasing baseline LFV. The best models for predicting ΔNC include baseline LFV and NC; they improved the accuracy of estimating ΔNC over individual predictors, suggesting that a combination of baseline fluid metrics is a good predictor of the change in NC while lying supine. Future work is aimed at adding baseline demographic features to improve model accuracy and eventually using the model as a screening tool to predict the severity of OSA prior to sleep.

  17. Graphical function mapping as a new way to explore cause-and-effect chains

    USGS Publications Warehouse

    Evans, Mary Anne

    2016-01-01

    Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
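
    In code, function mapping is just composition: project the outcome of one relation through the next to get the combined effect end to end (the two curves below are invented for illustration):

      import numpy as np

      # Example chain: nutrient load -> algal biomass -> oxygen deficit (shapes invented).
      biomass = lambda load: load ** 2 / (1 + load ** 2)   # saturating response
      deficit = lambda b: 8.0 * b                          # linear impact

      load = np.linspace(0, 3, 7)
      print(np.round(deficit(biomass(load)), 2))           # combined effect, end to end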

  18. Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments

    NASA Astrophysics Data System (ADS)

    Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; de Zeeuw, Chris I.

    2016-11-01

    Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity.

  19. Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments

    PubMed Central

    Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; De Zeeuw, Chris I.

    2016-01-01

    Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity. PMID:27805050

  20. Less-simplified models of dark matter for direct detection and the LHC

    NASA Astrophysics Data System (ADS)

    Choudhury, Arghya; Kowalska, Kamila; Roszkowski, Leszek; Sessolo, Enrico Maria; Williams, Andrew J.

    2016-04-01

    We construct models of dark matter with suppressed spin-independent scattering cross section utilizing the existing simplified model framework. Even simple combinations of simplified models can exhibit interference effects that cause the tree level contribution to the scattering cross section to vanish, thus demonstrating that direct detection limits on simplified models are not robust when embedded in a more complicated and realistic framework. In general for fermionic WIMP masses ≳ 10 GeV direct detection limits on the spin-independent scattering cross section are much stronger than those coming from the LHC. However these model combinations, which we call less-simplified models, represent situations where LHC searches become more competitive than direct detection experiments even for moderate dark matter mass. We show that a complementary use of several searches at the LHC can strongly constrain the direct detection blind spots by setting limits on the coupling constants and mediators' mass. We derive the strongest limits for combinations of vector + scalar, vector + "squark", and "squark" + scalar mediator, and present the corresponding projections for the LHC 14 TeV for a number of searches: mono-jet, jets + missing energy, and searches for heavy vector resonances.

  1. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2015-10-01

    A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE of 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical devices offers a novel method for intraoperative QA. The method provides a near-real-time independent check against pedicle breach, facilitating revision within the same procedure if necessary and providing more rigorous verification of the surgical product.

  2. HIA, the next step: Defining models and roles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putters, Kim

    If HIA is to be an effective instrument for optimising health interests in the policy making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple with clear causal pathways and responsibilities, or complex with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions, taking steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  3. Derivation of flood frequency curves in poorly gauged Mediterranean catchments using a simple stochastic hydrological rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Aronica, G. T.; Candela, A.

    2007-12-01

    In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one with a limited number of parameters; it requires practically no calibration, making it a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing relaxation of the classical iso-frequency assumption between rainfall and peak flow. The procedure is tested on six practical case studies in which synthetic FFCs (flood frequency curves) were obtained from the model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with AMC (antecedent moisture conditions). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
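
    A minimal numerical sketch of the Monte Carlo chain described above, with a Gumbel storm-depth distribution standing in for the regional TCEV model, a randomly varied curve number standing in for the probabilistic AMC treatment, and a hypothetical linear routing step; every parameter value is illustrative rather than the paper's regional estimate:

        import numpy as np

        rng = np.random.default_rng(42)
        N = 5000                                     # synthetic events, as in the paper

        # Stand-in for the TCEV storm-depth distribution (hypothetical Gumbel here)
        depth = rng.gumbel(loc=40.0, scale=15.0, size=N)        # rainfall depth, mm

        # Prior-to-storm moisture: sample the curve number around a nominal value
        cn = np.clip(rng.normal(75.0, 8.0, size=N), 40.0, 98.0)
        s = 25400.0 / cn - 254.0                                # potential retention, mm

        # SCS-CN effective rainfall (initial abstraction Ia = 0.2 S)
        ia = 0.2 * s
        pe = np.where(depth > ia, (depth - ia) ** 2 / (depth + 0.8 * s), 0.0)

        # Hypothetical linear routing: peak flow proportional to effective rainfall
        q_peak = 1.8 * pe                                       # m^3/s, illustrative

        # Empirical flood frequency curve via Weibull plotting positions
        q_sorted = np.sort(q_peak)[::-1]
        return_period = (N + 1) / np.arange(1, N + 1)           # rank 1 = largest flood
        for T in (10, 50, 100):
            q_T = np.interp(T, return_period[::-1], q_sorted[::-1])
            print(f"T = {T:>3d} yr : Q ~ {q_T:.1f} m^3/s")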

  4. Matrix Fatigue Cracking Mechanisms of Alpha(2) TMC for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gabb, Timothy P.; Gayda, John

    1994-01-01

    The objective of this work was to understand matrix cracking mechanisms in a unidirectional α₂ TMC in possible hypersonic applications. A [0]₈ SCS-6/Ti-24Al-11Nb (at. percent) TMC was first subjected to a variety of simple isothermal and nonisothermal fatigue cycles to evaluate the damage mechanisms in simple conditions. A modified ascent mission cycle test was then performed to evaluate the combined effects of loading modes. This cycle mixes mechanical cycling at 150 and 483 °C, sustained loads, and a slow thermal cycle to 815 °C. At low cyclic stresses and strains more common in hypersonic applications, environment-assisted surface cracking limited fatigue resistance. This damage mechanism was most acute for out-of-phase nonisothermal cycles having extended cycle periods and the ascent mission cycle. A simple linear fraction damage model was employed to help understand this damage mechanism. Time-dependent environmental damage was found to strongly influence out-of-phase and mission life, with mechanical cycling damage due to the combination of external loading and CTE mismatch stresses playing a smaller role. The mechanical cycling and sustained loads in the mission cycle also had a smaller role.

  5. Cost effectiveness of a pharmacist-led information technology intervention for reducing rates of clinically important errors in medicines management in general practices (PINCER).

    PubMed

    Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J

    2014-06-01

    We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with a cost per error avoided of £79 (US$131). We aimed to estimate the cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches a 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
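
    The headline ICER arithmetic can be reproduced from the figures quoted above; note that the ratio of the rounded inputs (-£2,679 / 0.81) comes out near -£3,307, so the quoted -£3,037 presumably reflects unrounded underlying values. A worked sketch, including the net-monetary-benefit decision rule at the £20,000/QALY ceiling:

        # ICER = (cost_intervention - cost_comparator) / (QALY_intervention - QALY_comparator)
        delta_cost = -2679.0   # PINCER saves £2,679 per practice (deterministic analysis)
        delta_qaly = 0.81      # PINCER gains 0.81 QALYs per practice

        icer = delta_cost / delta_qaly
        print(f"ICER ~ £{icer:,.0f} per QALY")   # ~ -£3,307 from these rounded inputs;
                                                 # the abstract quotes -£3,037

        # Decision rule at willingness-to-pay threshold L: adopt if the net
        # monetary benefit NMB = L * dQALY - dCost is positive
        wtp = 20000.0
        nmb = wtp * delta_qaly - delta_cost
        print(f"NMB at £20,000/QALY: £{nmb:,.0f}")   # positive, so PINCER is preferred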

  6. Simple models for studying complex spatiotemporal patterns of animal behavior

    NASA Astrophysics Data System (ADS)

    Tyutyunov, Yuri V.; Titova, Lyudmila I.

    2017-06-01

    Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial differential equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with explicit description of decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges due to self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites or mechanical waves of the medium, e.g., sound). These cues play the role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and the properties of the movement stimulus we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a little more detailed 'hybrid' approach that combines simulation of the individual movements with the continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.

  7. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE PAGES

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    2018-01-09

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  8. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
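
    One plausible reading of the extended radiosity formulation is a linear system in which the subsurface scattering matrix is simply added to the reflectance-weighted form factor matrix before the iterative solve. The toy scene, reflectances and subsurface matrix below are all invented for illustration; the paper's matrix would come from analytic, measurement-based or simulation-based translucency models:

        import numpy as np

        n = 4                                       # patches in a toy scene
        rng = np.random.default_rng(0)

        E = np.array([1.0, 0.0, 0.0, 0.0])          # emission: one light patch
        F = rng.random((n, n)); np.fill_diagonal(F, 0.0)
        F /= F.sum(axis=1, keepdims=True)            # row-normalised form factors
        rho = np.diag([0.0, 0.6, 0.5, 0.7])          # diffuse reflectances

        # Hypothetical subsurface-scattering matrix: here just a weak
        # self-coupling term standing in for the paper's measured S
        S = 0.1 * np.eye(n)

        K = rho @ F + S                              # combined transport operator
        B = E.copy()
        for _ in range(100):                         # Jacobi-style iterative radiosity
            B = E + K @ B
        print(B)                                     # converges: spectral radius of K < 1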

  9. Higher-Order Extended Lagrangian Born-Oppenheimer Molecular Dynamics for Classical Polarizable Models.

    PubMed

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N

    2018-02-13

    Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  10. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  11. X-DRAIN and XDS: a simplified road erosion prediction method

    Treesearch

    William J. Elliot; David E. Hall; S. R. Graves

    1998-01-01

    To develop a simple road sediment delivery tool, the WEPP program modeled sedimentation from forest roads for more than 50,000 combinations of distance between cross drains, road gradient, soil texture, distance from stream, steepness of the buffer between the road and the stream, and climate. The sediment yield prediction from each of these runs was stored in a data...

  12. The application of finite volume methods for modelling three-dimensional incompressible flow on an unstructured mesh

    NASA Astrophysics Data System (ADS)

    Lonsdale, R. D.; Webster, R.

    This paper demonstrates the application of a simple finite volume approach to a finite element mesh, combining the economy of the former with the geometrical flexibility of the latter. The procedure is used to model a three-dimensional flow on a mesh of linear eight-node bricks (hexahedra). Simulations are performed for a wide range of flow problems, some involving in excess of 94,000 nodes. The resulting computer code, ASTEC, which incorporates these procedures, is described.

  13. Applicability of Existing C3 (Command, Control and Communications) Vulnerability and Hardness Analyses to Sentry System Issues.

    DTIC Science & Technology

    1983-01-13

    Naval Ordnance Systems Command codes are detailed propagation simulations, mostly at lower frequencies. These are combined with WEPH code phenomenology... AD B062349L. Scope/Abstract: This report describes a simple model for predicting the loads on box-like target structures subject to air blast. A... model and applying it to targets which can be approximated by a series of rectangular parallelepipeds. In this report the physical phenomena of high

  14. Sensitivity Study for Long Term Reliability

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2008-01-01

    This paper illustrates using Markov models to establish system and maintenance requirements for small electronic controllers where the goal is a high probability of continuous service for a long period of time. The system and maintenance items considered are quality of components, various degrees of simple redundancy, redundancy with reconfiguration, diagnostic levels, periodic maintenance, and preventive maintenance. Markov models permit a quantitative investigation with comparison and contrast. An element of special interest is the use of conditional probability to study the combination of imperfect diagnostics and periodic maintenance.
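
    A toy version of the kind of Markov model described above, assuming a duplex controller with imperfect reconfiguration coverage; the per-hour failure probability and coverage value are illustrative, not taken from the paper:

        import numpy as np

        # Three-state Markov model: 0 = both units good, 1 = one failed and
        # reconfigured, 2 = system failure (absorbing).
        lam = 1e-5      # unit failure probability per hour (illustrative)
        c = 0.99        # probability a failure is detected and reconfigured

        P = np.array([
            [1 - 2 * lam, 2 * lam * c, 2 * lam * (1 - c)],
            [0.0,         1 - lam,     lam              ],
            [0.0,         0.0,         1.0              ],   # absorbing failure state
        ])

        p = np.array([1.0, 0.0, 0.0])                        # start with both units good
        hours = 10 * 365 * 24                                # ten years of service
        p = p @ np.linalg.matrix_power(P, hours)
        print(f"P(no system failure in 10 yr) = {1 - p[2]:.6f}")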

  15. Influence of smooth temperature variation on hotspot ignition

    NASA Astrophysics Data System (ADS)

    Reinbacher, Fynn; Regele, Jonathan David

    2018-01-01

    Autoignition in thermally stratified reactive mixtures originates in localised hotspots. The ignition behaviour is often characterised using linear temperature gradients and, more recently, constant temperature plateaus combined with temperature gradients. Acoustic timescale characterisation of plateau regions has been successfully used to characterise the type of mechanical disturbance that will be created from a plateau core ignition. This work combines linear temperature gradients with superelliptic cores in order to more accurately account for a local temperature maximum of finite size and the smooth temperature variation contained inside realistic hotspot centres. A one-step Arrhenius reaction is used to model a H2-air reactive mixture. Using the superelliptic approach, a range of behaviours for temperature distributions is investigated by varying the temperature profile between the gradient-only and the plateau-and-gradient bounding cases. Each superelliptic case is compared to a respective plateau-and-gradient case where simple acoustic timescale characterisation may be performed. It is shown that hotspots with excitation-to-acoustic timescale ratios sufficiently greater than unity exhibit behaviour very similar to a simple plateau-gradient model. However, for larger hotspots with timescale ratios sufficiently less than unity, the reaction behaviour is highly dependent on the smooth temperature profile contained within the core region.

  16. Perception of multi-stable dot lattices in the visual periphery: an effect of internal positional noise.

    PubMed

    Põder, Endel

    2011-02-16

    Dot lattices are very simple multi-stable images where the dots can be perceived as being grouped in different ways. The probabilities of grouping along different orientations as dependent on inter-dot distances along these orientations can be predicted by a simple quantitative model. L. Bleumers, P. De Graef, K. Verfaillie, and J. Wagemans (2008) found that for peripheral presentation, this model should be combined with random guesses on a proportion of trials. The present study shows that the probability of random responses decreases with decreasing ambiguity of lattices and is different for bi-stable and tri-stable lattices. With central presentation, similar effects can be produced by adding positional noise to the dots. The results suggest that different levels of internal positional noise might explain the differences between peripheral and central proximity grouping.
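
    The quantitative model referred to above can be sketched as a proximity-based attraction function mixed with a uniform guess component; the attraction law and all parameter values here are illustrative stand-ins for the published model:

        import numpy as np

        def grouping_probs(distances, attraction=4.0, guess_rate=0.2):
            """Proximity grouping probabilities mixed with random guesses,
            a sketch of the peripheral-viewing account discussed above."""
            d = np.asarray(distances, dtype=float)
            # attraction decays with relative inter-dot distance along each orientation
            w = np.exp(-attraction * (d / d.min() - 1.0))
            p_model = w / w.sum()
            k = len(d)
            # on a fraction of trials the response is a uniform random guess
            return (1.0 - guess_rate) * p_model + guess_rate / k

        # Tri-stable lattice: inter-dot distances along three candidate orientations
        print(grouping_probs([1.0, 1.15, 1.3]))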

  17. Prototype design based on NX subdivision modeling application

    NASA Astrophysics Data System (ADS)

    Zhan, Xianghui; Li, Xiaoda

    2018-04-01

    Prototype design is an important part of product design: a quick and easy way to draw a three-dimensional product prototype. Combined with actual production feedback, the prototype can be modified several times, resulting in an efficient and reasonable design before the formal design. Subdivision modeling is a common method of modeling product prototypes. Through subdivision modeling, a three-dimensional product prototype can be obtained in a short time with simple operations. This paper discusses the operation method of subdivision modeling for geometry. Taking a vacuum cleaner as an example, the NX subdivision modeling functions are applied. Finally, the development of subdivision modeling is forecast.

  18. SCEC UCVM - Unified California Velocity Model

    NASA Astrophysics Data System (ADS)

    Small, P.; Maechling, P. J.; Jordan, T. H.; Ely, G. P.; Taborda, R.

    2011-12-01

    The SCEC Unified California Velocity Model (UCVM) is a software framework for a state-wide California velocity model. UCVM provides researchers with two new capabilities: (1) the ability to query Vp, Vs, and density from any standard regional California velocity model through a uniform interface, and (2) the ability to combine multiple velocity models into a single state-wide model. These features are crucial in order to support large-scale ground motion simulations and to facilitate improvements in the underlying velocity models. UCVM provides integrated support for the following standard velocity models: SCEC CVM-H, SCEC CVM-S and the CVM-SI variant, USGS Bay Area (cencalvm), Lin-Thurber Statewide, and other smaller regional models. New models may be easily incorporated as they become available. Two query interfaces are provided: a Linux command line program, and a C application programming interface (API). The C API query interface is simple, fully independent of any specific model, and MPI-friendly. Input coordinates are geographic longitude/latitude and the vertical coordinate may be either depth or elevation. Output parameters include Vp, Vs, and density along with the identity of the model from which these material properties were obtained. In addition to access to the standard models, UCVM also includes a high resolution statewide digital elevation model, Vs30 map, and an optional near-surface geo-technical layer (GTL) based on Ely's Vs30-derived GTL. The elevation and Vs30 information is bundled along with the returned Vp,Vs velocities and density, so that all relevant information is retrieved with a single query. When the GTL is enabled, it is blended with the underlying crustal velocity models along a configurable transition depth range with an interpolation function. Multiple, possibly overlapping, regional velocity models may be combined together into a single state-wide model. This is accomplished by tiling the regional models on top of one another in three dimensions in a researcher-specified order. No reconciliation is performed within overlapping model regions, although a post-processing tool is provided to perform a simple numerical smoothing. Lastly, a 3D region from a combined model may be extracted and exported into a CVM-Etree. This etree may then be queried by UCVM much like a standard velocity model but with less overhead and generally better performance due to the efficiency of the etree data structure.
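
    The tiling behaviour is the easiest part to illustrate in code. The sketch below is a hypothetical Python re-statement of the idea, not UCVM's actual C API (all names and types here are invented): regional models are tried in researcher-specified priority order, and the first one covering the query point supplies the material properties:

        from dataclasses import dataclass
        from typing import Callable, Optional, Sequence

        @dataclass
        class Material:
            vp: float    # P-wave velocity (m/s)
            vs: float    # S-wave velocity (m/s)
            rho: float   # density (kg/m^3)
            source: str  # which regional model answered

        # A "model" is any callable returning Material, or None when the query
        # point lies outside its coverage -- a stand-in for real model plugins.
        Model = Callable[[float, float, float], Optional[Material]]

        def query_tiled(models: Sequence[Model], lon: float, lat: float, depth: float):
            """Tile regional models in priority order: the first model covering
            the point wins (no reconciliation in overlapping regions)."""
            for model in models:
                m = model(lon, lat, depth)
                if m is not None:
                    return m
            return None  # point not covered by any model

        # Example: a box-shaped regional model tiled above a statewide background
        def regional(lon, lat, depth):
            if -121 <= lon <= -114 and 32 <= lat <= 37:
                return Material(5500.0, 3200.0, 2700.0, "regional")
            return None

        def background(lon, lat, depth):
            return Material(6000.0, 3500.0, 2800.0, "statewide background")

        print(query_tiled([regional, background], -118.2, 34.05, 1000.0))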

  19. Experimental designs and risk assessment in combination toxicology: panel discussion.

    PubMed

    Henschler, D; Bolt, H M; Jonker, D; Pieters, M N; Groten, J P

    1996-01-01

    Advancing our knowledge on the toxicology of combined exposures to chemicals and implementation of this knowledge in guidelines for health risk assessment of such combined exposures are necessities dictated by the simple fact that humans are continuously exposed to a multitude of chemicals. A prerequisite for successful research and fruitful discussions on the toxicology of combined exposures (mixtures of chemicals) is the use of defined terminology implemented by an authoritative international body such as, for example, the International Union of Pure and Applied Chemistry (IUPAC) Toxicology Committee. The extreme complexity of mixture toxicology calls for new research methodologies to study interactive effects, taking into account limited resources. Of these methodologies, statistical designs and mathematical modelling of toxicokinetics and toxicodynamics seem to be most promising. Emphasis should be placed on low-dose modelling and experimental validation. The scientifically sound so-called bottom-up approach should be supplemented with more pragmatic approaches, focusing on selection of the most hazardous chemicals in a mixture and careful consideration of the mode of action and possible interactive effects of these chemicals. Pragmatic approaches may be of particular importance to study and evaluate complex mixtures; after identification of the 'top ten' (most risky) chemicals in the mixture they can be examined and evaluated as a defined (simple) chemical mixture. In setting exposure limits for individual chemicals, the use of an additional safety factor to compensate for potential increased risk due to simultaneous exposure to other chemicals, has no clear scientific justification. The use of such an additional factor is a political rather than a scientific choice.

  20. Adaptive nonlinear control for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Black, William S.

    We present the background and motivation for ground vehicle autonomy, with a focus on applications in space exploration. Using a simple design example of an autonomous ground vehicle, we derive the equations of motion. After providing the mathematical background for nonlinear systems and control, we present two common methods for exactly linearizing nonlinear systems: feedback linearization and backstepping. We use these in combination with three adaptive control methods: model reference adaptive control, adaptive sliding mode control, and extremum-seeking model reference adaptive control. We show the performance of each combination through several simulation results. We then consider disturbances in the system, and design nonlinear disturbance observers for both single-input-single-output and multi-input-multi-output systems. Finally, we show the performance of these observers with simulation results.

  1. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.
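
    One way to realize the probabilistic-inversion step is importance reweighting: draw parameters from a broad initial prior, push them through the simple model, and reweight so the distribution of the model output matches the expert-assessed distribution. Everything below (the toy ice-sheet response, the expert target, the kernel-density reweighting) is an assumption made for illustration, not the paper's method in detail:

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        rng = np.random.default_rng(11)
        n = 5000

        # Broad initial prior over a (toy) ice-sheet sensitivity parameter
        theta = rng.uniform(0.0, 5.0, n)

        # Stand-in "simple AIS model": sea-level contribution as a function of theta
        slr = 0.1 * theta ** 1.5 + rng.normal(0.0, 0.02, n)

        # Expert assessment encoded as a target density on the model OUTPUT
        # (hypothetical: experts judge the contribution to be ~ N(0.4, 0.15))
        w = norm(0.4, 0.15).pdf(slr) / gaussian_kde(slr)(slr)
        w /= w.sum()

        # Resampling by weight approximates the expert-consistent parameter prior
        idx = rng.choice(n, size=2000, p=w)
        lo, hi = np.quantile(theta[idx], [0.05, 0.95])
        print(f"expert-consistent theta: mean {theta[idx].mean():.2f}, "
              f"90% CI ({lo:.2f}, {hi:.2f})")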

  2. Some anticipated contributions to core fluid dynamics from the GRM

    NASA Technical Reports Server (NTRS)

    Vanvorhies, C.

    1985-01-01

    It is broadly maintained that the secular variation (SV) of the large scale geomagnetic field contains information on the fluid dynamics of Earth's electrically conducting outer core. The electromagnetic theory appropriate to a simple Earth model has recently been combined with reduced geomagnetic data in order to extract some of this information and ascertain its significance. The simple Earth model consists of a rigid, electrically insulating mantle surrounding a spherical, inviscid, and perfectly conducting liquid outer core. This model was tested against seismology by using truncated spherical harmonic models of the observed geomagnetic field to locate Earth's core-mantle boundary, CMB. Further electromagnetic theory has been developed and applied to the problem of estimating the horizontal fluid motion just beneath CMB. Of particular geophysical interest are the hypotheses that these motions: (1) include appreciable surface divergence indicative of vertical motion at depth, and (2) are steady for time intervals of a decade or more. In addition to the extended testing of the basic Earth model, the proposed GRM provides a unique opportunity to test these dynamical hypotheses.

  3. A hybrid approach to determining cornea mechanical properties in vivo using a combination of nano-indentation and inverse finite element analysis.

    PubMed

    Abyaneh, M H; Wildman, R D; Ashcroft, I A; Ruiz, P D

    2013-11-01

    An analysis of the material properties of porcine corneas has been performed. A simple stress relaxation test was performed to determine the viscoelastic properties and a rheological model was built based on the Generalized Maxwell (GM) approach. A validation experiment using nano-indentation showed that an isotropic GM model was insufficient for describing the corneal material behaviour when exposed to a complex stress state. A new technique was proposed for determining the properties, using a combination of nano-indentation experiment, an isotropic and orthotropic GM model and inverse finite element method. The good agreement using this method suggests that this is a promising technique for measuring material properties in vivo and further work should focus on the reliability of the approach in practice. © 2013 Elsevier Ltd. All rights reserved.
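
    A stress-relaxation fit for a Generalized Maxwell solid is commonly written as a Prony series; the sketch below fits a two-term series to synthetic relaxation data standing in for the porcine-cornea tests, with all moduli and time constants illustrative:

        import numpy as np
        from scipy.optimize import curve_fit

        # Two-term Prony series for a Generalized Maxwell solid:
        # G(t) = G_inf + G1*exp(-t/tau1) + G2*exp(-t/tau2)
        def prony(t, g_inf, g1, tau1, g2, tau2):
            return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

        # Synthetic stress-relaxation data (noise added to a known curve)
        t = np.linspace(0.0, 100.0, 200)
        true = prony(t, 20.0, 15.0, 2.0, 10.0, 30.0)
        data = true + np.random.default_rng(1).normal(0.0, 0.2, t.size)

        p0 = [15.0, 10.0, 1.0, 5.0, 20.0]     # initial guesses (illustrative)
        popt, _ = curve_fit(prony, t, data, p0=p0, maxfev=20000)
        print(dict(zip(["G_inf", "G1", "tau1", "G2", "tau2"], popt.round(2))))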

  4. Finite mixture modeling approach for developing crash modification factors in highway safety analysis.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Wu, Lingtao

    2016-10-28

    This study aimed to investigate the relative performance of two models (negative binomial (NB) model and two-component finite mixture of negative binomial models (FMNB-2)) in terms of developing crash modification factors (CMFs). Crash data on rural multilane divided highways in California and Texas were modeled with the two models, and crash modification functions (CMFunctions) were derived. The resultant CMFunction estimated from the FMNB-2 model showed several advantages over that from the NB model. First, the safety effect of a covariate was better reflected by the CMFunction developed using the FMNB-2 model, since the model takes into account the differential responsiveness of crash frequency to the covariate. Second, the CMFunction derived from the FMNB-2 model is able to capture nonlinear relationships between covariate and safety. Finally, following the same concept as those for NB models, the combined CMFs of multiple treatments were estimated using the FMNB-2 model. The results indicated that the combined CMFs are not simply the product of the single-treatment CMFs (i.e., their safety effects are not independent under FMNB-2 models). Adjustment Factors (AFs) were then developed. It is revealed that the current Highway Safety Manual method could over- or under-estimate the combined CMFs under particular combinations of covariates. Safety analysts are encouraged to consider using the FMNB-2 models for developing CMFs and AFs. Copyright © 2016 Elsevier Ltd. All rights reserved.
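
    The non-multiplicativity result is easy to demonstrate with a two-component mixture of log-linear means; the weights and coefficients below are invented for illustration, not the paper's estimates:

        import numpy as np

        # Two-component finite mixture of NB means: mu(x) = sum_k w_k * exp(b0_k + b_k . x)
        w = np.array([0.6, 0.4])
        b0 = np.array([0.2, 1.0])
        b_lane = np.array([-0.10, -0.30])   # component-specific effect of treatment 1
        b_shld = np.array([-0.05, -0.20])   # component-specific effect of treatment 2

        def mu(x_lane, x_shld):
            return np.sum(w * np.exp(b0 + b_lane * x_lane + b_shld * x_shld))

        def cmf(x_lane, x_shld):
            return mu(x_lane, x_shld) / mu(0.0, 0.0)

        cmf_both = cmf(1.0, 1.0)
        cmf_product = cmf(1.0, 0.0) * cmf(0.0, 1.0)
        # The two values differ: under the mixture, effects are not independent
        print(f"combined CMF {cmf_both:.3f}  vs  product of single CMFs {cmf_product:.3f}")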

  5. Finite element modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Andersen, C. M.

    1983-01-01

    Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology with high potential for application to tire modeling problems are reviewed, and the analysis and modeling needs for tires are identified. Topics include reduction methods for large-scale nonlinear analysis, with particular emphasis on the treatment of combined loads and displacement-dependent and nonconservative loadings; the development of simple and efficient mixed finite element models for shell analysis, the identification of equivalent mixed and purely displacement models, and the determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems, based on a total Lagrangian description of the deformation.

  6. A Developmental Learning Approach of Mobile Manipulator via Playing

    PubMed Central

    Wu, Ruiqi; Zhou, Changle; Chao, Fei; Zhu, Zuyuan; Lin, Chih-Min; Yang, Longzhi

    2017-01-01

    Inspired by infant development theories, a robotic developmental model combined with game elements is proposed in this paper. This model does not require the definition of specific developmental goals for the robot; rather, the developmental goals are implied in the goals of a series of game tasks. The games are characterized as a sequence of game modes based on the complexity of the game tasks, from simple to complex, and the task complexity is determined by the application of developmental constraints. Given a current mode, the robot switches to play in a more complicated game mode when it cannot find any new salient stimuli in the current mode. By doing so, the robot gradually achieves its developmental goals by playing different modes of games. In the experiment, the game was instantiated on a mobile robot with the playing task of picking up toys, and the game was designed with a simple game mode and a complex game mode. A developmental algorithm, "Lift-Constraint, Act and Saturate," was employed to drive the mobile robot from the simple mode to the complex one. The experimental results show that the mobile manipulator is able to successfully learn the mobile grasping ability after playing the simple and complex games, which is promising in developing robotic abilities to solve complex tasks using games. PMID:29046632

  7. The NIST Simple Guide for Evaluating and Expressing Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2016-11-01

    NIST has recently published guidance on the evaluation and expression of the uncertainty of NIST measurement results [1, 2], supplementing but not replacing B. N. Taylor and C. E. Kuyatt's (1994) Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results (NIST Technical Note 1297) [3], which tracks closely the Guide to the expression of uncertainty in measurement (GUM) [4], originally published in 1995 by the Joint Committee for Guides in Metrology of the International Bureau of Weights and Measures (BIPM). The scope of this Simple Guide, however, is much broader than the scope of both NIST Technical Note 1297 and the GUM, because it attempts to address several of the uncertainty evaluation challenges that have arisen at NIST since the 1990s, for example to include molecular biology, greenhouse gases and climate science measurements, and forensic science. The Simple Guide also expands the scope of those two other guidance documents by recognizing observation equations (that is, statistical models) as bona fide measurement models. These models are indispensable to reduce data from interlaboratory studies, to combine measurement results for the same measurand obtained by different methods, and to characterize the uncertainty of calibration and analysis functions used in the measurement of force, temperature, or composition of gas mixtures. This presentation reviews the salient aspects of the Simple Guide, illustrates the use of models and methods for uncertainty evaluation not contemplated in the GUM, and also demonstrates the NIST Uncertainty Machine [5] and the NIST Consensus Builder, which are web-based applications accessible worldwide that facilitate evaluations of measurement uncertainty and the characterization of consensus values in interlaboratory studies.

  8. Adapting SimpleTreat for simulating behaviour of chemical substances during industrial sewage treatment.

    PubMed

    Struijs, J; van de Meent, D; Schowanek, D; Buchholz, H; Patoux, R; Wolf, T; Austin, T; Tolls, J; van Leeuwen, K; Galay-Burgos, M

    2016-09-01

    The multimedia model SimpleTreat evaluates the distribution and elimination of chemicals by municipal sewage treatment plants (STP). It is applied in the framework of REACH (Registration, Evaluation, Authorization and Restriction of Chemicals). This article describes an adaptation of this model for application to industrial sewage treatment plants (I-STP). The intended use of this re-parametrized model is focused on risk assessment during manufacture and subsequent uses of chemicals, also in the framework of REACH. The results of an inquiry into the operational characteristics of industrial sewage treatment installations were used to re-parameterize the model. It appeared that one property of industrial sewage, the Biological Oxygen Demand (BOD), in combination with one parameter of the activated sludge process, the hydraulic retention time (HRT), is sufficient to characterize treatment of industrial wastewater by means of the activated sludge process. The adapted model was compared to the original municipal version, SimpleTreat 4.0, by means of a sensitivity analysis. The consistency of the model output was assessed by computing the emission to water from an I-STP for a set of fictitious chemicals. This set of chemicals exhibits a range of physico-chemical and biodegradability properties occurring in industrial wastewater. Predicted removal rates of a chemical from raw sewage are higher in industrial than in municipal STPs. The latter typically have shorter hydraulic retention times, with diminished opportunity for elimination of the chemical through volatilization and biodegradation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. [Clinical study of cervical spondylotic radiculopathy treated with massage therapy combined with magnetic sticking therapy at the auricular points and the cost comparison].

    PubMed

    Wang, Saina; Sheng, Feng; Pan, Yunhua; Xu, Feng; Wang, Zhichao; Cheng, Lei

    2015-08-01

    To compare the clinical efficacy on cervical spondylotic radiculopathy between the combined therapy of massage and magnetic sticking at the auricular points and the simple massage therapy, and to conduct a health economics evaluation. Seventy-two patients with cervical spondylotic radiculopathy were randomized into a combined therapy group and a simple massage group, 36 cases in each one. Finally, 35 cases and 34 cases met the inclusion criteria in the corresponding groups, respectively. In the combined therapy group, the massage therapy and the magnetic sticking therapy at auricular points were combined in the treatment. Massage therapy was mainly applied to Fengchi (GB 20), Jianjing (GB 21), Jianwaishu (SI 14), Jianyu (LI 15) and Quchi (LI 11). The main auricular points for magnetic sticking pressure were Jingzhui (AH13), Gan (On12), Shen (CO10), Shenmen (TF4) and Pizhixia (AT4). In the simple massage group, the simple massage therapy was given; the massage parts and methods were the same as those in the combined therapy group. The treatment was given once every two days, three times a week, for 4 weeks in total. The cervical spondylosis effect scale and the simplified McGill pain questionnaire were adopted to observe the improvements in the clinical symptoms, clinical examination, daily life movement, superficial muscular pain in the neck and the health economics cost in the patients of the two groups. The effect was evaluated in the two groups. The effective rate and the clinical curative rate in the combined therapy group were better than those in the simple massage group [100.0% (35/35) vs 85.3% (29/34), 42.9% (15/35) vs 17.6% (6/34), both P<0.05]. The scores of the spontaneous symptoms, clinical examination, daily life movement and superficial muscular pain in the neck were improved apparently after treatment as compared with those before treatment in the patients of the two groups (all P<0.001). The improvements in the spontaneous symptoms, total clinical examination scores and superficial muscular pain in the neck were more significant in the combined therapy group as compared with those in the simple massage group (P<0.05, P<0.01, P<0.001). The cost per unit effect in the combined therapy group was lower than that in the simple massage group (P<0.05). Compared with the simple massage therapy, the massage therapy combined with magnetic sticking therapy at auricular points achieves better effects at lower health economics cost.

  10. Simulating Freshwater Availability under Future Climate Conditions

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Zeng, N.; Motesharrei, S.; Gustafson, K. C.; Rivas, J.; Miralles-Wilhelm, F.; Kalnay, E.

    2013-12-01

    Freshwater availability is a key factor for regional development. Precipitation, evaporation, river inflow and outflow are the major terms in the estimate of regional water supply. In this study, we aim to obtain a realistic estimate for these variables from 1901 to 2100. First, we calculated the ensemble mean precipitation using the 2011-2100 RCP4.5 output (re-sampled to half-degree spatial resolution) from 16 General Circulation Models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). The projections are then combined with the half-degree 1901-2010 Climate Research Unit (CRU) TS3.2 dataset after bias correction. We then used the combined data to drive our UMD Earth System Model (ESM), in order to generate evaporation and runoff. We also developed a River-Routing Scheme based on the idea of Taikan Oki, as part of the ESM. It is capable of calculating river inflow and outflow for any region, driven by the gridded runoff output. River direction and slope information from the Global Dominant River Tracing (DRT) dataset are included in our scheme. The effects of reservoirs/dams are parameterized based on a few simple factors such as soil moisture, population density and geographic region. Simulated river flow is validated with river gauge measurements for the world's major rivers. We have applied our river flow calculation to two data-rich watersheds in the United States: the Phoenix AMA watershed and the Potomac River Basin. The results are used in our SImple WAter model (SIWA) to explore water management options.

  11. Multi-Detection Events, Probability Density Functions, and Reduced Location Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Schrom, Brian T.

    2016-03-01

    Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach rather than a simple dilution field-of-regard approach to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.
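
    A schematic of the Bayesian combination on a toy one-dimensional grid: each station contributes a likelihood factor, and non-detections enter through the complement. The distance-based detection model is a hypothetical stand-in for a real ATM backtrack field:

        import numpy as np

        # Toy 1-D grid of candidate source locations with a flat prior
        x = np.linspace(0.0, 1000.0, 201)            # km
        prior = np.full(x.size, 1.0 / x.size)

        def p_detect(src, station, strength=50.0):
            """Hypothetical detection probability that falls off with
            source-station distance (stand-in for an ATM backtrack field)."""
            return np.clip(strength / (np.abs(src - station) + strength), 0.0, 1.0)

        # Station positions and whether each saw the radionuclide
        stations = [(200.0, True), (450.0, True), (900.0, False)]

        posterior = prior.copy()
        for pos, detected in stations:
            like = p_detect(x, pos)
            posterior *= like if detected else (1.0 - like)   # non-detections count too
        posterior /= posterior.sum()

        print(f"MAP source location ~ {x[np.argmax(posterior)]:.0f} km")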

  12. Principal components colour display of ERTS imagery

    NASA Technical Reports Server (NTRS)

    Taylor, M. M.

    1974-01-01

    In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.
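
    The band-combination idea can be sketched with principal components supplying the three independent linear combinations; the final opponent-to-RGB conversion below is an illustrative placeholder, not the paper's exact visual-system mapping:

        import numpy as np

        # bands: (n_pixels, 4) array of the four ERTS MSS bands (random stand-in)
        rng = np.random.default_rng(3)
        bands = rng.random((10000, 4))

        # Principal components as "informationally independent" linear combinations
        X = bands - bands.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        pcs = X @ Vt[:3].T                       # first three components

        def rescale(v):
            return (v - v.min()) / (v.max() - v.min())

        # Map components onto brightness, red-green and blue-yellow dimensions
        brightness, red_green, blue_yellow = (rescale(pcs[:, i]) for i in range(3))

        # Simple opponent-to-RGB conversion (illustrative only)
        rgb = np.stack([
            brightness + 0.5 * (red_green - 0.5),
            brightness - 0.5 * (red_green - 0.5),
            brightness + 0.5 * (blue_yellow - 0.5),
        ], axis=1).clip(0.0, 1.0)
        print(rgb.shape)                          # one RGB triple per pixel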

  13. Mathematical Modeling Of Life-Support Systems

    NASA Technical Reports Server (NTRS)

    Seshan, Panchalam K.; Ganapathi, Balasubramanian; Jan, Darrell L.; Ferrall, Joseph F.; Rohatgi, Naresh K.

    1994-01-01

    Generic hierarchical model of life-support system developed to facilitate comparisons of options in design of system. Model represents combinations of interdependent subsystems supporting microbes, plants, fish, and land animals (including humans). Generic model enables rapid configuration of variety of specific life support component models for tradeoff studies culminating in single system design. Enables rapid evaluation of effects of substituting alternate technologies and even entire groups of technologies and subsystems. Used to synthesize and analyze life-support systems ranging from relatively simple, nonregenerative units like aquariums to complex closed-loop systems aboard submarines or spacecraft. Model, called Generic Modular Flow Schematic (GMFS), coded in such chemical-process-simulation languages as Aspen Plus and expressed as three-dimensional spreadsheet.

  14. A proposed mathematical model for sleep patterning.

    PubMed

    Lawder, R E

    1984-01-01

    The simple model of a ramp, intersecting a triangular waveform, yields results which conform with seven generalized observations of sleep patterning; including the progressive lengthening of 'rapid-eye-movement' (REM) sleep periods within near-constant REM/nonREM cycle periods. Predicted values of REM sleep time, and of Stage 3/4 nonREM sleep time, can be computed using the observed values of other parameters. The distributions of the actual REM and Stage 3/4 times relative to the predicted values were closer to normal than the distributions relative to simple 'best line' fits. It was found that sleep onset tends to occur at a particular moment in the individual subject's '90-min cycle' (the use of a solar time-scale masks this effect), which could account for a subject with a naturally short sleep/wake cycle synchronizing to a 24-h rhythm. A combined 'sleep control system' model offers quantitative simulation of the sleep patterning of endogenous depressives and, with a different perturbation, qualitative simulation of the symptoms of narcolepsy.

  15. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    PubMed

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminant) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).

  16. Improved methods for the measurement and modeling of PV module and system performance for all operating conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, D.L.

    1995-11-01

    The objective of this work was to develop an improved performance model for modules and systems, valid for all operating conditions, for use in module specifications, system and BOS component design, and system rating or monitoring. The approach taken was to identify and quantify the influence of the dominant factors of solar irradiance, cell temperature, angle-of-incidence, and solar spectrum; use outdoor test procedures to separate the effects of electrical, thermal, and optical performance; use fundamental cell characteristics to improve the analysis; and combine the factors in a simple model using common variables.
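
    In the spirit of the approach described (separately quantified irradiance, temperature and angle-of-incidence effects recombined in one simple model), a toy multiplicative sketch; the coefficients and the ASHRAE-style incidence modifier are illustrative choices, not the published model:

        import numpy as np

        def module_power(g_poa, t_cell, aoi, p_ref=300.0, gamma=-0.004):
            """Toy module performance model: reference power scaled by
            irradiance, temperature and angle-of-incidence factors.
            All coefficients are illustrative."""
            f_irr = g_poa / 1000.0                          # irradiance ratio to STC
            f_temp = 1.0 + gamma * (t_cell - 25.0)          # temperature correction
            aoi_rad = np.radians(aoi)
            # ASHRAE-style incidence-angle modifier with b0 = 0.05
            f_aoi = 1.0 - 0.05 * (1.0 / np.cos(aoi_rad) - 1.0)
            return p_ref * f_irr * f_temp * np.clip(f_aoi, 0.0, 1.0)

        # Example: 850 W/m^2 plane-of-array, 48 C cell, 35 degree incidence
        print(f"{module_power(850.0, 48.0, 35.0):.0f} W")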

  17. Stressed Oxidation Life Prediction for C/SiC Composites

    NASA Technical Reports Server (NTRS)

    Levine, Stanley R.

    2004-01-01

    The residual strength and life of C/SiC is dominated by carbon interface and fiber oxidation if seal coat and matrix cracks are open to allow oxygen ingress. Crack opening is determined by the combination of thermal, mechanical and thermal expansion mismatch induced stresses. When cracks are open, life can be predicted by simple oxidation based models with reaction controlled kinetics at low temperature, and by gas phase diffusion controlled kinetics at high temperatures. Key life governing variables in these models include temperature, stress, initial strength, oxygen partial pressure, and total pressure. These models are described in this paper.

  18. Prediction of power requirements for a longwall armored face conveyor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broadfoot, A.R.; Betz, R.E.

    1995-12-31

    Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length these design procedures are proving to be inadequate. The result has either been costly loss of production due to AFC stalling or component failure, or larger than necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.

  19. Horizontal Running Mattress Suture Modified with Intermittent Simple Loops

    PubMed Central

    Chacon, Anna H; Shiman, Michael I; Strozier, Narissa; Zaiac, Martin N

    2013-01-01

    Using the combination of a horizontal running mattress suture with intermittent loops achieves both good eversion with the horizontal running mattress plus the ease of removal of the simple loops. This combination technique also avoids the characteristic railroad track marks that result from prolonged non-absorbable suture retention. The unique feature of our technique is the incorporation of one simple running suture after every two runs of the horizontal running mattress suture. To demonstrate its utility, we used the suturing technique on several patients and analyzed the cosmetic outcome with post-operative photographs in comparison to other suturing techniques. In summary, the combination of running horizontal mattress suture with simple intermittent loops demonstrates functional and cosmetic benefits that can be readily taught, comprehended, and employed, leading to desirable aesthetic results and wound edge eversion. PMID:23723610

  20. Analysis of the correlative factors for velopharyngeal closure of patients with cleft palate after primary repair.

    PubMed

    Chen, Qi; Li, Yang; Shi, Bing; Yin, Heng; Zheng, Guang-Ning; Zheng, Qian

    2013-12-01

    The objective of this study was to analyze the correlative factors for velopharyngeal closure in patients with cleft palate after primary repair. Ninety-five nonsyndromic patients with cleft palate were enrolled. Two surgical techniques were applied: simple palatoplasty and combined palatoplasty with pharyngoplasty. All patients were assessed 6 months after the operation. The postoperative velopharyngeal closure (VPC) rate was compared by χ² test and the correlative factors were analyzed with a logistic regression model. The postoperative VPC rate of young patients was higher than that of old patients, the rate in the group with incomplete cleft palate was higher than in the group with complete cleft palate, and the rate with combined palatoplasty with pharyngoplasty was higher than with simple palatoplasty. Operative age, cleft type, and surgical technique were the contributing factors for the postoperative VPC rate. Operative age, cleft type, and surgical technique were significant factors influencing the postoperative VPC rate of patients with cleft palate. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Critical Analysis of Different Methods to Retrieve Atmosphere Humidity Profiles from GNSS Radio Occultation Observations

    NASA Astrophysics Data System (ADS)

    Vespe, Francesco; Benedetto, Catia

    2013-04-01

    The huge amount of GPS Radio Occultation (RO) observations currently available thanks to space missions like COSMIC, CHAMP, GRACE, TERRASAR-X, etc., has greatly encouraged the search for new algorithms to extract humidity, temperature, and pressure profiles of the atmosphere ever more precisely. For humidity profiles, two different approaches have been widely tested and applied in recent years: the "Simple" and the 1DVAR methods. The Simple methods essentially determine dry refractivity profiles from temperature analysis profiles and the hydrostatic equation. The dry refractivity is then subtracted from the RO refractivity to obtain the wet component, from which humidity is finally derived. The 1DVAR approach combines RO observations with profiles given by background models, with both terms weighted by the inverse of their covariance matrices. The advantage of the Simple methods is that they are not affected by bias due to the background models. We have proposed in the past the BPV approach to retrieve humidity, which can be classified among the Simple methods. The BPV approach works with dry atmospheric CIRA-Q models which depend on latitude, day of year, and height. The dry CIRA-Q refractivity profile is selected by estimating the involved parameters in a nonlinear least-squares fashion, fitting the RO bending angles observed through the stratosphere. The BPV approach, like all the other Simple methods, has the drawback of occasionally producing unphysical negative humidity values. We therefore propose to apply a modulated weighting of the fit residuals to minimize this effect. After a proper tuning of the approach, we plan to present the results of the validation.
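
    A minimal sketch of the parameter-estimation step described above, assuming a toy two-parameter exponential stand-in for the dry CIRA-Q refractivity profile and synthetic data; for brevity the fit is run on refractivity rather than on the observed bending angles:

      import numpy as np
      from scipy.optimize import least_squares

      def dry_refractivity(z_km, n0, scale_km):
          # Toy exponential profile standing in for a CIRA-Q dry climatology.
          return n0 * np.exp(-z_km / scale_km)

      rng = np.random.default_rng(0)
      z = np.linspace(10.0, 40.0, 60)                   # stratospheric heights, km
      observed = dry_refractivity(z, 300.0, 7.5) + rng.normal(0.0, 0.5, z.size)

      def residuals(params):
          return dry_refractivity(z, *params) - observed

      fit = least_squares(residuals, x0=[250.0, 6.0])   # nonlinear least-squares fit
      wet_component = observed - dry_refractivity(z, *fit.x)  # RO minus dry part
      print(fit.x, wet_component[:5])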

  2. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds marginally when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI), and Average Lending Rate (ALR) of Malaysia.
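
    A minimal sketch of the idea, assuming two illustrative weighting schemes (equal weights and inverse-MSE weights) whose weight vectors are themselves averaged before the model forecasts are combined; these schemes are stand-ins, not the specific weight constructions analysed in the paper:

      import numpy as np

      forecasts = np.array([[2.1, 1.9, 2.4],      # rows: periods, columns: models
                            [2.0, 2.2, 2.3],
                            [1.8, 2.0, 2.1]])
      actual = np.array([2.0, 2.1, 1.9])

      mse = ((forecasts - actual[:, None]) ** 2).mean(axis=0)
      w_equal = np.full(3, 1.0 / 3.0)             # simple model averaging weights
      w_inv = (1.0 / mse) / (1.0 / mse).sum()     # inverse-MSE weights

      w_fwa = 0.5 * (w_equal + w_inv)             # forecast weight averaging
      print(forecasts @ w_fwa)                    # combined forecast per period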

  3. Methods for quantifying simple gravity sensing in Drosophila melanogaster.

    PubMed

    Inagaki, Hidehiko K; Kamikouchi, Azusa; Ito, Kei

    2010-01-01

    Perception of gravity is essential for animals: most animals possess specific sense organs to detect the direction of the gravitational force. Little is known, however, about the molecular and neural mechanisms underlying their behavioral responses to gravity. Drosophila melanogaster, having a rather simple nervous system and a large variety of molecular genetic tools available, serves as an ideal model for analyzing the mechanisms underlying gravity sensing. Here we describe an assay to measure simple gravity responses of flies behaviorally. This method can be applied for screening genetic mutants of gravity perception. Furthermore, in combination with recent genetic techniques to silence or activate selective sets of neurons, it serves as a powerful tool to systematically identify neural substrates required for the proper behavioral responses to gravity. The assay requires 10 min to perform, and two experiments can be performed simultaneously, enabling 12 experiments per hour.

  4. Analysis of hardening behavior of sheet metals by a new simple shear test method taking into account the Bauschinger effect

    NASA Astrophysics Data System (ADS)

    Bang, Sungsik; Rickhey, Felix; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo

    2013-12-01

    In this study we establish a process to predict hardening behavior considering the Bauschinger effect for Zircaloy-4 sheets. When a metal is compressed after being stretched in forming, its yield strength decreases; for this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggest a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials. We develop a method to convert the shear load-displacement curve to the effective stress-strain curve with FEA, and simulate the simple shear forward/reverse test using a combined isotropic/kinematic hardening model. We also investigate the change of the load-displacement curve under varying hardening coefficients, and determine the coefficients so that they follow the hardening behavior of Zircaloy-4 observed in experiments.

  5. Simple Peer-to-Peer SIP Privacy

    NASA Astrophysics Data System (ADS)

    Koskela, Joakim; Tarkoma, Sasu

    In this paper, we introduce a model for enhancing privacy in peer-to-peer communication systems. The model is based on data obfuscation, preventing intermediate nodes from tracking calls, while still utilizing the shared resources of the peer network. This increases security when moving between untrusted, limited and ad-hoc networks, when the user is forced to rely on peer-to-peer schemes. The model is evaluated using a Host Identity Protocol-based prototype on mobile devices, and is found to provide good privacy, especially when combined with a source address hiding scheme. The contribution of this paper is to present the model and results obtained from its use, including usability considerations.

  6. Robot geometry calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Tso, Kam; Roston, Gerald

    1988-01-01

    Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial-link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained with a PUMA 560 and simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into the robot language RCCL, and used in the NASA Telerobot Testbed.

  7. Maintenance of algal endosymbionts in Paramecium bursaria: a simple model based on population dynamics.

    PubMed

    Iwai, Sosuke; Fujiwara, Kenji; Tamura, Takuro

    2016-09-01

    Algal endosymbiosis is widely distributed in eukaryotes, including many protists and metazoans, and plays important roles in aquatic ecosystems by combining phagotrophy and phototrophy. To maintain a stable symbiotic relationship, the endosymbiont population size in the host must be properly regulated and maintained at a constant level; however, the mechanisms underlying the maintenance of algal endosymbionts are still largely unknown. Here we investigate the population dynamics of the unicellular ciliate Paramecium bursaria and its Chlorella-like algal endosymbiont under various experimental conditions in a simple culture system. Our results suggest that endosymbiont population size in P. bursaria was not regulated by active processes such as cell division coupling between the two organisms, or partitioning of the endosymbionts at host cell division. Regardless, endosymbiont population size was eventually adjusted to a nearly constant level once cells were grown with light and nutrients. To explain this apparent regulation of population size, we propose a simple mechanism based on the different growth properties (specifically the nutrient requirements) of the two organisms, and from this develop a mathematical model to describe the population dynamics of host and endosymbiont. The proposed mechanism and model may provide a basis for understanding the maintenance of algal endosymbionts. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.

  8. Kinematic strategies for mitigating gust perturbations in insects.

    PubMed

    Vance, J T; Faruque, I; Humbert, J S

    2013-03-01

    Insects are attractive models for the development of micro-aerial vehicles (MAVs) due to their relatively simple sensing, actuation and control architectures as compared to vertebrates, and because of their robust flight ability in dynamic and heterogeneous environments characterized by turbulence and gusts of wind. How do insects respond to gust perturbations? We investigated this question by perturbing freely-flying honey bees and stalk-eyed flies with low-pressure bursts of compressed air to simulate a wind gust. Body and wing kinematics were analyzed from flight sequences recorded using three high-speed digital video cameras. Bees quickly responded to body rotations caused by gusts through bilateral asymmetry in stroke amplitude, whereas stalk-eyed flies used a combination of asymmetric stroke amplitude and wing rotation angle. Both insects coordinated asymmetric and symmetric kinematics in response to gusts, which provides model strategies for simple yet robust flight characteristics for MAVs.

  9. Design of Friction Stir Spot Welding Tools by Using a Novel Thermal-Mechanical Approach

    PubMed Central

    Su, Zheng-Ming; Qiu, Qi-Hong; Lin, Pai-Chen

    2016-01-01

    A simple thermal-mechanical model for friction stir spot welding (FSSW) was developed to obtain similar weld performance for different weld tools. Use of the thermal-mechanical model and a combined approach enabled the design of weld tools for various sizes but similar qualities. Three weld tools for weld radii of 4, 5, and 6 mm were made to join 6061-T6 aluminum sheets. Performance evaluations of the three weld tools compared fracture behavior, microstructure, micro-hardness distribution, and welding temperature of welds in lap-shear specimens. For welds made by the three weld tools under identical processing conditions, failure loads were approximately proportional to tool size. Failure modes, microstructures, and micro-hardness distributions were similar. Welding temperatures correlated with frictional heat generation rate densities. Because the three weld tools sufficiently met all design objectives, the proposed approach is considered a simple and feasible guideline for preliminary tool design. PMID:28773800

  10. Design of Friction Stir Spot Welding Tools by Using a Novel Thermal-Mechanical Approach.

    PubMed

    Su, Zheng-Ming; Qiu, Qi-Hong; Lin, Pai-Chen

    2016-08-09

    A simple thermal-mechanical model for friction stir spot welding (FSSW) was developed to obtain similar weld performance for different weld tools. Use of the thermal-mechanical model and a combined approach enabled the design of weld tools for various sizes but similar qualities. Three weld tools for weld radii of 4, 5, and 6 mm were made to join 6061-T6 aluminum sheets. Performance evaluations of the three weld tools compared fracture behavior, microstructure, micro-hardness distribution, and welding temperature of welds in lap-shear specimens. For welds made by the three weld tools under identical processing conditions, failure loads were approximately proportional to tool size. Failure modes, microstructures, and micro-hardness distributions were similar. Welding temperatures correlated with frictional heat generation rate densities. Because the three weld tools sufficiently met all design objectives, the proposed approach is considered a simple and feasible guideline for preliminary tool design.

  11. Modeling helical proteins using residual dipolar couplings, sparse long-range distance constraints and a simple residue-based force field

    PubMed Central

    Eggimann, Becky L.; Vostrikov, Vitaly V.; Veglia, Gianluigi; Siepmann, J. Ilja

    2013-01-01

    We present a fast and simple protocol to obtain moderate-resolution backbone structures of helical proteins. This approach utilizes a combination of sparse backbone NMR data (residual dipolar couplings and paramagnetic relaxation enhancements) or EPR data with a residue-based force field and Monte Carlo/simulated annealing protocol to explore the folding energy landscape of helical proteins. By using only backbone NMR data, which are relatively easy to collect and analyze, and strategically placed spin relaxation probes, we show that it is possible to obtain protein structures with correct helical topology and backbone RMS deviations well below 4 Å. This approach offers promising alternatives for the structural determination of proteins in which nuclear Overhauser effect data are difficult or impossible to assign and produces initial models that will speed up the high-resolution structure determination by NMR spectroscopy. PMID:24639619

  12. Using entropy measures to characterize human locomotion.

    PubMed

    Leverick, Graham; Szturm, Tony; Wu, Christine Q

    2014-12-01

    Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
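
    A minimal sketch of one standard member of this family of measures (sample entropy), applied to a toy 1-D gait-like signal; the paper's quantized variants (QDE and QASE) are not reproduced here:

      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          # SampEn: negative log of the conditional probability that template
          # vectors matching for m points (within r * std) also match for m + 1.
          x = np.asarray(x, dtype=float)
          tol = r * x.std()
          def matches(mm):
              t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              d = np.abs(t[:, None] - t[None, :]).max(axis=2)  # Chebyshev distance
              return (d <= tol).sum() - len(t)                 # exclude self-matches
          return -np.log(matches(m + 1) / matches(m))

      rng = np.random.default_rng(0)
      signal = np.sin(np.linspace(0.0, 20.0, 400)) + 0.1 * rng.standard_normal(400)
      print(sample_entropy(signal))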

  13. Reactive extraction of lactic acid with trioctylamine/methylene chloride/n-hexane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, D.H.; Hong, W.H.

    The trioctylamine (TOA)/methylene chloride (MC)/n-hexane system was used as the extraction agent for the extraction of lactic acid. Curves of equilibrium and hydration were obtained at various temperatures and concentrations of TOA. A modified mass action model was proposed to interpret the equilibrium and the hydration curves. The reaction mechanism and the corresponding parameters which best represent the equilibrium data were estimated, and the concentration of water in the organic phase was predicted by inserting the parameters into the simple mathematical equation of the modified model. The concentration of MC and the change of temperature were important factors for the extraction and the stripping process. The stripping was performed by a simple distillation which was a combination of temperature-swing regeneration and diluent-swing regeneration. The type of inactive diluent has no influence on the stripping. The stripping efficiencies were about 70%.
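
    For orientation, the generic mass-action form on which such extraction models are built, in our notation (the paper's modified model adds hydration terms that are not reproduced here):

      \[
        m\,\mathrm{HLac} + n\,\overline{\mathrm{TOA}}
          \;\rightleftharpoons\;
          \overline{(\mathrm{HLac})_m(\mathrm{TOA})_n},
        \qquad
        K_{mn} =
          \frac{[\overline{(\mathrm{HLac})_m(\mathrm{TOA})_n}]}
               {[\mathrm{HLac}]^{m}\,[\overline{\mathrm{TOA}}]^{n}},
      \]

      where overbars denote organic-phase species and HLac is lactic acid;
      fitting (m, n, K_mn) to the equilibrium data identifies the dominant
      complexation stoichiometry.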

  14. Human ecology and behaviour in malaria control in tropical Africa

    PubMed Central

    MacCormack, C. P.

    1984-01-01

    Since about 250 BC, human modification of African environments has created increasingly favourable breeding conditions for Anopheles gambiae. Subsequent adaptations to the increased malaria risk are briefly described and reference is made to Macdonald's mathematical model for the disease. Since values for the variables in that model are high in tropical Africa, there is little possibility that simple, inexpensive, self-help primary health care initiatives can control malaria in the region. However, in combination with more substantial public health initiatives, simple primary health care activities might be done by communities to (1) prevent mosquitos from feeding on people, (2) prevent or reduce mosquito breeding, (3) destroy adult mosquitos, and (4) eliminate malaria parasites from human hosts. Lay methods of protection and self-care are examined and some topics for further research are indicated. Culturally appropriate health education methods are also suggested. PMID:6335685

  15. A new technique for thermodynamic engine modeling

    NASA Astrophysics Data System (ADS)

    Matthews, R. D.; Peters, J. E.; Beckel, S. A.; Shizhi, M.

    1983-12-01

    Reference is made to the equations given by Matthews (1983) for piston engine performance, which show that this performance depends on four fundamental engine efficiencies (combustion, thermodynamic cycle or indicated thermal, volumetric, and mechanical) as well as on engine operation and design parameters. This set of equations is seen to suggest a different technique for engine modeling; that is, that each efficiency should be modeled individually and the efficiency submodels then combined to obtain an overall engine model. A simple method for predicting the combustion efficiency of piston engines is therefore required. Various methods are proposed here and compared with experimental results. These combustion efficiency models are then combined with various models for the volumetric, mechanical, and indicated thermal efficiencies to yield three different engine models of varying degrees of sophistication. Comparisons are then made of the predictions of the resulting engine models with experimental data. It is found that combustion efficiency is almost independent of load, speed, and compression ratio and is not strongly dependent on fuel type, at least so long as the hydrogen-to-carbon ratio is reasonably close to that for isooctane.
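
    One common way to write such a decomposition for a four-stroke engine, in our notation (not necessarily Matthews' exact equations):

      \[
        P_{\text{brake}}
          = \eta_{\text{comb}}\,\eta_{\text{th}}\,\eta_{\text{mech}}\,
            \dot m_{\text{fuel}}\,Q_{\text{LHV}},
        \qquad
        \dot m_{\text{fuel}}
          = \frac{\eta_{\text{vol}}\,\rho_{\text{air}}\,V_d\,N}{2\,(A/F)},
      \]

      where $V_d$ is the displacement, $N$ the engine speed, $(A/F)$ the
      air-fuel ratio, and $Q_{\text{LHV}}$ the fuel's lower heating value;
      modeling each efficiency separately and multiplying the submodels
      together yields an overall engine model of the kind described above.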

  16. Description and validation of the Simple, Efficient, Dynamic, Global, Ecological Simulator (SEDGES v.1.0)

    NASA Astrophysics Data System (ADS)

    Paiewonsky, Pablo; Elison Timm, Oliver

    2018-03-01

    In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides important ecological-only variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.

  17. Throughput and latency programmable optical transceiver by using DSP and FEC control.

    PubMed

    Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki

    2017-05-15

    We propose and experimentally demonstrate a proof-of-concept programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) to satisfy throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom among the parameters often make it difficult to find the optimum combination, due to a combinatorial explosion of the number of candidate settings. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation format, symbol rate, and power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine these constraints with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate the end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.
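
    A minimal sketch of the constraint-based pruning idea, assuming a toy BER predictor and illustrative constraint values; neither the analytical BER model nor the constraint set of the paper is reproduced here:

      import itertools, math

      formats = {"QPSK": 2, "16QAM": 4}                # bits per symbol
      symbol_rates = [16e9, 32e9]                      # baud
      fec_overheads = [0.07, 0.20]                     # FEC rate overhead

      def predicted_ber(bits, baud, osnr_db=15.0):
          # Toy stand-in for the precise analytical BER model (illustrative only).
          return 0.5 * math.erfc(math.sqrt(10 ** (osnr_db / 10) / (bits * baud / 16e9)))

      feasible = []
      for fmt, baud, oh in itertools.product(formats, symbol_rates, fec_overheads):
          bits = formats[fmt]
          throughput = bits * baud * (1 - oh)
          latency_penalty = oh                         # higher overhead, longer decoding
          if throughput >= 50e9 and latency_penalty <= 0.1:  # constraints prune combos
              feasible.append((fmt, baud, oh, predicted_ber(bits, baud)))
      print(feasible)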

  18. Geosimulation of urban growth and demographic decline in the Ruhr: a case study for 2025 using the artificial intelligence of cells and agents

    NASA Astrophysics Data System (ADS)

    Rienow, Andreas; Stenger, Dirk

    2014-07-01

    The Ruhr is an "old acquaintance" in the discourse of urban decline in old industrialized cities. The agglomeration has to struggle with archetypical problems of former monofunctional manufacturing cities. Surprisingly, the image of a shrinking city has to be refuted if the focus is shifted from socioeconomic wealth to morphological extension. Thus, it is the objective of this study to meet the challenge of modeling urban sprawl and demographic decline by combining two artificial-intelligence solutions. The popular urban cellular automaton SLEUTH simulates urban growth using four simple but effective growth rules; to improve its performance, SLEUTH has been modified, among other things, by combining it with a robust probability map based on support vector machines. Additionally, a complex multi-agent system is developed to simulate residential mobility in a shrinking city agglomeration. It focuses on the dynamics of interregional housing markets, implying the development of potential dwelling areas, and comprises the simulation of population patterns, housing prices, and housing demand in shrinking city agglomerations. Both models are calibrated and validated regarding their localization and quantification performance. Subsequently, the urban landscape configuration and composition of the Ruhr in 2025 are simulated. A simple spatial join is used to combine the results, which serve as valuable inputs for future regional planning in the context of multifarious demographic change and preceding urban growth.

  19. Artificial intelligence exploration of unstable protocells leads to predictable properties and discovery of collective behavior.

    PubMed

    Points, Laurie J; Taylor, James Ward; Grizou, Jonathan; Donkers, Kevin; Cronin, Leroy

    2018-01-30

    Protocell models are used to investigate how cells might have first assembled on Earth. Some, like oil-in-water droplets, can be seemingly simple models while still exhibiting complex and unpredictable behaviors. How such simple oil-in-water systems can come together to yield complex and life-like behaviors remains a key question. Herein, we illustrate how the combination of automated experimentation and image processing, physicochemical analysis, and machine learning allows significant advances to be made in understanding the driving forces behind oil-in-water droplet behaviors. Utilizing >7,000 experiments collected using an autonomous robotic platform, we illustrate how smart automation can not only help with exploration, optimization, and discovery of new behaviors, but can also be core to developing fundamental understanding of such systems. Using this process, we were able to relate droplet formulation to behavior via predicted physical properties, and to identify and predict more occurrences of a rare collective droplet behavior, droplet swarming. Proton NMR spectroscopic and qualitative pH methods enabled us to better understand oil dissolution, chemical change, phase transitions, and droplet and aqueous phase flows, illustrating the utility of combining smart automation with traditional analytical chemistry techniques. We further extended our study to the simultaneous exploration of both the oil and aqueous phases using a robotic platform. Overall, this work shows that the combination of chemistry, robotics, and artificial intelligence enables discovery, prediction, and mechanistic understanding in ways that no one approach could achieve alone.

  20. Sensory Gain Outperforms Efficient Readout Mechanisms in Predicting Attention-Related Improvements in Behavior

    PubMed Central

    Ester, Edward F.; Deering, Sean

    2014-01-01

    Spatial attention has been postulated to facilitate perceptual processing via several different mechanisms. For instance, attention can amplify neural responses in sensory areas (sensory gain), modulate neural variability (noise modulation), or alter the manner in which sensory signals are selectively read out by postsensory decision mechanisms (efficient readout). Even in the context of simple behavioral tasks, it is unclear how well each of these mechanisms can account for the relationship between attention-modulated changes in behavior and neural activity, because few studies have systematically mapped changes between stimulus intensity, attentional focus, neural activity, and behavioral performance. Here, we used a combination of psychophysics, event-related potentials (ERPs), and quantitative modeling to explicitly link attention-related changes in perceptual sensitivity with changes in the ERP amplitudes recorded from human observers. Spatial attention led to a multiplicative increase in the amplitude of an early sensory ERP component (the P1, peaking ∼80–130 ms poststimulus) and in the amplitude of the late positive deflection component (peaking ∼230–330 ms poststimulus). A simple model based on signal detection theory demonstrates that these multiplicative gain changes were sufficient to account for attention-related improvements in perceptual sensitivity, without a need to invoke noise modulation. Moreover, combining the observed multiplicative gain with a postsensory readout mechanism resulted in a significantly poorer description of the observed behavioral data. We conclude that, at least in the context of relatively simple visual discrimination tasks, spatial attention modulates perceptual sensitivity primarily by modulating the gain of neural responses during early sensory processing. PMID:25274817
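
    For concreteness, the signal detection logic can be stated as follows, in our own notation (the paper's exact parameterisation is not reproduced): if attention multiplies the sensory response R to both the pedestal and the pedestal-plus-target stimuli by a gain g while the additive noise sigma is unchanged, then

      \[
        d'_{\text{att}}
          \;=\; \frac{g\,R(c_{p}+\Delta c) \;-\; g\,R(c_{p})}{\sigma}
          \;=\; g\, d'_{\text{unatt}},
      \]

      so a purely multiplicative gain predicts a proportional improvement in
      perceptual sensitivity, with no change in the noise term required.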

  1. A biochemically semi-detailed model of auxin-mediated vein formation in plant leaves.

    PubMed

    Roussel, Marc R; Slingerland, Martin J

    2012-09-01

    We present here a model intended to capture the biochemistry of vein formation in plant leaves. The model consists of three modules. Two of these modules, those describing auxin signaling and transport in plant cells, are biochemically detailed. We couple these modules to a simple model for PIN (auxin efflux carrier) protein localization based on an extracellular auxin sensor. We study the single-cell responses of this combined model in order to verify proper functioning of the modeled biochemical network. We then assemble a multicellular model from the single-cell building blocks. We find that the model can, under some conditions, generate files of polarized cells, but not true veins. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. A simple thermodynamic model useful for calculating gas solubilities in water/brine/hydrocarbon mixtures from 0 to 250 C and 1 to 150 bars

    NASA Astrophysics Data System (ADS)

    Perez, R. J.; Shevalier, M.; Hutcheon, I.

    2004-05-01

    Gas solubility is of considerable interest, not only for the theoretical understanding of vapor-liquid equilibria, but also due to extensive applications in combined geochemical, engineering, and environmental problems, such as greenhouse gas sequestration. Reliable models for gas solubility calculations in salt waters and hydrocarbons are also valuable when evaluating fluid inclusions saturated with gas components. We have modeled the solubility of methane, ethane, hydrogen, carbon dioxide, hydrogen sulfide, and five other gases in a water-brine-hydrocarbon system by solving a nonlinear system of equations built from modified Henry's Law constants (HLCs) and gas fugacities, assuming binary mixtures. The HLCs are a function of pressure, temperature, brine salinity, and hydrocarbon density. Experimental data on vapor pressures and mutual solubilities of binary mixtures provide the basis for the calibration of the proposed model. It is demonstrated that, by using the Setchenow equation, only a relatively simple modification of the pure-water model is required to assess the solubility of gases in brine solutions. Henry's Law constants for gases in hydrocarbons are derived using regular solution theory and Ostwald coefficients available from the literature. We present a set of two-parameter polynomial expressions which allow simple computation and formulation of the model. Our calculations show that solubility predictions using modified HLCs are acceptable within 0 to 250 C, 1 to 150 bars, salinities up to 5 molar, and gas concentrations up to 4 molar. Our model is currently being used in the IEA Weyburn CO2 monitoring and storage project.
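
    A minimal sketch of the Setchenow-type salinity correction mentioned above, with illustrative coefficient values rather than the paper's calibrated parameters:

      import math

      def henry_constant_pure_water(T_K, a=-10.0, b=2500.0):
          # Toy two-parameter temperature fit for an HLC (illustrative only).
          return math.exp(a + b / T_K)               # mol / (L * bar)

      def solubility_in_brine(T_K, p_gas_bar, molality_salt, k_setchenow=0.12):
          # Setchenow salting-out: log10(S_water / S_brine) = k_s * m_salt,
          # i.e. a simple correction applied to the pure-water solubility.
          s_water = henry_constant_pure_water(T_K) * p_gas_bar
          return s_water * 10 ** (-k_setchenow * molality_salt)

      print(solubility_in_brine(T_K=323.15, p_gas_bar=100.0, molality_salt=2.0))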

  3. Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.

    PubMed

    Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan

    2015-01-01

    Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways and in the complexity of their songs, which are essential for mate attraction. Recent approaches aimed at describing the behavioral preference functions of females in both taxa with a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models'—each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of the Gabor filters, several types of preference functions observed in different cricket species could be modeled. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offers a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
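
    A minimal sketch of the three-step model structure described above, using a toy pulse-train "song" and illustrative filter, nonlinearity, and weight parameters (not those found by the genetic algorithm):

      import numpy as np

      def gabor(t, sigma=5e-3, f=80.0):
          # Gabor-shaped temporal filter (illustrative parameters).
          return np.exp(-t ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * t)

      dt = 1e-4
      t_filt = np.arange(-0.02, 0.02, dt)
      song = (np.random.default_rng(0).random(5000) < 0.01).astype(float)

      # Step 1: LN feature -- linear filtering followed by a static nonlinearity.
      filtered = np.convolve(song, gabor(t_filt), mode="same")
      feature_trace = 1.0 / (1.0 + np.exp(-10.0 * (filtered - 0.2)))  # sigmoid
      # Step 2: temporal integration over the whole song.
      feature = feature_trace.mean()
      # Step 3: linear combination of the (here, single) integrated feature.
      preference_score = 1.5 * feature + 0.1
      print(preference_score)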

  4. Convective Detrainment and Control of the Tropical Water Vapor Distribution

    NASA Astrophysics Data System (ADS)

    Kursinski, E. R.; Rind, D.

    2006-12-01

    Sherwood et al. (2006) developed a simple power-law model describing the relative humidity distribution in the tropical free troposphere, where the power-law exponent is the ratio of a drying time scale (tied to subsidence rates) and a moistening time, the average time between convective moistening events whose temporal distribution is described by a Poisson distribution. Sherwood et al. showed that the relative humidity distribution observed by GPS occultations and MLS is indeed close to a power law, approximately consistent with the simple model's prediction. Here we recast this simple model in terms of vertical length scales rather than time scales, in a manner that we think more correctly matches the model predictions to the observations. The subsidence is now expressed as the vertical distance the air mass has descended since it last detrained from a convective plume. The moisture source term becomes a profile of convective detrainment flux versus altitude. The vertical profile of the convective detrainment flux is deduced from the observed distribution of specific humidity at each altitude, combined with sinking rates estimated from radiative cooling. The resulting free-tropospheric detrainment profile increases with altitude above 3 km, somewhat like an exponential profile, which explains the approximate power-law behavior observed by Sherwood et al. The observations also reveal a seasonal variation in the detrainment profile, reflecting changes in convective behavior that some expect based on observed seasonal changes in the vertical structure of convective regions. The simple model results will be compared with the moisture control mechanisms in a GCM with many additional mechanisms, the GISS climate model, as described in Rind (2006). References: Rind, D., 2006: Water-vapor feedback. In Frontiers of Climate Modeling, J. T. Kiehl and V. Ramanathan (eds), Cambridge University Press [ISBN-13 978-0-521-79132-8], 251-284. Sherwood, S., E. R. Kursinski and W. Read: A distribution law for free-tropospheric relative humidity, J. Clim., in press, 2006.
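
    A worked statement of the power-law model referred to above, in our own notation (following our reading of Sherwood et al. 2006):

      \[
        p(\mathrm{RH}) \;\propto\; \mathrm{RH}^{\,r-1},
        \qquad
        r = \frac{\tau_{\mathrm{dry}}}{\tau_{\mathrm{moist}}},
      \]

      where $\tau_{\mathrm{dry}}$ is the drying (subsidence) time scale and
      $\tau_{\mathrm{moist}}$ is the mean interval between Poisson-distributed
      convective moistening events. Frequent remoistening (large $r$) piles
      probability near saturation, while infrequent remoistening (small $r$)
      skews the distribution toward dry air.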

  5. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
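
    A minimal sketch of the constrained combination step, assuming the M sub-model predictions over a recent window are already available; the closed-form weights below follow from minimising the window MSE subject to the weights summing to one (a standard Lagrange-multiplier result, given here as our reading of the approach):

      import numpy as np

      # P: window of sub-model predictions, shape (window, M); y: targets.
      rng = np.random.default_rng(1)
      y = np.sin(np.linspace(0.0, 3.0, 50))
      P = np.column_stack([y + 0.1 * rng.standard_normal(50),
                           y + 0.3 * rng.standard_normal(50),
                           0.8 * y + 0.2 * rng.standard_normal(50)])

      E = P - y[:, None]                 # per-model errors over the window
      S = E.T @ E / len(y)               # error cross-product matrix
      ones = np.ones(S.shape[1])
      w = np.linalg.solve(S, ones)       # Lagrangian solution: w ~ S^{-1} 1
      w /= ones @ w                      # enforce the sum-to-one constraint
      combined = P @ w                   # multi-model output
      print(w, np.mean((combined - y) ** 2))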

  6. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and using digital methods to repair damaged film is a mainstream trend nowadays. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our model, which combines multiple frames to establish a simple correction model, includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it removes fading flicker efficiently.
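
    A generic alternating-least-squares completion sketch for a matrix with missing entries, of the kind described above; this is a standard low-rank factorization, not the paper's specific model or parameterisation:

      import numpy as np

      def als_complete(M, mask, rank=2, lam=1e-2, iters=50, seed=0):
          # Alternating least squares low-rank completion of a matrix M whose
          # observed entries are flagged by mask (True = observed).
          rng = np.random.default_rng(seed)
          m, n = M.shape
          U, V = rng.standard_normal((m, rank)), rng.standard_normal((n, rank))
          I = lam * np.eye(rank)
          for _ in range(iters):
              for i in range(m):
                  o = mask[i]
                  U[i] = np.linalg.solve(V[o].T @ V[o] + I, V[o].T @ M[i, o])
              for j in range(n):
                  o = mask[:, j]
                  V[j] = np.linalg.solve(U[o].T @ U[o] + I, U[o].T @ M[o, j])
          return U @ V.T

      # Toy case: rows = color correspondences, columns = frames, 60% observed.
      rng = np.random.default_rng(1)
      truth = rng.random((8, 1)) @ rng.random((1, 6)) * 255
      mask = rng.random(truth.shape) < 0.6
      M = np.where(mask, truth, 0.0)
      print(np.abs(als_complete(M, mask, rank=1) - truth).max())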

  7. Low-Velocity Impact Response of Sandwich Beams with Functionally Graded Core

    NASA Technical Reports Server (NTRS)

    Apetre, N. A.; Sankar, B. V.; Ambur, D. R.

    2006-01-01

    The problem of low-speed impact of a one-dimensional sandwich panel by a rigid cylindrical projectile is considered. The core of the sandwich panel is functionally graded such that the density, and hence its stiffness, varies through the thickness. The problem is a combination of a static contact problem and the dynamic response of the sandwich panel, obtained via a simple nonlinear spring-mass model (quasi-static approximation). The variation of the core Young's modulus is represented by a polynomial in the thickness coordinate, but the Poisson's ratio is kept constant. The two-dimensional elasticity equations for the plane sandwich structure are solved using a combination of Fourier series and the Galerkin method. The contact problem is solved using the assumed contact stress distribution method. For the impact problem we used a simple dynamic model based on the quasi-static behavior of the panel: the sandwich beam was modeled as a combination of two springs, a linear spring to account for the global deflection and a nonlinear spring to represent the local indentation effects. Results indicate that the contact stiffness of the beam with graded core increases, causing the contact stresses and other stress components in the vicinity of the contact to increase. However, the values of maximum strains corresponding to the maximum impact load are reduced considerably due to grading of the core properties. For a better comparison, the thickness of the functionally graded cores was chosen such that the flexural stiffness was equal to that of a beam with homogeneous core. The results indicate that functionally graded cores can be used effectively to mitigate or completely prevent impact damage in sandwich composites.
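
    A minimal sketch of the two-spring impact idealisation described above, with illustrative stiffness values: a linear spring for global bending in series with a nonlinear (Hertzian-type) spring for local indentation, integrated as a single-mass ODE:

      import numpy as np
      from scipy.integrate import solve_ivp

      m = 0.05          # projectile mass, kg (illustrative)
      k_global = 4.0e5  # linear global bending stiffness, N/m (illustrative)
      k_local = 5.0e8   # nonlinear local contact stiffness, N/m^1.5 (illustrative)

      def contact_force(x):
          # Springs in series: total displacement x = w_global + indentation a,
          # with F = k_global * w_global = k_local * a**1.5. Solve for F by bisection.
          lo, hi = 0.0, k_global * x
          for _ in range(60):
              F = 0.5 * (lo + hi)
              if F / k_global + (F / k_local) ** (2.0 / 3.0) < x:
                  lo = F
              else:
                  hi = F
          return F

      def rhs(t, s):
          x, v = s
          F = contact_force(x) if x > 0 else 0.0   # no force after separation
          return [v, -F / m]

      sol = solve_ivp(rhs, [0.0, 2e-3], [0.0, 3.0], max_step=1e-6)  # 3 m/s impact
      print("peak contact force [N]:",
            max(contact_force(x) for x in sol.y[0] if x > 0))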

  8. Comparison of geometrical shock dynamics and kinematic models for shock-wave propagation

    NASA Astrophysics Data System (ADS)

    Ridoux, J.; Lardjane, N.; Monasse, L.; Coulouvrat, F.

    2018-03-01

    Geometrical shock dynamics (GSD) is a simplified model for nonlinear shock-wave propagation, based on the decomposition of the shock front into elementary ray tubes. Assuming small changes in the ray tube area, and neglecting the effect of the post-shock flow, a simple relation linking the local curvature and velocity of the front, known as the A-M rule, is obtained. More recently, a new simplified model, referred to as the kinematic model, was proposed. This model is obtained by combining the three-dimensional Euler equations and the Rankine-Hugoniot relations at the front, which leads to an equation for the normal variation of the shock Mach number at the wave front. In the same way as GSD, the kinematic model is closed by neglecting the post-shock flow effects. Although each model's approach is different, we prove their structural equivalence: the kinematic model can be rewritten under the form of GSD with a specific A-M relation. Both models are then compared through a wide variety of examples including experimental data or Eulerian simulation results when available. Attention is drawn to the simple cases of compression ramps and diffraction over convex corners. The analysis is completed by the more complex cases of the diffraction over a cylinder, a sphere, a mound, and a trough.
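
    For reference, one common statement of the A-M rule (the Chester-Chisnell-Whitham area-Mach number relation for a perfect gas), quoted from the standard GSD literature rather than derived in this paper:

      \[
        \frac{1}{A}\frac{dA}{dM} = -\frac{M\,\lambda(M)}{M^{2}-1},
        \qquad
        \lambda(M) = \left(1 + \frac{2}{\gamma+1}\,\frac{1-\mu^{2}}{\mu}\right)
                     \left(1 + 2\mu + \frac{1}{M^{2}}\right),
        \qquad
        \mu^{2} = \frac{(\gamma-1)M^{2} + 2}{2\gamma M^{2} - (\gamma-1)},
      \]

      so each ray tube's area change determines the local shock Mach number
      (and vice versa) without reference to the post-shock flow.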

  9. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1991-01-01

    Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies have been merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
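
    A minimal sketch of the two-component control law described above, assuming a toy one-joint model; the dynamics and gains are illustrative, and a fixed-gain PID stands in for the adaptive PID of the patent:

      import numpy as np

      # Model-based feedforward: the known part of the manipulator dynamics.
      def feedforward(q_des_ddot, q_des_dot, inertia=1.2, friction=0.3):
          return inertia * q_des_ddot + friction * q_des_dot

      # Performance-based feedback: a simple PID on the tracking error.
      class PID:
          def __init__(self, kp=20.0, ki=5.0, kd=2.0, dt=1e-3):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_err = 0.0, 0.0
          def update(self, err):
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      pid = PID()
      # Total command = nominal feedforward component + feedback component.
      u = feedforward(q_des_ddot=0.5, q_des_dot=1.0) + pid.update(err=0.02)
      print(u)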

  10. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aab, A.; Abreu, P.; Andringa, S.

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 · 10^18 eV, i.e. the region of the all-particle spectrum above the so-called 'ankle' feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  11. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies were merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.

  12. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; Dorosti, Q.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.;
    Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wirtz, M.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 · 10^18 eV, i.e. the region of the all-particle spectrum above the so-called "ankle" feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  13. Universal adsorption at the vapor-liquid interface near the consolute point

    NASA Technical Reports Server (NTRS)

    Schmidt, James W.

    1990-01-01

    The ellipticity of the vapor-liquid interface above mixtures of methylcyclohexane (C7H14) and perfluoromethylcyclohexane (C7F14) has been measured near the consolute point T(c) = 318.6 K. The data are consistent with a model of the interface that combines a short-ranged density-versus-height profile in the vapor phase with a much longer-ranged composition-versus-height profile in the liquid. The value of the free parameter produced by fitting the model to the data is consistent with results from two other simple mixtures and a mixture of a polymer and solvent. This experiment combines precision ellipsometry of the vapor-liquid interface with in situ measurements of refractive indices of the liquid phases, and it precisely locates the consolute point.

  14. Research of MPPT for photovoltaic generation based on two-dimensional cloud model

    NASA Astrophysics Data System (ADS)

    Liu, Shuping; Fan, Wei

    2013-03-01

    The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with an expected value Ex, entropy En, and hyper-entropy He, and integrates the fuzziness and randomness of a linguistic concept in a unified way; the model is a new method for transformation between qualitative and quantitative knowledge. This paper introduces an MPPT (maximum power point tracking) controller based on a two-dimensional cloud model, derived from an analysis of auto-optimizing MPPT control of photovoltaic power systems combined with cloud model theory. Simulation results show that the cloud controller is simple, intuitive, and has strong robustness and good control performance.
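
    A minimal sketch of the forward normal cloud generator that underlies such controllers, assuming the standard (Ex, En, He) formulation; the MPPT rule base itself is not reproduced:

      import numpy as np

      def normal_cloud(Ex, En, He, n=1000, seed=0):
          # Forward normal cloud generator: each drop x_i carries a membership
          # degree mu_i to the qualitative concept described by (Ex, En, He).
          rng = np.random.default_rng(seed)
          En_i = rng.normal(En, He, n)        # per-drop entropy, randomized by He
          x = rng.normal(Ex, np.abs(En_i))    # cloud drops
          mu = np.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))
          return x, mu

      drops, membership = normal_cloud(Ex=0.0, En=1.0, He=0.1)
      print(drops[:5], membership[:5])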

  15. Using NASTRAN to model missile inertia loads

    NASA Technical Reports Server (NTRS)

    Marvin, R.; Porter, C.

    1985-01-01

    An important use of NASTRAN is in the area of structural loads analysis on weapon systems carried aboard aircraft. The program is used to predict bending moments and shears in missile bodies when subjected to aircraft-induced accelerations. The missile, launcher and aircraft wing are idealized using rod and beam type elements for solution economy. Using the inertia relief capability of NASTRAN, the model is subjected to various acceleration combinations. It is found to be difficult to model the launcher sway braces and hooks, which transmit compression-only and tension-only forces, respectively. A simple, iterative process was developed to overcome this modeling difficulty; a proposed code modification would help model compression-only or tension-only contact problems.
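
    One standard way such tension-only/compression-only supports are handled iteratively (a generic sketch in our notation, not the specific process developed in the paper): attach grounded springs for the hooks and sway braces, solve, deactivate any spring loaded with the wrong sign, and repeat until the active set is consistent.

      import numpy as np

      def solve_unilateral(K, f, springs, k_s=1e6, max_iter=20):
          # springs: list of (dof, kind). A "tension" spring (hook) carries load
          # only when stretched (u >= 0); a "compression" spring (sway brace)
          # only when compressed (u <= 0), with u the displacement at that dof.
          active = set(range(len(springs)))
          for _ in range(max_iter):
              Kt = K.astype(float).copy()
              for i in active:
                  Kt[springs[i][0], springs[i][0]] += k_s
              u = np.linalg.solve(Kt, f)
              wrong = {i for i in active
                       if (springs[i][1] == "tension" and u[springs[i][0]] < 0.0)
                       or (springs[i][1] == "compression" and u[springs[i][0]] > 0.0)}
              if not wrong:
                  return u, active
              active -= wrong
          return u, active

      K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # toy 2-dof stiffness matrix
      f = np.array([0.0, -1.0])                  # toy load vector
      print(solve_unilateral(K, f, [(0, "tension"), (1, "compression")]))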

  16. Unsteady hovering wake parameters identified from dynamic model tests, part 1

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1977-01-01

    The development of a 4-bladed model rotor that can be excited with a simple eccentric mechanism in progressing and regressing modes, with either harmonic or transient inputs, is reported. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch-stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch-stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.

  17. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    PubMed

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
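
    A minimal sketch of the entropy and differentiation measures used above, computed for illustrative allele-frequency data (not the paper's derivations or starling data):

      import numpy as np

      def shannon_H(p):
          p = np.asarray(p, dtype=float)
          p = p[p > 0]
          return -(p * np.log(p)).sum()

      # Allele frequencies in two equally weighted subpopulations (illustrative).
      sub = np.array([[0.7, 0.2, 0.1],
                      [0.2, 0.5, 0.3]])
      weights = np.array([0.5, 0.5])

      H_within = sum(w * shannon_H(p) for w, p in zip(weights, sub))
      H_total = shannon_H(weights @ sub)   # entropy of the pooled allele pool
      I = H_total - H_within               # mutual information ("Shannon differentiation")
      print(f"within={H_within:.3f} total={H_total:.3f} differentiation={I:.3f}")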

  18. Authentication of Whey Protein Powders by Portable Mid-Infrared Spectrometers Combined with Pattern Recognition Analysis.

    PubMed

    Wang, Ting; Tan, Siow Ying; Mutilangi, William; Aykas, Didem P; Rodriguez-Saona, Luis E

    2015-10-01

    The objective of this study was to develop a simple and rapid method to differentiate whey protein types (WPC, WPI, and WPH) used for beverage manufacturing by combining the spectral signatures collected from portable mid-infrared spectrometers with pattern recognition analysis. Whey protein powders from different suppliers are produced using a large number of processing and compositional variables, resulting in variation in composition, concentration, protein structure, and thus functionality. Whey protein powders, including whey protein isolates, whey protein concentrates and whey protein hydrolysates, were obtained from different suppliers and their spectra collected using portable mid-infrared spectrometers (single and triple reflection) by pressing the powder onto an Attenuated Total Reflectance (ATR) diamond crystal with a pressure clamp. Spectra were analyzed by soft independent modeling of class analogy (SIMCA), generating a classification model able to differentiate whey protein types by forming tight clusters with interclass distance values of >3, considered to be significantly different from each other. The major bands centered at 1640 and 1580 cm⁻¹ were responsible for the separation and were associated with differences in amide I and amide II vibrations of proteins, respectively. Another important band in whey protein clustering was associated with carboxylate vibrations of acidic amino acids (~1570 cm⁻¹). The use of a portable mid-IR spectrometer combined with pattern recognition analysis showed potential for discriminating whey protein ingredients, which can help streamline the analytical procedure so that it is more applicable to field-based screening of ingredients. A rapid, simple and accurate method was developed to authenticate commercial whey protein products by using portable mid-infrared spectrometers combined with chemometrics, which could help ensure the functionality of whey protein ingredients in food applications. © 2015 Institute of Food Technologists®
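
    SIMCA builds one principal-component model per class and assigns new samples by how well each class subspace reconstructs them. The sketch below is a SIMCA-style illustration on synthetic stand-in spectra (random arrays, not the paper's data), using scikit-learn's PCA; the published work used dedicated chemometrics software.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# synthetic stand-in "spectra": 30 samples x 200 wavenumbers per class
classes = {"WPC": rng.normal(0.0, 1.0, (30, 200)),
           "WPI": rng.normal(0.5, 1.0, (30, 200)),
           "WPH": rng.normal(1.0, 1.0, (30, 200))}

# one PCA class model per whey protein type
models = {name: PCA(n_components=5).fit(X) for name, X in classes.items()}

def simca_classify(x):
    """Assign spectrum x to the class with the smallest PCA reconstruction residual."""
    residuals = {}
    for name, pca in models.items():
        x_hat = pca.inverse_transform(pca.transform(x[None, :]))
        residuals[name] = np.linalg.norm(x - x_hat[0])
    return min(residuals, key=residuals.get), residuals

label, res = simca_classify(classes["WPI"][0])
print(label, {k: round(v, 2) for k, v in res.items()})
```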

  19. Automatic speech recognition using a predictive echo state network classifier.

    PubMed

    Skowronski, Mark D; Harris, John G

    2007-04-01

    We have combined an echo state network (ESN) with a competitive state machine framework to create a classification engine called the predictive ESN classifier. We derive the expressions for training the predictive ESN classifier and show that the model was significantly more noise-robust than a hidden Markov model in noisy speech classification experiments, by 8 ± 1 dB signal-to-noise ratio. The simple training algorithm and noise robustness of the predictive ESN classifier make it an attractive classification engine for automatic speech recognition.
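
    A minimal reconstruction of the predictive-ESN idea, under assumptions: a fixed random reservoir drives the states, one ridge-regression readout per class is trained to predict the next feature frame, and a test sequence is assigned to the class whose readout predicts it best. The feature dimensions and synthetic "utterances" are illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 13, 200                        # e.g. 13 cepstral features per frame
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def run_reservoir(X):
    """Drive the reservoir with frame sequence X, return the state sequence."""
    h, states = np.zeros(n_res), []
    for x in X:
        h = np.tanh(W_in @ x + W @ h)
        states.append(h.copy())
    return np.array(states)

def train_readout(sequences, ridge=1e-2):
    """Ridge regression from reservoir state to the next input frame."""
    S = np.vstack([run_reservoir(X)[:-1] for X in sequences])
    Y = np.vstack([X[1:] for X in sequences])
    return np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)

def prediction_error(X, W_out):
    S = run_reservoir(X)
    return np.mean((S[:-1] @ W_out - X[1:]) ** 2)

# synthetic two-class data; classify by the lowest next-frame prediction error
class_seqs = {c: [rng.normal(c, 1, (50, n_in)) for _ in range(5)] for c in (0, 1)}
readouts = {c: train_readout(seqs) for c, seqs in class_seqs.items()}
test = class_seqs[1][0]
print(min(readouts, key=lambda c: prediction_error(test, readouts[c])))
```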

  20. Modeling 3-D objects with planar surfaces for prediction of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Koch, M. B.; Beck, F. B.; Cockrell, C. R.

    1992-01-01

    Electromagnetic scattering analysis of objects at resonance is difficult because low frequency techniques are slow and computer intensive, and high frequency techniques may not be reliable. A new technique for predicting the electromagnetic backscatter from electrically conducting objects at resonance is studied. This technique is based on modeling three dimensional objects as a combination of flat plates where some of the plates are blocking the scattering from others. A cube is analyzed as a simple example. The preliminary results compare well with the Geometrical Theory of Diffraction and with measured data.

  1. Scaling for the SOL/separatrix χ ⊥ following from the heuristic drift model for the power scrape-off layer width

    NASA Astrophysics Data System (ADS)

    Huber, A.; Chankin, A. V.

    2017-06-01

    A simple two-point representation of the tokamak scrape-off layer (SOL) in the conduction-limited regime, based on the parallel and perpendicular energy balance equations in combination with the heat flux width predicted by a heuristic drift-based model, was used to derive a scaling for the cross-field thermal diffusivity χ⊥. For fixed plasma shape, and neglecting weak power dependences (exponents of 1/8), the scaling χ⊥ ∝ P_SOL/(n B_θ R²) is derived.
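
    The scaling fixes only the proportionality; any numerical evaluation needs an assumed prefactor. A short worked evaluation, with illustrative tokamak-scale parameter values:

```python
def chi_perp_scaling(P_SOL, n, B_theta, R, C=1.0):
    """chi_perp ∝ P_SOL / (n * B_theta * R**2); C is an assumed prefactor."""
    return C * P_SOL / (n * B_theta * R**2)

# illustrative values: 5 MW into the SOL, n = 3e19 m^-3, B_theta = 0.3 T, R = 1.65 m
print(chi_perp_scaling(P_SOL=5e6, n=3e19, B_theta=0.3, R=1.65))
```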

  2. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yields the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
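
    To show the sub-model chaining idea, here is a hedged, simplified sketch for one time step: plane-of-array irradiance to cell temperature to DC power to AC power. The thermal model (NOCT-style), the -0.5 %/°C temperature coefficient and the 96% inverter efficiency are typical defaults assumed for illustration, not NREL's exact sub-models or hidden parameters.

```python
def pv_power_ac(poa_wm2, t_amb_c, p_dc0_w=4000.0, gamma=-0.005,
                noct_c=45.0, eta_inv=0.96):
    """Chain of simple PV sub-models for a single time step (illustrative only)."""
    t_cell = t_amb_c + (noct_c - 20.0) / 800.0 * poa_wm2   # simple NOCT cell-temp model
    p_dc = p_dc0_w * (poa_wm2 / 1000.0) * (1 + gamma * (t_cell - 25.0))
    return max(p_dc, 0.0) * eta_inv                         # flat inverter efficiency

print(pv_power_ac(poa_wm2=850.0, t_amb_c=28.0))
```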

  3. High-temperature superconductivity using a model of hydrogen bonds.

    PubMed

    Kaplan, Daniel; Imry, Yoseph

    2018-05-29

    Recently, there has been much interest in high-temperature superconductors, and more recently in hydrogen-based superconductors. This work offers a simple model that explains the behavior of the superconducting gap based on naive BCS (Bardeen-Cooper-Schrieffer) theory and reproduces most effects seen in experiments, including the isotope effect and the enhancement of the critical temperature Tc as a function of pressure. We show that this is due to a combination of the factors appearing in the gap equation: the matrix element between the proton states and the level splitting of the proton.

  4. A buoyant tornado-probe concept incorporating an inverted lifting device [and balloon combination]

    NASA Technical Reports Server (NTRS)

    Grant, F. C.

    1973-01-01

    Addition of an inverted lifting device to a simple balloon probe is shown to make possible low-altitude entry into tornado cores with easier launch conditions than for the simple balloon probe. Balloon-lifter combinations are particularly suitable for penetration of tornadoes with average to strong circulation, but tornadoes of less than average circulation, which are inaccessible to simple balloon probes, become accessible. The increased launch radius which is needed for access to tornadoes over a wide range of circulation results in entry times of about 3 minutes. For a simple balloon probe, the uninflated balloon must first be dropped on, or near, the track of the tornado from a safe distance. The increase in typical launch radius from about 0.75 kilometer to slightly over 1.0 kilometer with a balloon-lifter combination suggests that a direct air launch may be feasible.

  5. Nonlinear and threshold-dominated runoff generation controls DOC export in a small peat catchment

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Broder, T.; Biester, H.

    2017-03-01

    We used a relatively simple two-layer, coupled hydrology-biogeochemistry model to simultaneously simulate streamflow and stream dissolved organic carbon (DOC) concentrations in a small lead- and arsenic-contaminated upland peat catchment in northwestern Germany. The model procedure was informed by an initial data mining analysis, in combination with regression relationships of discharge, DOC, and element export. We assessed the internal model DOC processing based on stream DOC hysteresis patterns and 3-hourly groundwater level and soil DOC data for two consecutive summer periods in 2013 and 2014. The parsimonious model (i.e., few calibrated parameters) showed the importance of nonlinear and rapid near-surface runoff generation mechanisms that caused around 60% of the simulated DOC load. The total load was high even though these pathways were only activated during storm events, on average 30% of the monitoring time, as also shown by the experimental data. Overall, the drier period 2013 resulted in increased nonlinearity but exported less DOC (115 ± 11 kg C ha⁻¹ yr⁻¹) compared to the equivalent but wetter period in 2014 (189 ± 38 kg C ha⁻¹ yr⁻¹). The exceedance of a critical water table threshold (-10 cm) triggered a rapid near-surface runoff response with associated higher DOC transport, connecting all available DOC pools, and subsequent dilution. We conclude that the combination of detailed experimental work with relatively simple, coupled hydrology-biogeochemistry models not only allowed the model to be internally constrained but also provided important insight into how DOC and tightly coupled pollutants or trace elements are mobilized.
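
    A conceptual sketch of this model class, under assumptions: a single storage bucket with a slow baseflow pathway and a threshold-activated fast near-surface pathway, each carrying a fixed DOC concentration. All parameters and the rainfall forcing are made up for illustration; the paper's calibrated model is more detailed.

```python
import numpy as np

def simulate(rain_mm, k_slow=0.02, k_fast=0.5, threshold_mm=50.0):
    """Return per-day (slow flow, fast flow) in mm from a two-bucket-style store."""
    storage, rows = 40.0, []
    for p in rain_mm:
        storage += p
        q_slow = k_slow * storage
        q_fast = k_fast * max(storage - threshold_mm, 0.0)   # threshold-activated
        storage -= q_slow + q_fast
        rows.append((q_slow, q_fast))
    return np.array(rows)

rng = np.random.default_rng(2)
rain = rng.gamma(0.4, 8.0, size=120)          # synthetic summer rainfall (mm/day)
q = simulate(rain)
doc_slow, doc_fast = 5.0, 40.0                # assumed DOC concentrations (mg/L)
load = q[:, 0] * doc_slow + q[:, 1] * doc_fast
frac_fast = (q[:, 1] * doc_fast).sum() / load.sum()
active = (q[:, 1] > 0).mean()
print(f"fast path carries {frac_fast:.0%} of DOC load, active {active:.0%} of days")
```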

  6. Influence of forming conditions on fiber tilt

    Treesearch

    David W. Vahey; John M. Considine; Michael A. MacGregor

    2013-01-01

    Fiber tilt describes the projection of fiber length in the thickness direction of paper. The projection is described by the tilt angle of fibers with respect to the plane of the sheet. A simple model for fiber tilt is based on jet-to-wire velocity differential in combination with cross-flows on the wire. The tilt angle of a fiber is found to vary as the sine of its in-...

  7. Influence of smooth temperature variation on hotspot ignition

    DOE PAGES

    Reinbacher, Fynn; Regele, Jonathan David

    2017-10-06

    Autoignition in thermally stratified reactive mixtures originates in localised hotspots. The ignition behaviour is often characterised using linear temperature gradients and, more recently, constant temperature plateaus combined with temperature gradients. Acoustic timescale characterisation of plateau regions has been successfully used to characterise the type of mechanical disturbance that will be created from a plateau core ignition. This work combines linear temperature gradients with superelliptic cores in order to more accurately account for a local temperature maximum of finite size and the smooth temperature variation contained inside realistic hotspot centres. A one-step Arrhenius reaction is used to model an H2–air reactive mixture. Using the superelliptic approach, a range of behaviours for temperature distributions are investigated by varying the temperature profile between the gradient-only and plateau-and-gradient bounding cases. Each superelliptic case is compared to a respective plateau-and-gradient case where simple acoustic timescale characterisation may be performed. It is shown that hot spots with excitation-to-acoustic timescale ratios sufficiently greater than unity exhibit behaviour very similar to a simple plateau-gradient model. Furthermore, for larger hot spots with timescale ratios sufficiently less than unity, the reaction behaviour is highly dependent on the smooth temperature profile contained within the core region.

  8. Variable Combinations of Specific Ephrin Ligand/Eph Receptor Pairs Control Embryonic Tissue Separation

    PubMed Central

    Rohani, Nazanin; Parmeggiani, Andrea; Winklbauer, Rudolf; Fagotto, François

    2014-01-01

    Ephrins and Eph receptors are involved in the establishment of vertebrate tissue boundaries. The complexity of the system is puzzling, however: in many instances, tissues express multiple ephrins and Ephs on both sides of the boundary, a situation that should in principle cause repulsion between cells within each tissue. Although co-expression of ephrins and Eph receptors is widespread in embryonic tissues, neurons, and cancer cells, it is still unresolved how the respective signals are integrated into a coherent output. We present a simple explanation for the confinement of repulsion to the tissue interface: Using the dorsal ectoderm–mesoderm boundary of the Xenopus embryo as a model, we identify selective functional interactions between ephrin–Eph pairs that are expressed in partial complementary patterns. The combined repulsive signals add up to be strongest across the boundary, where they reach sufficient intensity to trigger cell detachments. The process can be largely explained using a simple model based exclusively on relative ephrin and Eph concentrations and binding affinities. We generalize these findings for the ventral ectoderm–mesoderm boundary and the notochord boundary, both of which appear to function on the same principles. These results provide a paradigm for how developmental systems may integrate multiple cues to generate discrete local outcomes. PMID:25247423

  9. Rational design and dynamics of self-propelled colloidal bead chains: from rotators to flagella.

    PubMed

    Vutukuri, Hanumantha Rao; Bet, Bram; van Roij, René; Dijkstra, Marjolein; Huck, Wilhelm T S

    2017-12-01

    The quest for designing new self-propelled colloids is fuelled by the demand for simple experimental models to study the collective behaviour of their more complex natural counterparts. Most synthetic self-propelled particles move by converting the input energy into translational motion. In this work we address the question of whether simple self-propelled spheres can assemble into more complex structures that exhibit rotational motion, possibly coupled with translational motion as in flagella. We exploit a combination of induced dipolar interactions and a bonding step to create permanent linear bead chains, composed of self-propelled Janus spheres, with a well-controlled internal structure. Next, we study how flexibility between individual swimmers in a chain affects its swimming behaviour. Permanent rigid chains showed only active rotational or spinning motion, whereas longer semi-flexible chains showed both translational and rotational motion, resembling flagella-like motion, in the presence of the fuel. Moreover, we are able to reproduce our experimental results using numerical calculations with a minimal model, which includes full hydrodynamic interactions with the fluid. Our method is general and opens a new way to design novel self-propelled colloids with complex swimming behaviours, using different complex starting building blocks in combination with the flexibility between them.

  10. Variable combinations of specific ephrin ligand/Eph receptor pairs control embryonic tissue separation.

    PubMed

    Rohani, Nazanin; Parmeggiani, Andrea; Winklbauer, Rudolf; Fagotto, François

    2014-09-01

    Ephrins and Eph receptors are involved in the establishment of vertebrate tissue boundaries. The complexity of the system is puzzling, however: in many instances, tissues express multiple ephrins and Ephs on both sides of the boundary, a situation that should in principle cause repulsion between cells within each tissue. Although co-expression of ephrins and Eph receptors is widespread in embryonic tissues, neurons, and cancer cells, it is still unresolved how the respective signals are integrated into a coherent output. We present a simple explanation for the confinement of repulsion to the tissue interface: Using the dorsal ectoderm-mesoderm boundary of the Xenopus embryo as a model, we identify selective functional interactions between ephrin-Eph pairs that are expressed in partial complementary patterns. The combined repulsive signals add up to be strongest across the boundary, where they reach sufficient intensity to trigger cell detachments. The process can be largely explained using a simple model based exclusively on relative ephrin and Eph concentrations and binding affinities. We generalize these findings for the ventral ectoderm-mesoderm boundary and the notochord boundary, both of which appear to function on the same principles. These results provide a paradigm for how developmental systems may integrate multiple cues to generate discrete local outcomes.

  11. Influence of smooth temperature variation on hotspot ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinbacher, Fynn; Regele, Jonathan David

    Autoignition in thermally stratified reactive mixtures originates in localised hotspots. The ignition behaviour is often characterised using linear temperature gradients and, more recently, constant temperature plateaus combined with temperature gradients. Acoustic timescale characterisation of plateau regions has been successfully used to characterise the type of mechanical disturbance that will be created from a plateau core ignition. This work combines linear temperature gradients with superelliptic cores in order to more accurately account for a local temperature maximum of finite size and the smooth temperature variation contained inside realistic hotspot centres. A one-step Arrhenius reaction is used to model an H2–air reactive mixture. Using the superelliptic approach, a range of behaviours for temperature distributions are investigated by varying the temperature profile between the gradient-only and plateau-and-gradient bounding cases. Each superelliptic case is compared to a respective plateau-and-gradient case where simple acoustic timescale characterisation may be performed. It is shown that hot spots with excitation-to-acoustic timescale ratios sufficiently greater than unity exhibit behaviour very similar to a simple plateau-gradient model. Furthermore, for larger hot spots with timescale ratios sufficiently less than unity, the reaction behaviour is highly dependent on the smooth temperature profile contained within the core region.

  12. Imaging and Quantitation of a Succession of Transient Intermediates Reveal the Reversible Self-Assembly Pathway of a Simple Icosahedral Virus Capsid.

    PubMed

    Medrano, María; Fuertes, Miguel Ángel; Valbuena, Alejandro; Carrillo, Pablo J P; Rodríguez-Huete, Alicia; Mateu, Mauricio G

    2016-11-30

    Understanding the fundamental principles underlying supramolecular self-assembly may facilitate many developments, from novel antivirals to self-organized nanodevices. Icosahedral virus particles constitute paradigms to study self-assembly using a combination of theory and experiment. Unfortunately, assembly pathways of the structurally simplest virus capsids, those more accessible to detailed theoretical studies, have been difficult to study experimentally. We have enabled the in vitro self-assembly under close to physiological conditions of one of the simplest virus particles known, the minute virus of mice (MVM) capsid, and experimentally analyzed its pathways of assembly and disassembly. A combination of electron microscopy and high-resolution atomic force microscopy was used to structurally characterize and quantify a succession of transient assembly and disassembly intermediates. The results provided an experiment-based model for the reversible self-assembly pathway of a most simple (T = 1) icosahedral protein shell. During assembly, trimeric capsid building blocks are sequentially added to the growing capsid, with pentamers of building blocks and incomplete capsids missing one building block as conspicuous intermediates. This study provided experimental verification of many features of self-assembly of a simple T = 1 capsid predicted by molecular dynamics simulations. It also demonstrated atomic force microscopy imaging and automated analysis, in combination with electron microscopy, as a powerful single-particle approach to characterize at high resolution and quantify transient intermediates during supramolecular self-assembly/disassembly reactions. Finally, the efficient in vitro self-assembly achieved for the oncotropic, cell nucleus-targeted MVM capsid may facilitate its development as a drug-encapsidating nanoparticle for anticancer targeted drug delivery.

  13. Multidomain proteins under force

    NASA Astrophysics Data System (ADS)

    Valle-Orero, Jessica; Andrés Rivas-Pardo, Jaime; Popa, Ionel

    2017-04-01

    Advancements in single-molecule force spectroscopy techniques such as atomic force microscopy and magnetic tweezers allow investigation of how domain folding under force can play a physiological role. Combining these techniques with protein engineering and HaloTag covalent attachment, we investigate similarities and differences between four model proteins: I10 and I91—two immunoglobulin-like domains from the muscle protein titin, and two α + β fold proteins—ubiquitin and protein L. These proteins show a different mechanical response and have unique extensions under force. Remarkably, when normalized to their contour length, the size of the unfolding and refolding steps as a function of force reduces to a single master curve. This curve can be described using standard models of polymer elasticity, explaining the entropic nature of the measured steps. We further validate our measurements with a simple energy landscape model, which combines protein folding with polymer physics and accounts for the complex nature of tandem domains under force. This model can become a useful tool to help in deciphering the complexity of multidomain proteins operating under force.

  14. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions (shadow + vegetation, bare soil + roads, and buildings, respectively); (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and conducting simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision performances, respectively.

  15. Chaos and unpredictability in evolution.

    PubMed

    Doebeli, Michael; Ispolatov, Iaroslav

    2014-05-01

    The possibility of complicated dynamic behavior driven by nonlinear feedbacks in dynamical systems has revolutionized science in the latter part of the last century. Yet despite examples of complicated frequency dynamics, the possibility of long-term evolutionary chaos is rarely considered. The concept of "survival of the fittest" is central to much evolutionary thinking and embodies a perspective of evolution as a directional optimization process exhibiting simple, predictable dynamics. This perspective is adequate for simple scenarios, when frequency-independent selection acts on scalar phenotypes. However, in most organisms many phenotypic properties combine in complicated ways to determine ecological interactions, and hence frequency-dependent selection. Therefore, it is natural to consider models for evolutionary dynamics generated by frequency-dependent selection acting simultaneously on many different phenotypes. Here we show that complicated, chaotic dynamics of long-term evolutionary trajectories in phenotype space is very common in a large class of such models when the dimension of phenotype space is large, and when there are selective interactions between the phenotypic components. Our results suggest that the perspective of evolution as a process with simple, predictable dynamics covers only a small fragment of long-term evolution. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  16. [Observation on therapeutic effect of acupuncture combined with chinese herbs on polycystic ovary syndrome of kidney deficiency and phlegm stasis type].

    PubMed

    Shi, Yin; Feng, Hui-jun; Liu, Hui-rong; Zhu, Dan

    2009-02-01

    To observe the clinical therapeutic effect of acupuncture combined with Chinese herbs on polycystic ovary syndrome of kidney deficiency and phlegm stasis type and to probe into the mechanism. Sixty-three cases of polycystic ovary syndrome of kidney deficiency and phlegm stasis type were randomly divided into a combined acupuncture and Chinese herb group (n=32), treated with acupuncture at Qihai (CV 6), Guanyuan (CV 4), et al. and oral administration of Chinese herbs, and a simple Chinese herb group (n=31), treated with oral administration of the same Chinese herbs as in the combined acupuncture and Chinese herb group. The therapeutic effects and changes of follicle stimulating hormone (FSH), luteotropic hormone (LH), testosterone (T) and LH/FSH were compared between the two groups. The total effective rate was 93.8% in the combined acupuncture and Chinese herb group and 80.6% in the simple Chinese herb group, the former being significantly better than the latter (P < 0.05). The decrease of T in the combined acupuncture and Chinese herb group was significantly superior to that in the simple Chinese herb group (P < 0.01). Acupuncture combined with Chinese herb therapy is superior to simple Chinese herb therapy in the clinical therapeutic effect on polycystic ovary syndrome of kidney deficiency and phlegm stasis type and in the decrease of T level, indicating this method is a better one for polycystic ovary syndrome of kidney deficiency and phlegm stasis type.

  17. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095
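
    A minimal sketch of the Bayesian-inversion half of this two-stage idea: an expert-informed prior on one model parameter is confronted with observations via a Metropolis sampler. The "AIS model" here is a stand-in linear melt model and every number is hypothetical; the paper's probabilistic inversion step and multi-parameter model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(sensitivity, temps):
    """Stand-in model: melt contribution proportional to warming."""
    return sensitivity * temps

temps = np.linspace(0.2, 1.0, 30)          # warming above preindustrial (degC)
obs = model(2.0, temps) + rng.normal(0, 0.3, temps.size)   # synthetic observations

def log_post(s):
    lp = -0.5 * ((s - 1.5) / 1.0) ** 2     # expert-informed prior: N(1.5, 1.0)
    ll = -0.5 * np.sum((obs - model(s, temps)) ** 2) / 0.3 ** 2
    return lp + ll

# simple Metropolis sampler over the sensitivity parameter
samples, s = [], 1.5
for _ in range(5000):
    prop = s + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(s):
        s = prop
    samples.append(s)
print(f"posterior mean sensitivity: {np.mean(samples[1000:]):.2f}")
```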

  18. Sewer deterioration modeling with condition data lacking historical records.

    PubMed

    Egger, C; Scheidegger, A; Reichert, P; Maurer, M

    2013-11-01

    Accurate predictions of future conditions of sewer systems are needed for efficient rehabilitation planning. For this purpose, a range of sewer deterioration models has been proposed which can be improved by calibration with observed sewer condition data. However, if datasets lack historical records, calibration requires a combination of deterioration and sewer rehabilitation models, as the current state of the sewer network reflects the combined effect of both processes. Otherwise, physical sewer lifespans are overestimated as pipes in poor condition that were rehabilitated are no longer represented in the dataset. We therefore propose the combination of a sewer deterioration model with a simple rehabilitation model which can be calibrated with datasets lacking historical information. We use Bayesian inference for parameter estimation due to the limited information content of the data and limited identifiability of the model parameters. A sensitivity analysis gives an insight into the model's robustness against the uncertainty of the prior. The analysis reveals that the model results are principally sensitive to the means of the priors of specific model parameters, which should therefore be elicited with care. The importance sampling technique applied for the sensitivity analysis permitted efficient implementation for regional sensitivity analysis with reasonable computational outlay. Application of the combined model with both simulated and real data shows that it effectively compensates for the bias induced by a lack of historical data. Thus, the novel approach makes it possible to calibrate sewer pipe deterioration models even when historical condition records are lacking. Since at least some prior knowledge of the model parameters is available, the strength of Bayesian inference is particularly evident in the case of small datasets. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution image airborne sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  20. Improving Estimates and Forecasts of Lake Carbon Pools and Fluxes Using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zwart, J. A.; Hararuk, O.; Prairie, Y.; Solomon, C.; Jones, S.

    2017-12-01

    Lakes are biogeochemical hotspots on the landscape, contributing significantly to the global carbon cycle despite their small areal coverage. Observations and models of lake carbon pools and fluxes are rarely explicitly combined through data assimilation despite significant use of this technique in other fields with great success. Data assimilation adds value to both observations and models by constraining models with observations of the system and by leveraging knowledge of the system formalized by the model to objectively fill information gaps. In this analysis, we highlight the utility of data assimilation in lake carbon cycling research by using the Ensemble Kalman Filter to combine simple lake carbon models with observations of lake carbon pools. We demonstrate the use of data assimilation to improve a model's representation of lake carbon dynamics, to reduce uncertainty in estimates of lake carbon pools and fluxes, and to improve the accuracy of carbon pool size estimates relative to estimates derived from observations alone. Data assimilation techniques should be embraced as valuable tools for lake biogeochemists interested in learning about ecosystem dynamics and forecasting ecosystem states and processes.
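
    A minimal Ensemble Kalman Filter update sketch for the scalar case: an ensemble of modeled lake DOC pool states is corrected with one observation of the pool, using the perturbed-observations form. All values are illustrative, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(4)

n_ens = 100
ensemble = rng.normal(500.0, 60.0, n_ens)   # forecast DOC pool (kg C) with spread
obs, obs_err = 430.0, 20.0                  # observed pool and its standard error

# Kalman gain from ensemble variance; perturbed-observations EnKF update
P = np.var(ensemble, ddof=1)
K = P / (P + obs_err ** 2)
perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)
analysis = ensemble + K * (perturbed_obs - ensemble)

print(f"forecast {ensemble.mean():.0f} +/- {ensemble.std():.0f} kg C")
print(f"analysis {analysis.mean():.0f} +/- {analysis.std():.0f} kg C")
```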

  1. Predictions from a flavour GUT model combined with a SUSY breaking sector

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Hohl, Christian

    2017-10-01

    We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A4 family symmetry, plus additional discrete "shaping symmetries" and a ℤ4^R symmetry. We calculate the soft terms and identify the relevant high-scale input parameters, and investigate the resulting predictions for the low-scale observables, such as flavour-violating processes, the sparticle spectrum and the dark matter relic density.

  2. Prediction of power requirements for a longwall armored face conveyor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broadfoot, A.R.; Betz, R.E.

    1997-01-01

    Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving to be inadequate. The result has either been a costly loss of production due to AFC stalling or component failure, or a larger than necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.

  3. Mitochondrial fusion through membrane automata.

    PubMed

    Giannakis, Konstantinos; Andronikos, Theodore

    2015-01-01

    Studies have shown that malfunctions in mitochondrial processes can be blamed for diseases. However, the mechanism behind these operations is not yet sufficiently clear. In this work we present a novel approach to describe a biomolecular model for mitochondrial fusion using notions from membrane computing. We use a case study defined in BioAmbient calculus and we show how to translate it in terms of a P automata variant. We combine brane calculi with (mem)brane automata to produce a new scheme capable of describing simple, realistic models. We propose the further use of similar methods and the testing of other biomolecular models with the same behaviour.

  4. Fatigue crack growth with single overload - Measurement and modeling

    NASA Technical Reports Server (NTRS)

    Davidson, D. L.; Hudak, S. J., Jr.; Dexter, R. J.

    1987-01-01

    This paper compares experiments with an analytical model of fatigue crack growth under variable amplitude loading. The stereoimaging technique was used to measure displacements near the tips of fatigue cracks undergoing simple variations in load amplitude: single overloads and overload/underload combinations. Measured displacements were used to compute strains, and stresses were determined from the strains. Local values of crack driving force (ΔK effective) were determined using both locally measured opening loads and crack tip opening displacements. Experimental results were compared with simulations made for the same load variation conditions using Newman's FAST-2 model. Residual stresses caused by overloads, crack opening loads, and growth retardation periods were compared.

  5. Oscillations and Multiple Equilibria in Microvascular Blood Flow.

    PubMed

    Karst, Nathaniel J; Storey, Brian D; Geddes, John B

    2015-07-01

    We investigate the existence of oscillatory dynamics and multiple steady-state flow rates in a network with a simple topology and in vivo microvascular blood flow constitutive laws. Unlike many previous analytic studies, we employ the most biologically relevant models of the physical properties of whole blood. Through a combination of analytic and numeric techniques, we predict in a series of two-parameter bifurcation diagrams a range of dynamical behaviors, including multiple equilibrium flow configurations, simple oscillations in volumetric flow rate, and multiple coexistent limit cycles at physically realizable parameters. We show that complexity in network topology is not necessary for complex behaviors to arise and that nonlinear rheology, in particular the plasma skimming effect, is sufficient to support oscillatory dynamics similar to those observed in vivo.

  6. Graphene oxide caged in cellulose microbeads for removal of malachite green dye from aqueous solution.

    PubMed

    Zhang, Xiaomei; Yu, Hongwen; Yang, Hongjun; Wan, Yuchun; Hu, Hong; Zhai, Zhuang; Qin, Jieming

    2015-01-01

    A simple sol-gel method using non-toxic and cost-effective precursors has been developed to prepare graphene oxide (GO)/cellulose bead (GOCB) composites for the removal of dye pollutants. Taking advantage of the combined benefits of GO and cellulose, the prepared GOCB composites exhibit excellent removal efficiency towards malachite green (>96%) and can be reused more than 5 times through a simple filtration method. The high decontamination performance of the GOCB system is strongly dependent on the amount of encapsulated GO, temperature and pH value. In addition, the adsorption behavior of this new adsorbent fits well with the Langmuir isotherm and a pseudo-second-order kinetic model. Copyright © 2014 Elsevier Inc. All rights reserved.
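
    For reference, here is how the two named models are typically fitted; the adsorption data below are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):                  # equilibrium isotherm
    return qmax * KL * Ce / (1 + KL * Ce)

def pseudo_second_order(t, qe, k2):          # integrated kinetic model
    return k2 * qe**2 * t / (1 + k2 * qe * t)

Ce = np.array([2, 5, 10, 20, 50, 100.0])     # equilibrium concentration (mg/L)
qe_obs = langmuir(Ce, 80, 0.05) + np.random.default_rng(5).normal(0, 1, Ce.size)
(qmax, KL), _ = curve_fit(langmuir, Ce, qe_obs, p0=[50, 0.01])
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")

t = np.array([5, 10, 20, 40, 80, 160.0])     # contact time (min)
qt_obs = pseudo_second_order(t, 75, 0.002) + np.random.default_rng(6).normal(0, 1, t.size)
(qe, k2), _ = curve_fit(pseudo_second_order, t, qt_obs, p0=[60, 0.001])
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```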

  7. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
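
    A toy sketch of performance-based weighting in the REA spirit: each model's weight shrinks with its bias against observations across several metrics, and the ensemble change is the weighted mean. Numbers and the reliability factor are illustrative; the actual REA formulas (with the convergence criterion removed, as above) are more involved.

```python
import numpy as np

obs_bias = np.array([[0.4, 0.9],            # |bias| per model (rows) x metric (cols)
                     [1.5, 2.0],
                     [0.8, 0.5]])
projections = np.array([2.1, 3.4, 2.6])     # degC change per model

eps = 0.5                                                   # natural-variability scale
reliability = np.minimum(eps / obs_bias, 1.0).prod(axis=1)  # multi-metric reliability
weights = reliability / reliability.sum()

rea_change = weights @ projections
print(f"weights {np.round(weights, 2)}; REA {rea_change:.2f} vs mean {projections.mean():.2f}")
```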

  8. Coupling of rainfall-induced landslide triggering model with predictions of debris flow runout distances

    NASA Astrophysics Data System (ADS)

    Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani

    2014-05-01

    Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and runout paths of debris flows depend on the volume, composition and initiation zone of the released material, and these are required to make accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris flow paths were computed for landslides predicted with the CHLT model for a certain range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the predictability and impact assessment of such abruptly released mass.

  9. Relationships between rainfall and Combined Sewer Overflow (CSO) occurrences

    NASA Astrophysics Data System (ADS)

    Mailhot, A.; Talbot, G.; Lavallée, B.

    2015-04-01

    Combined Sewer Overflow (CSO) has been recognized as a major environmental issue in many countries. In Canada, the proposed reinforcement of the CSO frequency regulations will result in new constraints on municipal development. Municipalities will have to demonstrate that new developments do not increase CSO frequency above a reference level based on historical CSO records. Governmental agencies will also have to define a framework to assess the impact of new developments on CSO frequency and the efficiency of the various proposed measures to maintain CSO frequency at its historic level. In such a context, it is important to correctly assess the average number of days with CSO and to define relationships between CSO frequency and rainfall characteristics. This paper investigates such relationships using available CSO and rainfall datasets for Quebec. CSO records for 4285 overflow structures (OS) were analyzed. A simple model based on rainfall thresholds was developed to forecast the occurrence of CSO on a given day based on daily rainfall values. The estimated probability of days with CSO has been used to estimate the rainfall threshold value at each OS by imposing that the probability of exceeding this rainfall value on a given day be equal to the estimated probability of days with CSO. The forecast skill of this model was assessed for 3437 OS using contingency tables. The statistical significance of the forecast skill could be assessed for 64.2% of these OS. The threshold model demonstrated significant forecast skill for 91.3% of these OS, confirming that for most OS a simple threshold model can be used to assess the occurrence of CSO.
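
    A sketch of the threshold idea under assumptions: choose the rainfall threshold so that the fraction of days exceeding it matches the observed fraction of CSO days, then score the resulting forecast with a 2x2 contingency table. The rainfall and CSO records are synthetic, not the Quebec data.

```python
import numpy as np

rng = np.random.default_rng(7)
rain = rng.gamma(0.3, 10.0, size=2000)               # daily rainfall (mm)
cso = rain + rng.normal(0, 4, rain.size) > 12.0      # synthetic 'observed' CSO days

p_cso = cso.mean()
threshold = np.quantile(rain, 1.0 - p_cso)           # match exceedance probability
forecast = rain > threshold

# 2x2 contingency-table skill scores
hits = np.sum(forecast & cso)
misses = np.sum(~forecast & cso)
false_alarms = np.sum(forecast & ~cso)
print(f"threshold {threshold:.1f} mm; hit rate {hits / (hits + misses):.2f}, "
      f"false-alarm ratio {false_alarms / (hits + false_alarms):.2f}")
```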

  10. Nonequilibrium thermodynamics of the shear-transformation-zone model

    NASA Astrophysics Data System (ADS)

    Luo, Alan M.; Öttinger, Hans Christian

    2014-02-01

    The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.

  11. DAVE: A plug and play model for distributed multimedia application development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mines, R.F.; Friesen, J.A.; Yang, C.L.

    1994-07-01

    This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.

  12. Fostering Elementary School Students' Understanding of Simple Electricity by Combining Simulation and Laboratory Activities

    ERIC Educational Resources Information Center

    Jaakkola, T.; Nurmi, S.

    2008-01-01

    Computer simulations and laboratory activities have been traditionally treated as substitute or competing methods in science teaching. The aim of this experimental study was to investigate if it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Based…

  13. Combined Endoscopic/Sonographic-Based Risk Matrix Model for Predicting One-Year Risk of Surgery: A Prospective Observational Study of a Tertiary Center Severe/Refractory Crohn's Disease Cohort.

    PubMed

    Rispo, Antonio; Imperatore, Nicola; Testa, Anna; Bucci, Luigi; Luglio, Gaetano; De Palma, Giovanni Domenico; Rea, Matilde; Nardone, Olga Maria; Caporaso, Nicola; Castiglione, Fabiana

    2018-03-08

    In the management of Crohn's Disease (CD) patients, having a simple score combining clinical, endoscopic and imaging features to predict the risk of surgery could help to tailor treatment more effectively. AIMS: to prospectively evaluate the one-year risk factors for surgery in refractory/severe CD and to generate a risk matrix for predicting the probability of surgery at one year. CD patients needing a disease re-assessment at our tertiary IBD centre underwent clinical, laboratory, endoscopy and bowel sonography (BS) examinations within one week. The optimal cut-off values in predicting surgery were identified using ROC curves for the Simple Endoscopic Score for CD (SES-CD), bowel wall thickness (BWT) at BS, and small bowel CD extension at BS. Binary logistic regression and Cox's regression were then carried out. Finally, the probabilities of surgery were calculated for selected baseline levels of covariates and the results were arranged in a prediction matrix. Of 100 CD patients, 30 underwent surgery within one year. SES-CD ≥ 9 (OR 15.3; p<0.001), BWT ≥ 7 mm (OR 15.8; p<0.001), small bowel CD extension at BS ≥ 33 cm (OR 8.23; p<0.001) and stricturing/penetrating behavior (OR 4.3; p<0.001) were the only independent factors predictive of surgery at one year based on binary logistic and Cox's regressions. Our matrix model combined these risk factors, and the probability of surgery ranged from 0.48% to 87.5% (sixteen combinations). Our risk matrix combining clinical, endoscopic and ultrasonographic findings can accurately predict the one-year risk of surgery in patients with severe/refractory CD requiring a disease re-evaluation. This tool could be of value in clinical practice, serving as the basis for a tailored management of CD patients.
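
    A hedged reconstruction of how such a sixteen-cell risk matrix can be tabulated from a logistic model. The intercept below is back-calculated from the stated minimum-risk cell (0.48% with no factor present), which is an assumption for illustration; because the authors tabulated their own fitted model, these numbers will not reproduce the published matrix exactly.

```python
import numpy as np
from itertools import product

# reported odds ratios for the four independent risk factors
odds_ratios = {"SES-CD>=9": 15.3, "BWT>=7mm": 15.8,
               "extension>=33cm": 8.23, "stricturing/penetrating": 4.3}

b0 = np.log(0.0048 / (1 - 0.0048))          # assumed intercept: logit of 0.48%
betas = {k: np.log(v) for k, v in odds_ratios.items()}

# sixteen factor combinations (0 = absent, 1 = present), in the order above
for combo in product([0, 1], repeat=4):
    lp = b0 + sum(b * x for b, x in zip(betas.values(), combo))
    p = 1 / (1 + np.exp(-lp))
    print(combo, f"{100 * p:.1f}%")
```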

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.

    Here, we solve a simple theoretical model of time-evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time-evolving chain populations. The equations for random-time-gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random-time-gate and triggered-time-gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
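
    A toy Monte Carlo of a time-evolving fission chain in the spirit of the model described above (not the cited code): each internal neutron, after an exponential waiting time, either leaks or induces a fission emitting a Poisson-distributed number of daughters. The rates and multiplicity are illustrative, chosen subcritical so chains terminate.

```python
import numpy as np

rng = np.random.default_rng(8)

def run_chain(p_fission=0.3, rate=1.0, nu_mean=2.5, t_max=50.0):
    """Return leak times for one chain started by a single neutron at t=0."""
    neutrons = [0.0]                      # birth times of live internal neutrons
    leaks = []
    while neutrons:
        t = neutrons.pop() + rng.exponential(1.0 / rate)
        if t > t_max:
            continue
        if rng.uniform() < p_fission:
            for _ in range(rng.poisson(nu_mean)):
                neutrons.append(t)        # daughters born at the fission time
        else:
            leaks.append(t)               # neutron leaks (and may be detected)
    return leaks

counts = np.array([len(run_chain()) for _ in range(2000)])
print(f"mean leaked per chain: {counts.mean():.2f}; P(0 leaks) = {(counts == 0).mean():.2f}")
```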

  15. Models of globular proteins in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Wentzel, Nathaniel James

    Protein crystallization is a continuing area of research. Currently, there is no universal theory for the conditions required to crystallize proteins. A better understanding of protein crystallization will be helpful in determining protein structure and in preventing and treating certain diseases. In this thesis, we will extend the understanding of globular proteins in aqueous solutions by analyzing various models for protein interactions. Experiments have shown that the liquid-liquid phase separation curves for lysozyme in solution with salt depend on salt type and salt concentration. We analyze a simple square-well model for this system, whose well depth depends on salt type and salt concentration, to determine the phase coexistence surfaces from experimental data. The surfaces, calculated from a single Monte Carlo simulation and a simple scaling argument, are shown as a function of temperature, salt concentration and protein concentration for two typical salts. Urate oxidase from Aspergillus flavus is a protein used for studying the effects of polymers on the crystallization of large proteins. Experiments have determined some aspects of the phase diagram. We use Monte Carlo techniques and perturbation theory to predict the phase diagram for a model of urate oxidase in solution with PEG. The model used includes an electrostatic interaction, van der Waals attraction, and a polymer-induced depletion interaction. The results agree quantitatively with experiments. Anisotropy plays a role in globular protein interactions, including the formation of hemoglobin fibers in sickle cell disease. Also, the solvent conditions have been shown to play a strong role in the phase behavior of some aqueous protein solutions. Each has previously been treated separately in theoretical studies. Here we propose and analyze a simple, combined model that treats both anisotropy and solvent effects. We find that this model qualitatively explains some phase behavior, including the existence of a lower critical point under certain conditions.

  16. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
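
    A sketch of the strategy comparison described above, under assumptions: a k-nearest-neighbour local-average predictor of recoverable volume at untested sites, with sites then drilled in order of predicted volume versus at random. The spatial field and all numbers are synthetic, and the paper's nonparametric model is only approximated by this simple local average.

```python
import numpy as np

rng = np.random.default_rng(9)
xy = rng.uniform(0, 100, (400, 2))                      # candidate site locations
field = 50 + 30 * np.sin(xy[:, 0] / 15) * np.cos(xy[:, 1] / 20)
volume = np.maximum(field + rng.normal(0, 8, 400), 0)   # synthetic 'true' volumes

tested = rng.choice(400, 80, replace=False)             # already-drilled sites

def predict(i, k=8):
    """Local average of the k nearest tested sites (a simple nonparametric predictor)."""
    d = np.linalg.norm(xy[tested] - xy[i], axis=1)
    return volume[tested][np.argsort(d)[:k]].mean()

untested = np.setdiff1d(np.arange(400), tested)
preds = np.array([predict(i) for i in untested])

n_drill = 50
best = untested[np.argsort(preds)[::-1][:n_drill]]      # strategic ordering
random_pick = rng.choice(untested, n_drill, replace=False)
gain = volume[best].sum() / volume[random_pick].sum() - 1
print(f"strategic ordering recovers {100 * gain:.0f}% more volume than random")
```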

  17. A simple physical model for deep moonquake occurrence times

    USGS Publications Warehouse

    Weber, R.C.; Bills, B.G.; Johnson, C.L.

    2010-01-01

    The physical process that results in moonquakes is not yet fully understood. The periodic occurrence times of events from individual clusters are clearly related to tidal stress, but also exhibit departures from the temporal regularity this relationship would seem to imply. Even simplified models that capture some of the relevant physics require a large number of variables. However, a single, easily accessible variable - the time interval I(n) between events - can be used to reveal behavior not readily observed using typical periodicity analyses (e.g., Fourier analyses). The delay-coordinate (DC) map, a particularly revealing way to display data from a time series, is a map of successive intervals: I(n+1) plotted vs. I(n). We use a DC approach to characterize the dynamics of moonquake occurrence. Moonquake-like DC maps can be reproduced by combining sequences of synthetic events that occur with variable probability at tidal periods. Though this model gives a good description of what happens, it has little physical content, thus providing only little insight into why moonquakes occur. We investigate a more mechanistic model. In this study, we present a series of simple models of deep moonquake occurrence, considering both tidal stress and stress drop during events. We first examine the behavior of inter-event times in a delay-coordinate context, and then examine the output, in that context, of a sequence of simple models of tidal forcing and stress relief. We find, as might be expected, that the stress relieved by moonquakes influences their occurrence times. Our models may also provide an explanation for the opposite-polarity events observed at some clusters. © 2010.
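
    The DC construction itself is a one-liner once the event times are in hand; the sketch below pairs successive inter-event intervals as (I(n), I(n+1)). The event times are synthetic stand-ins for a moonquake cluster catalog.

```python
import numpy as np

# synthetic event times (days); a real analysis would use a cluster's catalog
times = np.sort(np.random.default_rng(10).uniform(0, 2000, 60))

intervals = np.diff(times)                                   # I(n)
dc_pairs = np.column_stack([intervals[:-1], intervals[1:]])  # (I(n), I(n+1))

print(dc_pairs[:5])   # scatter-plot these pairs to obtain the DC map
```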

  18. Assessing the Impact of Retreat Mechanisms in a Simple Antarctic Ice Sheet Model Using Bayesian Calibration.

    PubMed

    Ruckert, Kelsey L; Shaffer, Gary; Pollard, David; Guan, Yawen; Wong, Tony E; Forest, Chris E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing climate forcings is an important driver of sea-level changes. Anthropogenic climate change may drive a sizeable AIS tipping point response with subsequent increases in coastal flooding risks. Many studies analyzing flood risks use simple models to project the future responses of AIS and its sea-level contributions. These analyses have provided important new insights, but they are often silent on the effects of potentially important processes such as Marine Ice Sheet Instability (MISI) or Marine Ice Cliff Instability (MICI). These approximations can be well justified and result in more parsimonious and transparent model structures. This raises the question of how this approximation impacts hindcasts and projections. Here, we calibrate a previously published and relatively simple AIS model, which neglects the effects of MICI and regional characteristics, using a combination of observational constraints and a Bayesian inversion method. Specifically, we approximate the effects of missing MICI by comparing our results to those from expert assessments with more realistic models and quantify the bias during the last interglacial when MICI may have been triggered. Our results suggest that the model can approximate the process of MISI and reproduce the projected median melt from some previous expert assessments in the year 2100. Yet, our mean hindcast is roughly 3/4 of the observed data during the last interglacial period and our mean projection is roughly 1/6 and 1/10 of the mean from a model accounting for MICI in the year 2100. These results suggest that missing MICI and/or regional characteristics can lead to a low-bias during warming period AIS melting and hence a potential low-bias in projected sea levels and flood risks.

  19. A game-theoretical approach to multimedia social networks security.

    PubMed

    Liu, Enqiang; Liu, Zengliang; Shao, Fei; Zhang, Zhiyong

    2014-01-01

    The contents access and sharing in multimedia social networks (MSNs) mainly rely on access control models and mechanisms. Simple adoption of security policies from the traditional access control model cannot effectively establish a trust relationship among parties. This paper proposes a novel two-party trust architecture (TPTA) applied to a generic MSN scenario. According to the architecture, security policies are adopted through game-theoretic analyses and decisions. Based on formalized utilities of security policies and security rules, the choice of security policies in content access is described as a game between the content provider and the content requester. By gaming the combined utility of the security policies and their influence on each party's benefits, a Nash equilibrium is achieved, that is, an optimal and stable combination of security policies, establishing and enhancing trust among the stakeholders.
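
    As a toy illustration of the equilibrium concept used here (not the paper's actual utility functions), the snippet below enumerates pure-strategy Nash equilibria of a hypothetical 2x2 provider-requester game by checking mutual best responses.

```python
import numpy as np

# Hypothetical payoff bimatrix: provider rows (strict / lenient policy),
# requester columns (comply / abuse). Entries are each party's utility.
provider = np.array([[3, 1],
                     [4, 0]])
requester = np.array([[2, 1],
                      [3, -2]])

# A pure-strategy Nash equilibrium is a cell where each player's payoff is a
# best response to the other's choice.
for i in range(2):
    for j in range(2):
        if provider[i, j] == provider[:, j].max() and \
           requester[i, j] == requester[i, :].max():
            print("pure Nash equilibrium at (row, col) =", (i, j))
```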

  20. The Early Life Of A Gamma-ray Burst

    NASA Astrophysics Data System (ADS)

    O'Brien, P. T.; Willingale, D.

    2006-09-01

    We present results for 100 gamma-ray bursts observed promptly by the Swift satellite. Combining the early gamma-ray and X-ray data from the BAT and XRT, we show that although individual GRBs can display complex light curves, including a variety of decay phases and flares, their early emission can be described by a relatively simple combination of central engine activity and the interaction of a relativistic jet with the surrounding environment. We also discuss the later fading, which in the optical/IR has traditionally been explained as a jet-break. The Swift data reveal many bursts have a relatively early break in their X-ray light curves contradicting the standard jet break model derived from optical data. We discuss the implications of this for GRB jet models and for using GRBs as standard candles.

  2. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    DOE PAGES

    Aab, A.; Abreu, P.; Aglietta, M.; ...

    2017-04-20

    In this paper, we present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5×10^18 eV, i.e. the region of the all-particle spectrum above the so-called 'ankle' feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  4. Tree-Structured Digital Organisms Model

    NASA Astrophysics Data System (ADS)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

    Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. In our model, a life process is a combination of various functions, much as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and that it can simulate evolutionary computation through the mutual interaction of functions. We verified by simulation that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.

  5. Failure models for textile composites

    NASA Technical Reports Server (NTRS)

    Cox, Brian

    1995-01-01

    The goals of this investigation were to: (1) identify mechanisms of failure and determine how the architecture of reinforcing fibers in 3D woven composites controlled stiffness, strength, strain to failure, work of fracture, notch sensitivity, and fatigue life; and (2) to model composite stiffness, strength, and fatigue life. A total of 11 different angle and orthogonal interlock woven composites were examined. Composite properties depended on the weave architecture, the tow size, and the spatial distributions and strength of geometrical flaws. Simple models were developed for elastic properties, strength, and fatigue life. A more complicated stochastic model, called the 'Binary Model,' was developed for damage tolerance and ultimate failure. These 3D woven composites possessed an extraordinary combination of strength, damage tolerance, and notch insensitivity.

  6. Mashups over the Deep Web

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; Simon, Kai; Lausen, Georg

    Combining information from different Web sources often results in a tedious and repetitive process; e.g., even simple information requests might require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to such chained queries is data-centric mashups, which allow the data flow to be modeled visually as a graph, where the nodes represent the data sources and the edges the data flow.

  7. Method and system for automated on-chip material and structural certification of MEMS devices

    DOEpatents

    Sinclair, Michael B.; DeBoer, Maarten P.; Smith, Norman F.; Jensen, Brian D.; Miller, Samuel L.

    2003-05-20

    A new approach toward MEMS quality control and materials characterization is provided by a combined test structure measurement and mechanical response modeling approach. Simple test structures are cofabricated with the MEMS devices being produced. These test structures are designed to isolate certain types of physical response, so that measurement of their behavior under applied stress can be easily interpreted as quality control and material properties information.

  8. Size-Frequency Distributions of Rocks on Mars and Earth Analog Sites: Implications for Future Landed Missions

    NASA Technical Reports Server (NTRS)

    Golombeck, M.; Rapp, D.

    1996-01-01

    The size-frequency distributions of rocks at the Viking landing sites, and at a variety of rocky locations on Earth that formed through a number of geologic processes, all have the general shape of simple exponential curves. These curves have been combined with remote sensing data and models of rock abundance to predict the frequency of boulders potentially hazardous to future Mars landers and rovers.
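
    A minimal sketch of how such an exponential size-frequency model supports hazard estimates; the curve parameters are illustrative stand-ins, not the fitted Viking values.

```python
import numpy as np

def cumulative_rock_area(D, k, q):
    """Cumulative fraction of surface covered by rocks with diameter >= D (m),
    modeled as a simple exponential curve F(D) = k * exp(-q * D)."""
    return k * np.exp(-q * D)

# Illustrative parameters for a site with ~10% total rock coverage.
k, q = 0.10, 1.8

# Fraction of area covered by potentially hazardous rocks (>= 0.5 m diameter).
print(f"{cumulative_rock_area(0.5, k, q):.3f}")
```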

  9. The Kinetics of Dissolution Revisited

    NASA Astrophysics Data System (ADS)

    Antonel, Paula S.; Hoijemberg, Pablo A.; Maiante, Leandro M.; Lagorio, M. Gabriela

    2003-09-01

    An experiment analyzing the kinetics of dissolution of a solid with cylindrical geometry in water is presented. The dissolution process is followed by measuring the solid mass and its size parameters (thickness and diameter) as a function of time. It is verified that the dissolution rate follows the Nernst model. Data treatment is compared with the dissolution of a spherical solid previously described. Kinetics, diffusion concepts, and polynomial fitting of experimental data are combined in this simple experiment.
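
    As a sketch of a Nernst-type rate law applied to a cylindrical solid, the loop below integrates dm/dt = -k·A(t)·Cs numerically, assuming dissolution into a large bath (bulk concentration near zero) and uniform recession of all surfaces; the constants are illustrative.

```python
import numpy as np

rho = 1590.0          # solid density (kg/m^3), illustrative
Cs = 2000.0           # saturation concentration (kg/m^3), illustrative
k = 2.0e-6            # mass-transfer coefficient (m/s), illustrative
r, h = 0.01, 0.005    # initial cylinder radius and height (m)
dt = 1.0              # time step (s)

m0 = rho * np.pi * r**2 * h
t, mass = 0.0, m0
while mass > 0.01 * m0:
    area = 2 * np.pi * r**2 + 2 * np.pi * r * h   # two faces + lateral surface
    dm = -k * area * Cs * dt                      # Nernst rate law, C_bulk ~ 0
    dr = dm / (rho * area)                        # uniform surface recession
    r, h = max(r + dr, 0.0), max(h + 2 * dr, 0.0)
    mass = rho * np.pi * r**2 * h
    t += dt

print(f"time to dissolve 99% of the solid: {t / 60:.1f} min")
```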

  10. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  11. Slow crack growth in glass in combined mode I and mode II loading

    NASA Technical Reports Server (NTRS)

    Shetty, D. K.; Rosenfield, A. R.

    1991-01-01

    Slow crack growth in soda-lime glass under combined mode I and mode II loading was investigated in precracked disk specimens in which pure mode I, pure mode II, and various combinations of mode I and mode II were achieved by loading in diametral compression at selected angles with respect to symmetric radial cracks. It is shown that slow crack growth under these conditions can be described by a simple exponential relationship with elastic strain energy release rate as the effective crack-driving force parameter. It is possible to interpret this equation in terms of theoretical models that treat subcritical crack growth as a thermally activated bond-rupture process with an activation energy dependent on the environment, and the elastic energy release rate as the crack-driving force parameter.

  12. Simple atmospheric perturbation models for sonic-boom-signature distortion studies

    NASA Technical Reports Server (NTRS)

    Ehernberger, L. J.; Wurtele, Morton G.; Sharman, Robert D.

    1994-01-01

    Sonic-boom propagation from flight level to ground is influenced by wind and speed-of-sound variations resulting from temperature changes in both the mean atmospheric structure and small-scale perturbations. Meteorological behavior generally produces complex combinations of atmospheric perturbations in the form of turbulence, wind shears, up- and down-drafts and various wave behaviors. Differences between the speed of sound at the ground and at flight level will influence the threshold flight Mach number for which the sonic boom first reaches the ground as well as the width of the resulting sonic-boom carpet. Mean atmospheric temperature and wind structure as a function of altitude vary with location and time of year. These average properties of the atmosphere are well-documented and have been used in many sonic-boom propagation assessments. In contrast, smaller scale atmospheric perturbations are also known to modulate the shape and amplitude of sonic-boom signatures reaching the ground, but specific perturbation models have not been established for evaluating their effects on sonic-boom propagation. The purpose of this paper is to present simple examples of atmospheric vertical temperature gradients, wind shears, and wave motions that can guide preliminary assessments of nonturbulent atmospheric perturbation effects on sonic-boom propagation to the ground. The use of simple discrete atmospheric perturbation structures can facilitate the interpretation of the resulting sonic-boom propagation anomalies as well as intercomparisons among varied flight conditions and propagation models.
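
    A minimal example of the kind of simple discrete perturbation structure described, computing an effective sound-speed profile from a linear temperature gradient plus a linear wind shear; the gas constants are standard, the profile slopes illustrative.

```python
import numpy as np

def effective_sound_speed(z_m, t0_k=288.15, lapse=-0.0065, shear=0.004):
    """Effective sound speed along the propagation direction for a simple
    linear temperature profile and linear wind shear (illustrative values):
    c_eff(z) = sqrt(gamma * R * T(z)) + u(z)."""
    gamma, R = 1.4, 287.05             # dry air
    T = t0_k + lapse * z_m             # linear temperature profile (K)
    u = shear * z_m                    # linear wind profile (m/s)
    return np.sqrt(gamma * R * T) + u

for z in (0.0, 5000.0, 15000.0):
    print(f"z = {z:7.0f} m: c_eff = {effective_sound_speed(z):6.1f} m/s")
```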

  13. Joint effect of unlinked genotypes: application to type 2 diabetes in the EPIC-Potsdam case-cohort study.

    PubMed

    Knüppel, Sven; Meidtner, Karina; Arregui, Maria; Holzhütter, Hermann-Georg; Boeing, Heiner

    2015-07-01

    Analyzing multiple single nucleotide polymorphisms (SNPs) is a promising approach to finding genetic effects beyond single-locus associations. We proposed the use of multilocus stepwise regression (MSR) to screen for allele combinations as a method to model joint effects, and compared the results with the often used genetic risk score (GRS), conventional stepwise selection, and the shrinkage method LASSO. In contrast to MSR, the GRS, conventional stepwise selection, and LASSO model each genotype by the risk allele doses. We reanalyzed 20 unlinked SNPs related to type 2 diabetes (T2D) in the EPIC-Potsdam case-cohort study (760 cases, 2193 noncases). No SNP-SNP interactions and no nonlinear effects were found. Two SNP combinations selected by MSR (Nagelkerke's R² = 0.050 and 0.048) included eight SNPs with a mean allele combination frequency of 2%. GRS and stepwise selection selected nearly the same SNP combinations consisting of 12 and 13 SNPs (Nagelkerke's R² ranged from 0.020 to 0.029). LASSO showed similar results. The MSR method showed the best model fit measured by Nagelkerke's R², suggesting that further improvement may render this method a useful tool in genetic research. However, our comparison suggests that the GRS is a simple way to model genetic effects, since it does not consider linkage, SNP-SNP interactions, or non-linear effects. © 2015 John Wiley & Sons Ltd/University College London.
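
    A minimal sketch of the genetic risk score as described, i.e. risk-allele doses summed across unlinked SNPs, optionally weighted; the genotype codings and weights below are illustrative.

```python
import numpy as np

# Rows: individuals; columns: SNPs; entries: risk-allele dose (0, 1 or 2).
genotypes = np.array([
    [0, 1, 2, 1],
    [1, 1, 0, 2],
    [2, 0, 1, 1],
])

# Unweighted GRS: simple sum of risk-allele doses.
grs = genotypes.sum(axis=1)

# Weighted GRS: doses weighted by per-SNP log odds ratios (illustrative).
log_or = np.array([0.10, 0.05, 0.20, 0.08])
weighted_grs = genotypes @ log_or

print(grs, weighted_grs)
```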

  14. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    NASA Astrophysics Data System (ADS)

    Nienhuis, Jaap H.; Ashton, Andrew D.; Kettner, Albert J.; Giosan, Liviu

    2017-09-01

    The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  15. The Use of Partial Least Square Regression and Spectral Data in UV-Visible Region for Quantification of Adulteration in Indonesian Palm Civet Coffee

    PubMed Central

    Yulia, Meinilwita

    2017-01-01

    Asian palm civet coffee or kopi luwak (Indonesian words for coffee and palm civet) is well known as the world's priciest and rarest coffee. To protect the authenticity of luwak coffee and protect consumers from luwak coffee adulteration, it is very important to develop a robust and simple method for determining the adulteration of luwak coffee. In this research, the use of UV-Visible spectra combined with PLSR was evaluated to establish rapid and simple methods for quantification of adulteration in a luwak-arabica coffee blend. Several preprocessing methods were tested, and the results show that most of the preprocessed spectra were effective in improving the quality of calibration models, with the best PLS calibration model selected for Savitzky-Golay smoothed spectra, which had the lowest RMSECV (0.039) and highest RPDcal value (4.64). Using this PLS model, a prediction for quantification of luwak content was calculated and resulted in satisfactory prediction performance with both high RPDp and RER values. PMID:28913348
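
    A sketch of the preprocessing-plus-PLS pipeline described above, using scipy's Savitzky-Golay filter and scikit-learn's PLSRegression; the spectra, window settings and component count are placeholders rather than the authors' values.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Placeholder data: 60 blends, 301 UV-Vis wavelengths, luwak fraction 0..1.
X = rng.random((60, 301))
y = rng.random(60)

# Savitzky-Golay smoothing of each spectrum (window and order illustrative).
X_smooth = savgol_filter(X, window_length=11, polyorder=2, axis=1)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X_smooth, y, cv=10).ravel()

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
rpd = np.std(y) / rmsecv   # residual predictive deviation
print(f"RMSECV = {rmsecv:.3f}, RPD = {rpd:.2f}")
```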

  16. Supernova shock breakout through a wind

    NASA Astrophysics Data System (ADS)

    Balberg, Shmuel; Loeb, Abraham

    2011-06-01

    The breakout of a supernova shock wave through the progenitor star's outer envelope is expected to appear as an X-ray flash. However, if the supernova explodes inside an optically thick wind, the breakout flash is delayed. We present a simple model for estimating the conditions at shock breakout in a wind based on the general observable quantities in the X-ray flash light curve: the total energy E_X and the diffusion time after the peak, t_diff. We base the derivation on the self-similar solution for the forward-reverse shock structure expected for an ejecta plowing through a pre-existing wind at large distances from the progenitor's surface. We find simple quantitative relations for the shock radius and velocity at breakout. By relating the ejecta density profile to the pre-explosion structure of the progenitor, the model can also be extended to constrain the combination of explosion energy and ejecta mass. For the observed case of XRO 080109/SN 2008D, our model provides reasonable constraints on the breakout radius, explosion energy and ejecta mass, and predicts a high shock velocity which naturally accounts for the observed non-thermal spectrum.

  17. Programs as Polypeptides.

    PubMed

    Williams, Lance R

    2016-01-01

    Object-oriented combinator chemistry (OOCC) is an artificial chemistry with composition devices borrowed from object-oriented and functional programming languages. Actors in OOCC are embedded in space and subject to diffusion; since they are neither created nor destroyed, their mass is conserved. Actors use programs constructed from combinators to asynchronously update their own states and the states of other actors in their neighborhoods. The fact that programs and combinators are themselves reified as actors makes it possible to build programs that build programs from combinators of a few primitive types using asynchronous spatial processes that resemble chemistry as much as computation. To demonstrate this, OOCC is used to define a parallel, asynchronous, spatially distributed self-replicating system modeled in part on the living cell. Since interactions among its parts result in the construction of more of these same parts, the system is strongly constructive. The system's high normalized complexity is contrasted with that of a simple composome.

  18. The Effect of Combining Analogy-Based Simulation and Laboratory Activities on Turkish Elementary School Students' Understanding of Simple Electric Circuits

    ERIC Educational Resources Information Center

    Unlu, Zeynep Koyunlu; Dokme, Ibilge

    2011-01-01

    The purpose of this study was to investigate whether the combination of both analogy-based simulation and laboratory activities as a teaching tool was more effective than utilizing them separately in teaching the concepts of simple electricity. The quasi-experimental design that involved 66 seventh grade students from urban Turkish elementary…

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007

    Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or inclusion of photon energy information into data processing. There is a variety of publicly available tools for estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data, the combined dataset required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra in microfocus x-ray sources has been developed and validated experimentally. Similarly to other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.
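
    TASMIP-style models express the photon fluence in each energy bin as a polynomial in the tube potential; the sketch below evaluates such polynomials, with made-up coefficients standing in for the published tables.

```python
import numpy as np

# Hypothetical TASMIP-style coefficient table: one row per 1-keV energy bin,
# columns are polynomial coefficients a0..a3 in the tube potential (kVp).
coeffs = np.array([
    [0.0, 1.2e2, -0.8, 3.0e-3],   # 20 keV bin (illustrative)
    [0.0, 2.5e2, -1.1, 4.0e-3],   # 21 keV bin (illustrative)
    [0.0, 3.1e2, -1.4, 5.0e-3],   # 22 keV bin (illustrative)
])

def fluence_spectrum(kvp, coeffs):
    """Photon fluence per bin: sum_j a_j * kvp**j, clipped at zero."""
    powers = kvp ** np.arange(coeffs.shape[1])
    return np.clip(coeffs @ powers, 0.0, None)

print(fluence_spectrum(50.0, coeffs))
```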

  20. A feedback model of figure-ground assignment.

    PubMed

    Domijan, Drazen; Setić, Mia

    2008-05-30

    A computational model is proposed in order to explain how bottom-up and top-down signals are combined into a unified perception of figure and background. The model is based on the interaction between the ventral and the dorsal stream. The dorsal stream computes saliency based on boundary signals provided by the simple and the complex cortical cells. Output from the dorsal stream is projected to the surface network which serves as a blackboard on which the surface representation is formed. The surface network is a recurrent network which segregates different surfaces by assigning different firing rates to them. The figure is labeled by the maximal firing rate. Computer simulations showed that the model correctly assigns figural status to the surface with a smaller size, a greater contrast, convexity, surroundedness, horizontal-vertical orientation and a higher spatial frequency content. The simple gradient of activity in the dorsal stream enables the simulation of the new principles of the lower region and the top-bottom polarity. The model also explains how the exogenous attention and the endogenous attention may reverse the figural assignment. Due to the local excitation in the surface network, neural activity at the cued region will spread over the whole surface representation. Therefore, the model implements the object-based attentional selection.

  1. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress

    NASA Astrophysics Data System (ADS)

    Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-01

    A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression and shear stress are determined by means of atomistic simulations with the Climbing Image-Nudge Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that, linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ˜3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
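
    A sketch of the Linear Combination of Stress States idea: decompose the applied stress onto simple basis stresses whose effects on the barrier were precomputed, then sum the contributions. In the paper the stored effects are functions of stress; the sketch assumes linear sensitivities for brevity, and all values are illustrative.

```python
# Precomputed barrier changes (eV per GPa of unit simple stress), standing in
# for the atomistic NEB results stored as functions of stress: uniaxial
# sigma_xx, sigma_yy, sigma_zz and shear tau_xy (illustrative values).
sensitivity = {"sxx": -0.012, "syy": 0.004, "szz": 0.004, "txy": -0.020}

def barrier_change(stress_gpa):
    """Linear Combination of Stress States: sum the precomputed simple-stress
    effects weighted by the components of the applied stress (GPa)."""
    return sum(sensitivity[k] * stress_gpa.get(k, 0.0) for k in sensitivity)

# A heterogeneous stress with both uniaxial and shear components.
applied = {"sxx": 1.5, "txy": 0.8}
print(f"dE = {barrier_change(applied):+.4f} eV")
```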

  3. OvAge: a new methodology to quantify ovarian reserve combining clinical, biochemical and 3D-ultrasonographic parameters.

    PubMed

    Venturella, Roberta; Lico, Daniela; Sarica, Alessia; Falbo, Maria Pia; Gulletta, Elio; Morelli, Michele; Zupi, Errico; Cevenini, Gabriele; Cannataro, Mario; Zullo, Fulvio

    2015-04-08

    In the last decade, both endocrine and ultrasound data have been tested to verify their usefulness for assessing ovarian reserve, but the ideal marker does not yet exist. The purpose of this study was to find, if any, an advanced statistical model able to identify a simple, easy-to-understand and intuitive modality for defining ovarian age by combining clinical, biochemical and 3D-ultrasonographic data. This is a population-based observational study. From January 2012 to March 2014, we enrolled 652 healthy fertile women, 29 patients with clinically suspected premature ovarian insufficiency (POI) and 29 patients with Polycystic Ovary Syndrome (PCOS) at the Unit of Obstetrics & Gynecology of Magna Graecia University of Catanzaro (Italy). In all women we measured Anti-Müllerian Hormone (AMH), Follicle Stimulating Hormone (FSH), Estradiol (E2), 3D Antral Follicle Count (AFC), ovarian volume, Vascular Index (VI) and Flow Index (FI) between days 1 and 4 of the menstrual cycle. We applied Generalized Linear Models (GzLM) to produce an equation combining these data to provide ready-to-use information about women's ovarian reserve, here called OvAge. To introduce this new variable, expression of ovarian reserve, we assumed that in healthy fertile women ovarian age is identical to chronological age. GzLM applied to the healthy fertile controls dataset produced the following equation: OvAge = 48.05 - 3.14*AMH + 0.07*FSH - 0.77*AFC - 0.11*FI + 0.25*VI + 0.1*AMH*AFC + 0.02*FSH*AFC. This model showed a high statistical significance for each marker included in the equation. We applied the final equation to the POI and PCOS datasets to test its ability to detect significant deviations from normality, and we obtained a mean predicted ovarian age significantly different from the mean chronological age in both groups. OvAge is one of the first reliable attempts to create a new method able to identify a simple, easy-to-understand and intuitive modality for defining ovarian reserve by combining clinical, biochemical and 3D-ultrasonographic data. Although the design data demonstrate the high statistical accuracy of the model, we plan a clinical validation of model reliability in predicting reproductive prognosis and distance to menopause.
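
    The published equation translates directly into code; a sketch, with purely illustrative input values (units as conventionally reported for these markers):

```python
def ovage(amh, fsh, afc, fi, vi):
    """OvAge equation from the study (AMH, FSH, AFC, FI and VI measured on
    days 1-4 of the cycle; units as conventionally reported)."""
    return (48.05 - 3.14 * amh + 0.07 * fsh - 0.77 * afc
            - 0.11 * fi + 0.25 * vi
            + 0.10 * amh * afc + 0.02 * fsh * afc)

# Illustrative input values only.
print(f"OvAge = {ovage(amh=2.1, fsh=6.5, afc=14, fi=28.0, vi=4.5):.1f} years")
```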

  4. Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions

    USGS Publications Warehouse

    Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.

    2009-01-01

    This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However, a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models.
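
    A sketch of the two best-performing combination schemes named above, the trimmed mean (central six members) and the calibration-weighted mean; the member predictions and skill scores are illustrative.

```python
import numpy as np

# predictions: rows = 10 single-model ensembles, columns = days (illustrative).
predictions = np.array([
    [3.1, 4.0, 2.2], [2.9, 4.4, 2.0], [3.5, 3.8, 2.6], [2.7, 4.1, 1.9],
    [3.0, 4.2, 2.1], [3.3, 3.9, 2.4], [2.8, 4.6, 1.8], [3.6, 3.7, 2.7],
    [3.2, 4.3, 2.3], [2.6, 4.5, 1.7],
])

# Trimmed mean: average the central six predictions each day.
sorted_p = np.sort(predictions, axis=0)
trimmed = sorted_p[2:-2].mean(axis=0)

# Weighted mean: weights from calibration skill (illustrative scores).
skill = np.array([0.82, 0.75, 0.68, 0.71, 0.80, 0.77, 0.60, 0.65, 0.79, 0.55])
weights = skill / skill.sum()
weighted = weights @ predictions

print(trimmed, weighted)
```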

  5. Effectiveness of external fixator combined with T-plate internal fixation for the treatment of comminuted distal radius fractures.

    PubMed

    Han, L R; Jin, C X; Yan, J; Han, S Z; He, X B; Yang, X F

    2015-03-31

    This study compared the efficacy of an external fixator combined with palmar T-plate internal fixation with that of simple plate internal fixation for the treatment of comminuted distal radius fractures. A total of 61 patients classified as type C according to the AO/ASIF classification underwent surgery for comminuted distal radius fractures. There were 54 and 7 cases of closed and open fractures, respectively. Moreover, 19 patients received an external fixator combined with T-plate internal fixation, and 42 received simple plate internal fixation. All patients were treated successfully during 12-month postoperative follow-up. The follow-up results show that palmar flexion and dorsiflexion of the wrist, radial height, and palmar angle were significantly better in those treated with the external fixator combined with a T-plate than in those treated with the simple plate only (P < 0.05); however, there were no significant differences in radial-ulnar deviation, wrist range of motion, or wrist function score between groups (P > 0.05). Hence, the effectiveness of the external fixator combined with T-plate internal fixation for the treatment of comminuted distal radius fractures was satisfactory. Patients sufficiently recovered wrist, forearm, and hand function. In conclusion, compared to the simple T-plate, the external fixator combined with T-plate internal fixation can reduce the possibility of postoperative re-shifting of broken bones and keep the distraction of fractures to maintain radial height and prevent radial shortening.

  6. Reanalysis, compatibility and correlation in analysis of modified antenna structures

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1989-01-01

    A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.

  7. More memory under evolutionary learning may lead to chaos

    NASA Astrophysics Data System (ADS)

    Diks, Cees; Hommes, Cars; Zeppini, Paolo

    2013-02-01

    We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.

  8. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
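
    A sketch of the Latin hypercube sampling plus reweighting scheme: draw parameter combinations, run the model, and weight each combination by how well it reproduces the observed prevalence. The stand-in prevalence model and the Gaussian weighting kernel are assumptions.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of two uncertain parameters (e.g. transmission rate,
# shedding rate), scaled to illustrative plausible ranges.
sampler = qmc.LatinHypercube(d=2, seed=2)
params = qmc.scale(sampler.random(n=500), [0.01, 0.1], [0.5, 2.0])

def toy_prevalence(beta, shed):
    # Stand-in for the stochastic within-herd simulation.
    return beta * shed / (1.0 + beta * shed)

observed_prevalence, sigma = 0.25, 0.05

# Weight each parameter combination by its fit to the prevalence data.
sim = np.array([toy_prevalence(b, s) for b, s in params])
w = np.exp(-0.5 * ((sim - observed_prevalence) / sigma) ** 2)
w /= w.sum()

# Reweighted estimates implicitly carry parameter uncertainty/variability.
print("weighted mean parameters:", w @ params)
```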

  9. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for Earth remote sensors, while vibration of the sensor platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology in image-motion prediction, and focuses on algorithm optimization in imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm combining a Back Propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
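
    The abstract does not detail how the BP network and SVM are combined; the sketch below trains both on lagged samples of a synthetic motion series and simply averages their predictions, purely as an illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Synthetic image-motion series; predict the next displacement from 4 lags.
t = np.arange(600)
motion = np.sin(0.12 * t) + 0.1 * rng.normal(size=t.size)
X = np.stack([motion[i:i + 4] for i in range(len(motion) - 4)])
y = motion[4:]

bp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
svm = SVR(kernel="rbf", C=10.0)
bp.fit(X[:500], y[:500])
svm.fit(X[:500], y[:500])

# One simple combination: average the two predictors' outputs.
pred = 0.5 * (bp.predict(X[500:]) + svm.predict(X[500:]))
rmse = np.sqrt(np.mean((pred - y[500:]) ** 2))
print(f"normalized RMSE: {rmse / np.ptp(y[500:]):.3f}")
```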

  10. Lunar crater volumes - Interpretation by models of impact cratering and upper crustal structure

    NASA Technical Reports Server (NTRS)

    Croft, S. K.

    1978-01-01

    Lunar crater volumes can be divided by size into two general classes with distinctly different functional dependence on diameter. Craters smaller than approximately 12 km in diameter are morphologically simple and increase in volume as the cube of the diameter, while craters larger than about 20 km are complex and increase in volume at a significantly lower rate, implying shallowing. Ejecta and interior volumes are not identical, and their ratio, Schroeter's Ratio (SR), increases from about 0.5 for simple craters to about 1.5 for complex craters. The excess ejecta volume causing the increase can be accounted for by a discontinuity in lunar crust porosity at 1.5-2 km depth. The diameter range of significant increase in SR corresponds with the diameter range of transition from simple to complex crater morphology. This observation, combined with theoretical rebound calculations, indicates control of the transition diameter by the porosity structure of the upper crust.

  11. Fully Resolved Simulations of 3D Printing

    NASA Astrophysics Data System (ADS)

    Tryggvason, Gretar; Xia, Huanxiong; Lu, Jiacai

    2017-11-01

    Numerical simulations of Fused Deposition Modeling (FDM) (or Fused Filament Fabrication) where a filament of hot, viscous polymer is deposited to ``print'' a three-dimensional object, layer by layer, are presented. A finite volume/front tracking method is used to follow the injection, cooling, solidification and shrinking of the filament. The injection of the hot melt is modeled using a volume source, combined with a nozzle, modeled as an immersed boundary, that follows a prescribed trajectory. The viscosity of the melt depends on the temperature and the shear rate and the polymer becomes immobile as its viscosity increases. As the polymer solidifies, the stress is found by assuming a hyperelastic constitutive equation. The method is described and its accuracy and convergence properties are tested by grid refinement studies for a simple setup involving two short filaments, one on top of the other. The effect of the various injection parameters, such as nozzle velocity and injection velocity are briefly examined and the applicability of the approach to simulate the construction of simple multilayer objects is shown. The role of fully resolved simulations for additive manufacturing and their use for novel processes and as the ``ground truth'' for reduced order models is discussed.

  12. Molecular-dynamics simulation of mutual diffusion in nonideal liquid mixtures

    NASA Astrophysics Data System (ADS)

    Rowley, R. L.; Stoker, J. M.; Giles, N. F.

    1991-05-01

    The mutual-diffusion coefficients, D12, of n-hexane, n-heptane, and n-octane in chloroform were modeled using equilibrium molecular-dynamics (MD) simulations of simple Lennard-Jones (LJ) fluids. Pure-component LJ parameters were obtained by comparison of simulations to experimental self-diffusion coefficients. While values of “effective” LJ parameters are not expected to simulate accurately diverse thermophysical properties over a wide range of conditions, it was recently shown that effective parameters obtained from pure self-diffusion coefficients can accurately model mutual diffusion in ideal, liquid mixtures. In this work, similar simulations are used to model diffusion in nonideal mixtures. The same combining rules used in the previous study for the cross-interaction parameters were found to be adequate to represent the composition dependence of D12. The effect of alkane chain length on D12 is also correctly predicted by the simulations. A commonly used assumption in empirical correlations of D12, that its kinetic portion is a simple, compositional average of the intradiffusion coefficients, is inconsistent with the simulation results. In fact, the value of the kinetic portion of D12 was often outside the range of values bracketed by the two intradiffusion coefficients for the nonideal system modeled here.
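
    The abstract does not state which combining rules were used for the cross-interaction parameters; the common Lorentz-Berthelot rules are sketched below as one plausible choice.

```python
def lorentz_berthelot(sigma1, eps1, sigma2, eps2):
    """Cross-interaction LJ parameters: arithmetic mean of the diameters
    (Lorentz), geometric mean of the well depths (Berthelot)."""
    sigma12 = 0.5 * (sigma1 + sigma2)
    eps12 = (eps1 * eps2) ** 0.5
    return sigma12, eps12

# Illustrative effective parameters (sigma in nm, epsilon in kJ/mol).
print(lorentz_berthelot(0.59, 3.2, 0.43, 2.1))
```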

  13. Using energy budgets to combine ecology and toxicology in a mammalian sentinel species

    NASA Astrophysics Data System (ADS)

    Desforges, Jean-Pierre W.; Sonne, Christian; Dietz, Rune

    2017-04-01

    Process-driven modelling approaches can resolve many of the shortcomings of traditional descriptive and non-mechanistic toxicology. We developed a simple dynamic energy budget (DEB) model for the mink (Mustela vison), a sentinel species in mammalian toxicology, which coupled animal physiology, ecology and toxicology, in order to mechanistically investigate the accumulation and adverse effects of lifelong dietary exposure to persistent environmental toxicants, most notably polychlorinated biphenyls (PCBs). Our novel mammalian DEB model accurately predicted, based on energy allocations to the interconnected metabolic processes of growth, development, maintenance and reproduction, lifelong patterns in mink growth, reproductive performance and dietary accumulation of PCBs as reported in the literature. Our model results were consistent with empirical data from captive and free-ranging studies in mink and other wildlife and suggest that PCB exposure can have significant population-level impacts resulting from targeted effects on fetal toxicity, kit mortality and growth and development. Our approach provides a simple and cross-species framework to explore the mechanistic interactions of physiological processes and ecotoxicology, thus allowing for a deeper understanding and interpretation of stressor-induced adverse effects at all levels of biological organization.

  14. Modeling Surface Climate in US Cities Using Simple Biosphere Model Sib2

    NASA Technical Reports Server (NTRS)

    Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert; Imhoff, Marc

    2015-01-01

    We combine Landsat- and the Moderate Resolution Imaging Spectroradiometer (MODIS)-based products in the Simple Biosphere model (SiB2) to assess the effects of urbanized land on the continental US (CONUS) surface climate. Using National Land Cover Database (NLCD) Impervious Surface Area (ISA), we define more than 300 urban settlements and their surrounding suburban and rural areas over the CONUS. The SiB2-modeled Gross Primary Production (GPP) over the CONUS of 7.10 PgC (1 PgC = 10^15 grams of carbon) is comparable to the MODIS improved GPP of 6.29 PgC. At the state level, SiB2 GPP is highly correlated with MODIS GPP, with a correlation coefficient of 0.94. An increasing horizontal GPP gradient is shown from the urban out to the rural area, with, on average, rural areas fixing 30% more GPP than urban areas. Cities built in forested biomes have a stronger UHI magnitude than those built in short vegetation with low biomass. Mediterranean-climate cities have a stronger UHI in the wet season than in the dry season. Our results also show that for urban areas built within forests, 39% of the precipitation is discharged as surface runoff during summer versus 23% in rural areas.

  15. Down and Out at Pacaya Volcano: A Glimpse of Magma Storage and Diking as Interpreted From GPS Geodesy

    NASA Astrophysics Data System (ADS)

    Lechner, H. N.; Waite, G. P.; Wauthier, D. C.; Escobar-Wolf, R. P.; Lopez-Hetland, B.

    2017-12-01

    Geodetic data from an eight-station GPS network at Pacaya volcano, Guatemala, allow us to produce a simple analytical model of the deformation sources associated with the 2010 eruption and the 2013-2014 eruptive period. Deformation signals for both eruptive periods indicate downward vertical and outward horizontal motion at several stations surrounding the volcano. The objective of this research was to better understand the magmatic plumbing system and the sources of this deformation. Because this down-and-out displacement is difficult to explain with a single source, we chose a model that includes a combination of a dike and a spherical source. Our modelling suggests that the deformation is dominated by the inflation of a shallow dike seated high within the volcanic edifice and the deflation of a deeper, spherical source below the SW flank of the volcano. The source parameters for the dike are in good agreement with the observed orientation of recent vent emplacements on the edifice as well as with the horizontal displacements, while the parameters for the deeper spherical source accommodate the downward vertical motion. This study presents GPS observations at Pacaya dating back to 2009 and provides a glimpse of simple models of possible deformation sources.
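
    The spherical source can be illustrated with the classic Mogi point-source solution (not named in the abstract, but the standard analytic model for such a source), here for a Poisson ratio of 0.25; the depth and volume change below are illustrative.

```python
import numpy as np

def mogi_displacement(r, d, dV, nu=0.25):
    """Surface displacements (radial, vertical) of a Mogi point source at
    depth d (m) with volume change dV (m^3), Poisson ratio nu."""
    R3 = (r**2 + d**2) ** 1.5
    ur = (1.0 - nu) * dV / np.pi * r / R3
    uz = (1.0 - nu) * dV / np.pi * d / R3
    return ur, uz

# Illustrative deflating source: 4 km deep, losing 2e6 m^3; station at 3 km.
ur, uz = mogi_displacement(r=3000.0, d=4000.0, dV=-2.0e6)
print(f"radial: {ur * 1000:.1f} mm, vertical: {uz * 1000:.1f} mm")
```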

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Damao; Wang, Zhien; Heymsfield, Andrew J.

    Measurement of ice number concentration in clouds is important but still challenging. Stratiform mixed-phase clouds (SMCs) provide a simple scenario for retrieving ice number concentration from remote sensing measurements. The simple ice generation and growth pattern in SMCs offers opportunities to use cloud radar reflectivity (Ze) measurements and other cloud properties to infer ice number concentration quantitatively. To understand the strong temperature dependency of ice habit and growth rate quantitatively, we develop a 1-D ice growth model to calculate the ice diffusional growth along its falling trajectory in SMCs. The radar reflectivity and fall velocity profiles of ice crystals calculated from the 1-D ice growth model are evaluated with the Atmospheric Radiation Measurements (ARM) Climate Research Facility (ACRF) ground-based high vertical resolution radar measurements. Combining Ze measurements and 1-D ice growth model simulations, we develop a method to retrieve the ice number concentrations in SMCs at given cloud top temperature (CTT) and liquid water path (LWP). The retrieved ice concentrations in SMCs are evaluated with in situ measurements and with a three-dimensional cloud-resolving model simulation with a bin microphysical scheme. These comparisons show that the retrieved ice number concentrations are within an uncertainty of a factor of 2, statistically.

  17. Delay induced stability switch, multitype bistability and chaos in an intraguild predation model.

    PubMed

    Shu, Hongying; Hu, Xi; Wang, Lin; Watmough, James

    2015-12-01

    In many predator-prey models, delay has a destabilizing effect and induces oscillations; while in many competition models, delay does not induce oscillations. By analyzing a rather simple delayed intraguild predation model, which combines both the predator-prey relation and competition, we show that delay in intraguild predation models promotes very complex dynamics. The delay can induce stability switches exhibiting a destabilizing role as well as a stabilizing role. It is shown that three types of bistability are possible: one stable equilibrium coexists with another stable equilibrium (node-node bistability); one stable equilibrium coexists with a stable periodic solution (node-cycle bistability); one stable periodic solution coexists with another stable periodic solution (cycle-cycle bistability). Numerical simulations suggest that delay can also induce chaos in intraguild predation models.
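
    As a generic illustration of delay-induced instability (not the authors' intraguild predation model), the sketch below integrates Hutchinson's delayed logistic equation with a simple Euler scheme; the equilibrium loses stability once r·tau exceeds pi/2.

```python
import numpy as np

def delayed_logistic(r, tau, tmax=200.0, dt=0.01, n0=0.5):
    """Euler integration of Hutchinson's equation dN/dt = r*N(t)*(1 - N(t-tau))
    with carrying capacity K = 1 and constant history N = n0."""
    lag = int(round(tau / dt))
    n = np.full(int(tmax / dt) + lag, n0)
    for i in range(lag, len(n) - 1):
        n[i + 1] = n[i] + dt * r * n[i] * (1.0 - n[i - lag])
    return n[lag:]

# r*tau = 1.0 < pi/2: damped to equilibrium; r*tau = 2.0 > pi/2: limit cycle.
for tau in (1.0, 2.0):
    tail = delayed_logistic(r=1.0, tau=tau)[-2000:]
    print(f"tau = {tau}: late-time oscillation amplitude {np.ptp(tail):.3f}")
```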

  18. The use of simple inflow- and storage-based heuristics equations to represent reservoir behavior in California for investigating human impacts on the water cycle

    NASA Astrophysics Data System (ADS)

    Solander, K.; David, C. H.; Reager, J. T.; Famiglietti, J. S.

    2013-12-01

    The ability to reasonably replicate reservoir behavior in terms of storage and outflow is important for studying the potential human impacts on the terrestrial water cycle. Developing a simple method for this purpose could facilitate subsequent integration in a land surface or global climate model. This study attempts to simulate monthly reservoir outflow and storage using a simple, temporally varying set of heuristic equations with input consisting of in situ records of reservoir inflow and storage. Equations of increasing complexity relative to the number of parameters involved were tested. Only two parameters were employed in the final equations used to predict outflow and storage, in an attempt to best mimic seasonal reservoir behavior while still preserving model parsimony. California reservoirs were selected for model development due to the high level of data availability and intensity of water resource management in this region relative to other areas. Calibration was achieved using observations from eight major reservoirs representing approximately 41% of the 107 largest reservoirs in the state. Parameter optimization was accomplished using the minimum RMSE between observed and modeled storage and outflow as the main objective function. Initial results give a multi-reservoir average correlation coefficient between observed and modeled storage of 0.78, and of 0.75 for outflow. These results, combined with the simplicity of the equations being used, show promise for integration into a land surface or a global climate model. This would be invaluable for evaluations of reservoir management impacts on the flow regime and associated ecosystems as well as on the climate at both regional and global scales.
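
    The final two-parameter equations are not given in the abstract; the sketch below shows one hypothetical inflow- and storage-based rule of that flavor, with mass balance and a capacity spill closing the monthly update.

```python
def step_reservoir(storage, inflow, capacity, a=0.1, b=0.5):
    """One month of a hypothetical two-parameter heuristic: release a fraction
    of storage plus a fraction of inflow, spilling anything above capacity
    (all quantities in volume per month)."""
    outflow = a * storage + b * inflow
    storage = storage + inflow - outflow
    if storage > capacity:            # forced spill
        outflow += storage - capacity
        storage = capacity
    return max(storage, 0.0), outflow

s = 800.0
for q_in in [120.0, 300.0, 60.0, 20.0]:
    s, q_out = step_reservoir(s, q_in, capacity=1000.0)
    print(f"storage = {s:7.1f}, outflow = {q_out:6.1f}")
```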

  19. Predicting ecosystem shifts requires new approaches that integrate the effects of climate change across entire systems

    PubMed Central

    Russell, Bayden D.; Harley, Christopher D. G.; Wernberg, Thomas; Mieszkowska, Nova; Widdicombe, Stephen; Hall-Spencer, Jason M.; Connell, Sean D.

    2012-01-01

    Most studies that forecast the ecological consequences of climate change target a single species and a single life stage. Depending on climatic impacts on other life stages and on interacting species, however, the results from simple experiments may not translate into accurate predictions of future ecological change. Research needs to move beyond simple experimental studies and environmental envelope projections for single species towards identifying where ecosystem change is likely to occur and the drivers for this change. For this to happen, we advocate research directions that (i) identify the critical species within the target ecosystem, and the life stage(s) most susceptible to changing conditions and (ii) the key interactions between these species and components of their broader ecosystem. A combined approach using macroecology, experimentally derived data and modelling that incorporates energy budgets in life cycle models may identify critical abiotic conditions that disproportionately alter important ecological processes under forecasted climates. PMID:21900317

  20. Verifying the botanical authenticity of commercial tannins through sugars and simple phenols profiles.

    PubMed

    Malacarne, Mario; Nardin, Tiziana; Bertoldi, Daniela; Nicolini, Giorgio; Larcher, Roberto

    2016-09-01

    Commercial tannins from several botanical sources and with different chemical and technological characteristics are used in the food and winemaking industries. Different ways to check their botanical authenticity have been studied in the last few years, through investigation of different analytical parameters. This work proposes a new, effective approach based on the quantification of 6 carbohydrates, 7 polyalcohols, and 55 phenols. 87 tannins from 12 different botanical sources were analysed following a very simple sample preparation procedure. Using Forward Stepwise Discriminant Analysis, 3 statistical models were created based on sugar content, phenol concentrations, and a combination of the two classes of compounds for the 8 most abundant categories (i.e. oak, grape seed, grape skin, gall, chestnut, quebracho, tea and acacia). The last approach provided good results in attributing tannins to the correct botanical origin. Validation, repeated 3 times on subsets of 10% of samples, confirmed the reliability of this model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Clairvoyant fusion: a new methodology for designing robust detection algorithms

    NASA Astrophysics Data System (ADS)

    Schaum, Alan

    2016-10-01

    Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete parameter problems.

  2. A minimally sufficient model for rib proximal-distal patterning based on genetic analysis and agent-based simulations

    PubMed Central

    Mah, In Kyoung

    2017-01-01

    For decades, the mechanism of skeletal patterning along a proximal-distal axis has been an area of intense inquiry. Here, we examine the development of the ribs, simple structures that in most terrestrial vertebrates consist of two skeletal elements—a proximal bone and a distal cartilage portion. While the ribs have been shown to arise from the somites, little is known about how the two segments are specified. During our examination of genetically modified mice, we discovered a series of progressively worsening phenotypes that could not be easily explained. Here, we combine genetic analysis of rib development with agent-based simulations to conclude that proximal-distal patterning and outgrowth could occur based on simple rules. In our model, specification occurs during somite stages due to varying Hedgehog protein levels, while later expansion refines the pattern. This framework is broadly applicable for understanding the mechanisms of skeletal patterning along a proximal-distal axis. PMID:29068314

  3. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.

  4. Further shock tunnel studies of scramjet phenomena

    NASA Technical Reports Server (NTRS)

    Morgan, R. G.; Paull, A.; Morris, N. A.; Stalker, R. J.

    1986-01-01

    Scramjet phenomena were studied using the shock tunnel T3 at the Australian National University. Simple two dimensional models were used with a combination of wall and central injectors. Silane as an additive to hydrogen fuel was studied over a range of temperatures and pressures to evaluate its effect as an ignition aid. The film cooling effect of surface injected hydrogen was measured over a wide range of equivalence ratios. Heat transfer measurements without injection were repeated to confirm previous indications of heating rates lower than simple flat plate predictions for laminar boundary layers in equilibrium flow. The previous results were reproduced and the discrepancies are discussed in terms of the model geometry and departures of the flow from equilibrium. In the thrust producing mode, attempts were made to increase specific impulse with wall injection. Some preliminary tests were also performed on shock induced ignition, to investigate the possibility in flight of injecting fuel upstream of the combustion chamber, where it could mix but not burn.

  5. Coherent beam combination using self-phase locked stimulated Brillouin scattering phase conjugate mirrors with a rotating wedge for high power laser generation.

    PubMed

    Park, Sangwoo; Cha, Seongwoo; Oh, Jungsuk; Lee, Hwihyeong; Ahn, Heekyung; Churn, Kil Sung; Kong, Hong Jin

    2016-04-18

    The self-phase locking of a stimulated Brillouin scattering-phase conjugate mirror (SBS-PCM) allows a simple and scalable coherent beam combination of existing lasers. We propose a simple optical system composed of a rotating wedge and a concave mirror to overcome the power limit of the SBS-PCM. Its phase-locking ability and its usefulness for beam-combination lasers are demonstrated experimentally. A four-beam combination is demonstrated using this SBS-PCM scheme. The relative phases between the beams were measured to be less than λ/24.7.

  6. Development and Integration of an Advanced Stirling Convertor Linear Alternator Model for a Tool Simulating Convertor Performance and Creating Phasor Diagrams

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2013-01-01

    A simple model of the Advanced Stirling Convertor (ASC) linear alternator and an AC bus controller has been developed and combined with a previously developed thermodynamic model of the convertor for a more complete simulation and analysis of the system performance. The model was developed using Sage, a 1-D thermodynamic modeling program that now includes electro-magnetic components. The convertor, consisting of a free-piston Stirling engine combined with a linear alternator, has sufficiently sinusoidal steady-state behavior to allow for phasor analysis of the forces and voltages acting in the system. A MATLAB graphical user interface (GUI) has been developed to interface with the Sage software for simplified use of the ASC model, calculation of forces, and automated creation of phasor diagrams. The GUI allows the user to vary convertor parameters while fixing different input or output parameters and observe the effect on the phasor diagrams or system performance. The new ASC model and GUI help create a better understanding of the relationship between the electrical component voltages and mechanical forces. This allows better insight into the overall convertor dynamics and performance.

  7. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  8. A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy

    NASA Astrophysics Data System (ADS)

    Wada, Ken; Hyodo, Toshio

    2013-06-01

    Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model in the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime to pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with a modification involving the intrinsic Ps annihilation rate) applicable to very small pores. The combined model, i.e., the modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics based extended Tao-Eldrup model, which deals with Ps trapped in, and in thermal equilibrium with, a rectangular pore.
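
    For reference, the small-pore relation the combined model must merge into is the standard Tao-Eldrup form, quoted here from the common literature convention (with the intrinsic-annihilation modification the authors mention added as a constant term; ΔR ≈ 0.166 nm is the usual empirically fitted electron-layer thickness):

        \lambda(R) \;=\; \lambda_{\text{intrinsic}} \;+\; 2\,\text{ns}^{-1}\left[\,1 - \frac{R}{R+\Delta R} + \frac{1}{2\pi}\,\sin\!\left(\frac{2\pi R}{R+\Delta R}\right)\right], \qquad \tau = 1/\lambda .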

  9. Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics

    NASA Technical Reports Server (NTRS)

    Kawa, S. Randolph; Stolarski, Richard S.; Douglass, Anne R.; Newman, Paul A.

    2008-01-01

    Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding of this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluation are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.

  10. Exploiting symmetries in the modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Andersen, Carl M.; Tanner, John A.

    1987-01-01

    A simple and efficient computational strategy for reducing both the size of a tire model and the cost of the analysis of tires in the presence of symmetry-breaking conditions (unsymmetry in the tire material, geometry, or loading) is presented. The strategy is based on approximating the unsymmetric response of the tire with a linear combination of symmetric and antisymmetric global approximation vectors (or modes). Details are presented for the three main elements of the computational strategy, which include: use of special three-field mixed finite-element models, use of operator splitting, and substantial reduction in the number of degrees of freedom. The proposed computational strategy is applied to three quasi-symmetric problems of tires: linear analysis of anisotropic tires through use of semianalytic finite elements; nonlinear analysis of anisotropic tires through use of two-dimensional shell finite elements; and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry (and their combinations) exhibited by the tire response are identified.
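
    The decomposition that drives the strategy is the elementary even/odd splitting of the response about a chosen meridian; in our own schematic notation, with θ the circumferential coordinate,

        u_s(\theta) = \tfrac{1}{2}\left[u(\theta) + u(-\theta)\right], \qquad u_a(\theta) = \tfrac{1}{2}\left[u(\theta) - u(-\theta)\right], \qquad u = u_s + u_a ,

    so an unsymmetric solution can be approximated by a modest number of symmetric and antisymmetric global approximation vectors, each computable on a reduced model.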

  11. Consistency among distance measurements: transparency, BAO scale and accelerated expansion

    NASA Astrophysics Data System (ADS)

    Avgoustidis, Anastasios; Verde, Licia; Jimenez, Raul

    2009-06-01

    We explore consistency among different distance measures, including Supernovae Type Ia data, measurements of the Hubble parameter, and determination of the Baryon acoustic oscillation scale. We present new constraints on the cosmic transparency combining H(z) data together with the latest Supernovae Type Ia data compilation. This combination, in the context of a flat ΛCDM model, improves current constraints by nearly an order of magnitude although the constraints presented here are parametric rather than non-parametric. We re-examine the recently reported tension between the Baryon acoustic oscillation scale and Supernovae data in light of possible deviations from transparency, concluding that the source of the discrepancy may most likely be found among systematic effects of the modelling of the low redshift data or a simple ~2σ statistical fluke, rather than in exotic physics. Finally, we attempt to draw model-independent conclusions about the recent accelerated expansion, determining the acceleration redshift to be z_acc = 0.35 (+0.20/−0.13) (1σ).

  12. Leg pairs as virtual wheels

    NASA Astrophysics Data System (ADS)

    Howe, Russel; Duttweiler, Mark; Khanlian, Luke; Setrakian, Mark

    2005-05-01

    We propose the use of virtual wheels as the starting point of a new vehicle design. Each virtual wheel incorporates a pair of simple legs that, by simulating the rotary motion and ground contact of a traditional wheel, combine many of the benefits of legged and wheeled motion. We describe the use of virtual wheels in the design of a robotic mule, presenting an analysis of the mule's mobility and the results of our efforts to model and build such a device.

  13. Chaotic Dynamics and Application of LCR Oscillators Sharing Common Nonlinearity

    NASA Astrophysics Data System (ADS)

    Jeevarekha, A.; Paul Asir, M.; Philominathan, P.

    2016-06-01

    This paper addresses the problem of sharing common nonlinearity among nonautonomous and autonomous oscillators. By choosing a suitable common nonlinear element with the driving point characteristics capable of bringing out chaotic motion in a combined system, we obtain identical chaotic states. The dynamics of the coupled system is explored through numerical and experimental studies. Employing the concept of common nonlinearity, a simple chaotic communication system is modeled and its performance is verified through Multisim simulation.

  14. ISO deep far-infrared survey in the Lockman Hole

    NASA Astrophysics Data System (ADS)

    Kawara, K.; Sato, Y.; Matsuhara, H.; Taniguchi, Y.; Okuda, H.; Sofue, Y.; Matsumoto, T.; Wakamatsu, K.; Cowie, L. L.; Joseph, R. D.; Sanders, D. B.

    1999-03-01

    Two 44 arcmin x 44 arcmin fields in the Lockman Hole were mapped at 95 and 175 μm using ISOPHOT. A simple program combined with PIA works well to correct for the drift in the detector responsivity. The number density of 175 μm sources is 3 - 10 times higher than expected from the no-evolution model. The source counts at 95 and 175 μm are consistent with the cosmic infrared background.

  15. Geometric, Statistical, and Topological Modeling of Intrinsic Data Manifolds: Application to 3D Shapes

    DTIC Science & Technology

    2009-01-01

    representation to a simple curve in 3D by using the Whitney embedding theorem. In a very ludic way, we propose to combine phases one and two to... elimination principle which takes advantage of the designed parametrization. To further refine discrimination among objects, we introduce a post...

  16. Possible biomechanical origins of the long-range correlations in stride intervals of walking

    NASA Astrophysics Data System (ADS)

    Gates, Deanna H.; Su, Jimmy L.; Dingwell, Jonathan B.

    2007-07-01

    When humans walk, the time duration of each stride varies from one stride to the next. These temporal fluctuations exhibit long-range correlations. It has been suggested that these correlations stem from higher nervous system centers in the brain that control gait cycle timing. Existing proposed models of this phenomenon have focused on neurophysiological mechanisms that might give rise to these long-range correlations, and generally ignored potential alternative mechanical explanations. We hypothesized that a simple mechanical system could also generate similar long-range correlations in stride times. We modified a very simple passive dynamic model of bipedal walking to incorporate forward propulsion through an impulsive force applied to the trailing leg at each push-off. Push-off forces were varied from step to step by incorporating both “sensory” and “motor” noise terms that were regulated by a simple proportional feedback controller. We generated 400 simulations of walking, with different combinations of sensory noise, motor noise, and feedback gain. The stride time data from each simulation were analyzed using detrended fluctuation analysis to compute a scaling exponent, α. This exponent quantified how each stride interval was correlated with previous and subsequent stride intervals over different time scales. For different variations of the noise terms and feedback gain, we obtained short-range correlations (α<0.5), uncorrelated time series (α=0.5), long-range correlations (0.5<α<1.0), or Brownian motion (α>1.0). Our results indicate that a simple biomechanical model of walking can generate long-range correlations and thus perhaps these correlations are not a complex result of higher level neuronal control, as has been previously suggested.
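
    Detrended fluctuation analysis itself is compact enough to sketch. The version below uses linear detrending over non-overlapping windows and estimates the scaling exponent α as the log-log slope, as described above; the window sizes and test series are illustrative choices of ours.

        import numpy as np

        def dfa_alpha(x, window_sizes):
            """Estimate the DFA scaling exponent alpha of a 1-D series x."""
            y = np.cumsum(x - np.mean(x))           # integrated, demeaned profile
            fluctuations = []
            for n in window_sizes:
                f2 = []
                for w in range(len(y) // n):        # non-overlapping windows
                    seg = y[w * n:(w + 1) * n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)
                    f2.append(np.mean((seg - trend) ** 2))
                fluctuations.append(np.sqrt(np.mean(f2)))
            # alpha is the slope of log F(n) versus log n
            alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
            return alpha

        rng = np.random.default_rng(1)
        white = rng.normal(size=4096)               # uncorrelated noise: alpha ~ 0.5
        print(dfa_alpha(white, window_sizes=[16, 32, 64, 128, 256]))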

  17. A Simple Simulation Technique for Nonnormal Data with Prespecified Skewness, Kurtosis, and Covariance Matrix.

    PubMed

    Foldnes, Njål; Olsson, Ulf Henning

    2016-01-01

    We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.
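
    A minimal sketch of the linear-combination idea, under our own simplifying assumptions: standardized, positively skewed generator variables are mixed through a matrix square root so the target covariance holds exactly. Matching prespecified marginal skewness and kurtosis additionally requires solving moment equations for the generators, which is omitted here.

        import numpy as np

        def ig_like_sample(n, sigma, df=4, seed=2):
            """Nonnormal data with covariance sigma from independent generators."""
            k = sigma.shape[0]
            a = np.linalg.cholesky(sigma)           # any A with A A^T = sigma works
            chi2 = np.random.default_rng(seed).chisquare(df, size=(n, k))
            z = (chi2 - df) / np.sqrt(2 * df)       # mean 0, variance 1, skewed
            return z @ a.T                          # rows now have covariance sigma

        sigma = np.array([[1.0, 0.5],
                          [0.5, 2.0]])
        x = ig_like_sample(100000, sigma)
        print(np.cov(x, rowvar=False))              # close to sigma, non-Gaussian margins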

  18. Machine learning-based in-line holographic sensing of unstained malaria-infected red blood cells.

    PubMed

    Go, Taesik; Kim, Jun H; Byeon, Hyeokjun; Lee, Sang J

    2018-04-19

    Accurate and immediate diagnosis of malaria is important for medication of the infectious disease. Conventional methods for diagnosing malaria are time consuming and rely on the skill of experts. Therefore, an automatic and simple diagnostic modality is essential for healthcare in developing countries that lack the expertise of trained microscopists. In the present study, a new automatic sensing method using digital in-line holographic microscopy (DIHM) combined with machine learning algorithms was proposed to sensitively detect unstained malaria-infected red blood cells (iRBCs). To identify the RBC characteristics, 13 descriptors were extracted from segmented holograms of individual RBCs. Among the 13 descriptors, 10 features were highly statistically different between healthy RBCs (hRBCs) and iRBCs. Six machine learning algorithms were applied to effectively combine the dominant features and to greatly improve the diagnostic capacity of the present method. Among the classification models trained by the 6 tested algorithms, the model trained by the support vector machine (SVM) showed the best accuracy in separating hRBCs and iRBCs for training (n = 280, 96.78%) and testing sets (n = 120, 97.50%). This DIHM-based artificial intelligence methodology is simple and does not require blood staining. Thus, it will be beneficial and valuable in the diagnosis of malaria. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
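
    The 13 holographic descriptors are not listed in the abstract, so the snippet below only mirrors the classification step on synthetic stand-in features, using scikit-learn's SVM and the reported 280/120 train/test split; everything else is a placeholder of ours.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        # Synthetic stand-ins for 13 descriptors of hRBCs (label 0) and iRBCs (label 1).
        x = np.vstack([rng.normal(0.0, 1.0, size=(200, 13)),
                       rng.normal(0.8, 1.0, size=(200, 13))])   # shifted group means
        y = np.repeat([0, 1], 200)

        idx = rng.permutation(400)
        train, test = idx[:280], idx[280:]          # 280 training / 120 testing cells
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(x[train], y[train])
        print("test accuracy:", clf.score(x[test], y[test]))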

  19. Modeling choice and reaction time during arbitrary visuomotor learning through the coordination of adaptive working memory and reinforcement learning

    PubMed Central

    Viejo, Guillaume; Khamassi, Mehdi; Brovelli, Andrea; Girard, Benoît

    2015-01-01

    Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies. To better elucidate the computations underlying flexible behavior, we develop a dual-system computational model that can predict both performance (i.e., participants' choices) and modulations in reaction times during learning of a stimulus–response association task. The habitual system is modeled with a simple Q-Learning algorithm (QL). For the goal-directed system, we propose a new Bayesian Working Memory (BWM) model that searches for information in the history of previous trials in order to minimize Shannon entropy. We propose a model for QL and BWM coordination such that the expensive memory manipulation is under the control of, among others, the level of convergence of the habitual learning. We test the ability of QL or BWM alone to explain human behavior, and compare them with the performance of model combinations, to highlight the need for such combinations to explain behavior. Two of the tested combination models are derived from the literature, while the third is our new proposal. In conclusion, all subjects were better explained by model combinations, and the majority of them by our new coordination proposal. PMID:26379518
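
    For concreteness, the habitual half of the model is ordinary Q-learning, whose update is a one-liner; the sketch below applies it to a toy stimulus-response association task with softmax action selection. Learning rate, temperature, and the task itself are placeholders of ours, and the Bayesian Working Memory side is too involved for a short snippet.

        import numpy as np

        rng = np.random.default_rng(4)

        def softmax_choice(q_row, beta=3.0):
            p = np.exp(beta * q_row - np.max(beta * q_row))
            return rng.choice(len(q_row), p=p / p.sum())

        q = np.zeros((3, 5))                       # 3 stimuli, 5 candidate responses
        correct = {0: 2, 1: 0, 2: 4}               # hidden stimulus-response mapping
        for trial in range(600):
            s = trial % 3
            a = softmax_choice(q[s])
            r = 1.0 if a == correct[s] else 0.0
            q[s, a] += 0.1 * (r - q[s, a])         # one-step task: no successor term
        print(q.argmax(axis=1))                    # should recover [2, 0, 4]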

  20. Design and fabrication of a hybrid maglev model employing PML and SML

    NASA Astrophysics Data System (ADS)

    Sun, R. X.; Zheng, J.; Zhan, L. J.; Huang, S. Y.; Li, H. T.; Deng, Z. G.

    2017-10-01

    A hybrid maglev model combining permanent magnet levitation (PML) and superconducting magnetic levitation (SML) was designed and fabricated to explore a heavy-load levitation system advancing in passive stability and simple structure. In this system, the PML was designed to levitate the load, and the SML was introduced to guarantee the stability. In order to realize different working gaps of the two maglev components, linear bearings were applied to connect the PML layer (for load) and the SML layer (for stability) of the hybrid maglev model. Experimental results indicate that the hybrid maglev model possesses excellent advantages of heavy-load ability and passive stability at the same time. This work presents a possible way to realize a heavy-load passive maglev concept.

  1. 15 CFR 921.13 - Management plan and environmental impact statement development.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... simple property interest (e.g., conservation easement), fee simple property acquisition, or a combination... simple options) to establish adequate long-term state control; an estimate of the fair market value of any property interest—which is proposed for acquisition; a schedule estimating the time required to...

  2. Piezoelectricity above the Curie temperature? Combining flexoelectricity and functional grading to enable high-temperature electromechanical coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbarki, R.; Baccam, N.; Dayal, Kaushik

    Most technologically relevant ferroelectrics typically lose piezoelectricity above the Curie temperature. This limits their use to relatively low temperatures. In this Letter, exploiting a combination of flexoelectricity and simple functional grading, we propose a strategy for high-temperature electromechanical coupling in a standard thin film configuration. We use continuum modeling to quantitatively demonstrate the possibility of achieving apparent piezoelectric materials with large and temperature-stable electromechanical coupling across a wide temperature range that extends significantly above the Curie temperature. With barium and strontium titanate as example materials, significant electromechanical coupling that is potentially temperature-stable up to 900 °C is possible.

  3. Robust Rocket Engine Concept

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1995-01-01

    The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.

  4. Methods for developing time-series climate surfaces to drive topographically distributed energy- and water-balance models

    USGS Publications Warehouse

    Susong, D.; Marks, D.; Garen, D.

    1999-01-01

    Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.

  5. Assessing the Effectiveness of Ramp-Up During Sonar Operations Using Exposure Models.

    PubMed

    von Benda-Beckmann, Alexander M; Wensveen, Paul J; Kvadsheim, Petter H; Lam, Frans-Peter A; Miller, Patrick J O; Tyack, Peter L; Ainslie, Michael A

    2016-01-01

    Ramp-up procedures are used to mitigate the impact of sound on marine mammals. Sound exposure models combined with observations of marine mammals responding to sound can be used to assess the effectiveness of ramp-up procedures. We found that ramp-up procedures before full-level sonar operations can reduce the risk of hearing threshold shifts in marine mammals, but their effectiveness depends strongly on the responsiveness of the animals. In this paper, we investigated the effect of sonar parameters (source level, pulse-repetition time, ship speed) on sound exposure by using a simple analytical model and highlight the mechanisms that limit the effectiveness of ramp-up procedures.
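
    A minimal sketch of the analytical bookkeeping involved, under our own assumptions (stationary receiver, spherical spreading, per-pulse sound exposure levels summed on an energy basis); the source levels, speeds, and geometry are illustrative, not the paper's parameters.

        import numpy as np

        def cumulative_sel(per_pulse_sel_db):
            """SEL_cum = 10 log10( sum_i 10^(SEL_i / 10) )."""
            return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(per_pulse_sel_db) / 10.0)))

        # Ship approaching at 5 m/s, one pulse every 20 s, spherical spreading loss.
        speed, interval, n_pulses = 5.0, 20.0, 60
        ranges = 8000.0 - speed * interval * np.arange(n_pulses)     # range per pulse (m)
        full_sl = 210.0
        ramp = np.linspace(180.0, full_sl, 15)                       # 15-pulse ramp-up (dB)
        sl = np.concatenate([ramp, np.full(n_pulses - 15, full_sl)])
        sel = sl - 20.0 * np.log10(ranges)                           # per-pulse SEL at receiver
        print("cumulative SEL with ramp-up:   ", cumulative_sel(sel))
        print("cumulative SEL without ramp-up:", cumulative_sel(full_sl - 20.0 * np.log10(ranges)))

    In this toy setting the comparison isolates the exposure reduction attributable to ramp-up for an animal that does not move away; adding an avoidance-response model is what makes the assessment realistic.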

  6. Modelling the evolution and diversity of cumulative culture

    PubMed Central

    Enquist, Magnus; Ghirlanda, Stefano; Eriksson, Kimmo

    2011-01-01

    Previous work on mathematical models of cultural evolution has mainly focused on the diffusion of simple cultural elements. However, a characteristic feature of human cultural evolution is the seemingly limitless appearance of new and increasingly complex cultural elements. Here, we develop a general modelling framework to study such cumulative processes, in which we assume that the appearance and disappearance of cultural elements are stochastic events that depend on the current state of culture. Five scenarios are explored: evolution of independent cultural elements, stepwise modification of elements, differentiation or combination of elements and systems of cultural elements. As one application of our framework, we study the evolution of cultural diversity (in time as well as between groups). PMID:21199845

  7. Current Status and Challenges of Atmospheric Data Assimilation

    NASA Astrophysics Data System (ADS)

    Atlas, R. M.; Gelaro, R.

    2016-12-01

    The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.

  8. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    NASA Astrophysics Data System (ADS)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  9. Wave motion on the surface of the human tympanic membrane: Holographic measurement and modeling analysis

    PubMed Central

    Cheng, Jeffrey Tao; Hamade, Mohamad; Merchant, Saumil N.; Rosowski, John J.; Harrington, Ellery; Furlong, Cosme

    2013-01-01

    Sound-induced motions of the surface of the tympanic membrane (TM) were measured using stroboscopic holography in cadaveric human temporal bones at frequencies between 0.2 and 18 kHz. The results are consistent with the combination of standing-wave-like modal motions and traveling-wave-like motions on the TM surface. The holographic techniques also quantified sound-induced displacements of the umbo of the malleus, as well as volume velocity of the TM. These measurements were combined with sound-pressure measurements near the TM to compute middle-ear input impedance and power reflectance at the TM. The results are generally consistent with other published data. A phenomenological model that behaved qualitatively like the data was used to quantify the relative magnitude and spatial frequencies of the modal and traveling-wave-like displacement components on the TM surface. This model suggests the modal magnitudes are generally larger than those of the putative traveling waves, and the computed wave speeds are much slower than wave speeds predicted by estimates of middle-ear delay. While the data are inconsistent with simple modal displacements of the TM, an alternate model based on the combination of modal motions in a lossy membrane can also explain these measurements without invoking traveling waves. PMID:23363110

  10. Activities of Antibiotic Combinations against Resistant Strains of Pseudomonas aeruginosa in a Model of Infected THP-1 Monocytes

    PubMed Central

    Buyck, Julien M.

    2014-01-01

    Antibiotic combinations are often used for treating Pseudomonas aeruginosa infections but their efficacy toward intracellular bacteria has not been investigated so far. We have studied combinations of representatives of the main antipseudomonal classes (ciprofloxacin, meropenem, tobramycin, and colistin) against intracellular P. aeruginosa in a model of THP-1 monocytes in comparison with bacteria growing in broth, using the reference strain PAO1 and two clinical isolates (resistant to ciprofloxacin and meropenem, respectively). Interaction between drugs was assessed by checkerboard titration (extracellular model only), by kill curves, and by using the fractional maximal effect (FME) method, which allows studying the effects of combinations when dose-effect relationships are not linear. For drugs used alone, simple sigmoidal functions could be fitted to all concentration-effect relationships (extracellular and intracellular bacteria), with static concentrations close to (ciprofloxacin, colistin, and meropenem) or slightly higher than (tobramycin) the MIC and with maximal efficacy reaching the limit of detection in broth but only a 1 to 1.5 (colistin, meropenem, and tobramycin) to 2 to 3 (ciprofloxacin) log10 CFU decrease intracellularly. Extracellularly, all combinations proved additive by checkerboard titration but synergistic using the FME method and more bactericidal in kill curve assays. Intracellularly, all combinations proved additive only based on both FME and kill curve assays. Thus, although combinations appeared to modestly improve antibiotic activity against intracellular P. aeruginosa, they do not allow eradication of these persistent forms of infections. Combinations including ciprofloxacin were the most active (even against the ciprofloxacin-resistant strain), which is probably related to the fact this drug was the most effective alone intracellularly. PMID:25348528

  11. Asteroid thermal modeling in the presence of reflected sunlight

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2018-03-01

    A new derivation of simple asteroid thermal models is presented, investigating the need to account correctly for Kirchhoff's law of thermal radiation when IR observations contain substantial reflected sunlight. The framework applies to both the NEATM and related thermal models. A new parameterization of these models eliminates the dependence of thermal modeling on visible absolute magnitude H, which is not always available. Monte Carlo simulations are used to assess the potential impact of violating Kirchhoff's law on estimates of physical parameters such as diameter and IR albedo, with an emphasis on NEOWISE results. The NEOWISE papers use ten different models, applied to 12 different combinations of WISE data bands, in 47 different combinations. The most prevalent combinations are simulated, and the accuracy of diameter estimates is found to depend critically on the model and data band combination. In the best case, full thermal modeling of all four bands, the 1σ (68.27%) confidence interval of the errors in an idealized model is −5% to +6%, but this combination accounts for just 1.9% of NEOWISE results. Other combinations, representing 42% of the NEOWISE results, have about twice the CI, at −10% to +12%, before accounting for errors due to irregular shape or other real-world effects that are not simulated. The model and data band combinations found for the majority of NEOWISE results have much larger systematic and random errors. Kirchhoff's law violation by NEOWISE models leads to estimation errors that are strongest for asteroids with W1, W2 band emissivity ε12 in both the lowest (0.605 ≤ ε12 ≤ 0.780) and highest (0.969 ≤ ε12 ≤ 0.988) deciles, corresponding to the highest and lowest deciles of near-IR albedo pIR. Systematic accuracy error between deciles ranges from a low of 5% to as much as 45%, and there are also differences in the random errors. Kirchhoff's law effects also produce large errors in NEOWISE estimates of pIR, particularly for high values. IR observations of asteroids in bands that have substantial reflected sunlight can largely avoid these problems by adopting the Kirchhoff-law-compliant modeling framework presented here, which is conceptually straightforward and comes without computational cost.

  12. A powerful and flexible approach to the analysis of RNA sequence count data.

    PubMed

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A

    2011-10-01

    A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data, and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
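
    The first BBSeq component is a beta-binomial model; as a sketch (not the package's machinery), the core likelihood can be written down and maximized directly for a single gene with scipy.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import betaln, gammaln

        def beta_binom_negll(params, k, n):
            """Negative log-likelihood of counts k out of totals n, Beta-Binomial(a, b)."""
            a, b = np.exp(params)                  # log-parametrization keeps a, b > 0
            ll = (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                  + betaln(k + a, n - k + b) - betaln(a, b))
            return -np.sum(ll)

        k = np.array([3, 7, 1, 12, 5, 9])          # per-sample counts for one gene
        n = np.array([50, 60, 40, 80, 55, 70])     # per-sample totals
        fit = minimize(beta_binom_negll, x0=[0.0, 2.0], args=(k, n), method="Nelder-Mead")
        print(np.exp(fit.x))                       # fitted (a, b); overdispersion = 1/(a+b+1)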

  13. Regionalization of response routine parameters

    NASA Astrophysics Data System (ADS)

    Tøfte, Lena S.; Sultan, Yisak A.

    2013-04-01

    When area-distributed hydrological models are to be calibrated or updated, having fewer calibration parameters is a considerable advantage. Building on the work of Kirchner and others, we have developed a simple non-threshold response model for drainage in natural catchments, to be used in the gridded hydrological model ENKI. The new response model takes only the hydrograph into account; it has one state and two parameters, and is adapted to catchments that are dominated by terrain drainage. The method is based on the assumption that in catchments where precipitation, evaporation and snowmelt are negligible, the discharge is entirely determined by the amount of stored water. The catchment can then be characterized as a simple first-order nonlinear dynamical system, where the governing equations can be found directly from measured streamflow fluctuations. This means that the response in the catchment can be modelled using hydrograph data from which all periods with rain, snowmelt or evaporation are left out, and fitting these series to a two- or three-parameter equation. A large number of discharge series from catchments in different regions of Norway are analyzed, and parameters are found for all the series. By combining the computed parameters and known catchment characteristics, we try to regionalize the parameters. The parameters in the response routine can then easily be found for ungauged catchments as well, from maps or databases.
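
    The Kirchner-style core of the approach can be sketched directly: on rain-, melt- and evaporation-free steps, fit the storage-discharge sensitivity from the hydrograph alone. The power-law form g(Q) = -dQ/dt = a·Q^b below is our own two-parameter stand-in.

        import numpy as np

        def fit_recession(q, dry):
            """Fit -dQ/dt = a * Q**b using only undisturbed, receding time steps."""
            dqdt = np.diff(q)                      # per-step discharge change
            q_mid = 0.5 * (q[:-1] + q[1:])         # midpoint discharge
            mask = dry[:-1] & dry[1:] & (dqdt < 0)
            slope, intercept = np.polyfit(np.log(q_mid[mask]), np.log(-dqdt[mask]), 1)
            return np.exp(intercept), slope        # a, b

        # Synthetic recession generated with a = 0.02, b = 1.5, then lightly perturbed.
        q = np.empty(300)
        q[0] = 10.0
        for t in range(299):
            q[t + 1] = q[t] - 0.02 * q[t] ** 1.5
        q_obs = q * np.exp(np.random.default_rng(5).normal(0.0, 0.005, 300))
        a, b = fit_recession(q_obs, dry=np.ones(300, dtype=bool))
        print(a, b)                                # close to (0.02, 1.5)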

  14. A complex speciation–richness relationship in a simple neutral model

    PubMed Central

    Desjardins-Proulx, Philippe; Gravel, Dominique

    2012-01-01

    Speciation is the “elephant in the room” of community ecology. As the ultimate source of biodiversity, its integration in ecology's theoretical corpus is necessary to understand community assembly. Yet, speciation is often completely ignored or stripped of its spatial dimension. Recent approaches based on network theory have allowed ecologists to effectively model complex landscapes. In this study, we use this framework to model allopatric and parapatric speciation in networks of communities. We focus on the relationship between speciation, richness, and the spatial structure of communities. We find a strong opposition between speciation and local richness, with speciation being more common in isolated communities and local richness being higher in more connected communities. Unlike previous models, we also find a transition to a positive relationship between speciation and local richness when dispersal is low and the number of communities is small. We use several measures of centrality to characterize the effect of network structure on diversity. The degree, the simplest measure of centrality, is the best predictor of local richness and speciation, although it loses some of its predictive power as connectivity grows. Our framework shows how a simple neutral model can be combined with network theory to reveal complex relationships between speciation, richness, and the spatial organization of populations. PMID:22957181

  15. Investigation of blood flow in the external carotid artery and its branches with a new 0D peripheral model.

    PubMed

    Ohhara, Yoshihito; Oshima, Marie; Iwai, Toshinori; Kitajima, Hiroaki; Yajima, Yasuharu; Mitsudo, Kenji; Krdy, Absy; Tohnai, Iwai

    2016-02-04

    Patient-specific modelling in clinical studies requires a realistic simulation to be performed within a reasonable computational time. The aim of this study was to develop simple but realistic outflow boundary conditions for patient-specific blood flow simulation which can be used to clarify the distribution of the anticancer agent in intra-arterial chemotherapy for oral cancer. In this study, the boundary conditions are expressed as a zero-dimensional (0D) resistance model of the peripheral vessel network based on the fractal characteristics of branching arteries combined with knowledge of the circulatory system and the energy minimization principle. This resistance model was applied to four patient-specific blood flow simulations at the region where the common carotid artery bifurcates into the internal and external carotid arteries. Results of these simulations with the proposed boundary conditions were compared with the results of ultrasound measurements for the same patients. The pressure was found to be within the physiological range. The difference in velocity in the superficial temporal artery results in an error of 5.21 ± 0.78 % between the numerical results and the measurement data. The proposed outflow boundary conditions, therefore, constitute a simple resistance-based model and can be used for performing accurate simulations with commercial fluid dynamics software.

  16. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    PubMed

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
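
    The composition is worth writing out. With irreparable lesions ~ Poisson(aD), repairable lesions ~ Poisson(bD), and lethal conversion of each repairable lesion a Bernoulli event of probability cD (our own symbols a, b, c), the superposition and thinning rules for PGFs give

        G_{\text{lethal}}(s) \;=\; e^{aD(s-1)}\, e^{bD\,(cD)(s-1)} \;=\; \exp\!\left[(aD + bcD^{2})(s-1)\right],

    i.e., lethal lesions are Poisson with mean \alpha D + \beta D^{2} (with \alpha = a, \beta = bc), and clonogenic survival is the zero-lesion probability

        S(D) \;=\; \Pr(\text{no lethal lesion}) \;=\; e^{-(\alpha D + \beta D^{2})},

    which is exactly the linear-quadratic survival equation.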

  17. BCS Theory of Hadronic Matter at High Densities

    NASA Astrophysics Data System (ADS)

    Bohr, Henrik; Panda, Prafulla K.; Providência, Constança; da Providência, João

    2012-04-01

    The equilibrium between the so-called 2SC and CFL phases of strange quark matter at high densities is investigated in the framework of a simple schematic model of the NJL type. Equal densities are assumed for quarks u, d and s. The 2SC phase is here described by a color-flavor symmetric state, in which the quark numbers are independent of the color-flavor combination. In the CFL phase the quark numbers depend on the color-flavor combination, that is, the number of quarks associated with the color-flavor combinations ur, dg, sb is different from the number of quarks associated with the color-flavor combinations ug, ub, dr, db, sr, sg. We find that the 2SC phase is stable for a chemical potential μ below μ_c = 0.505 GeV, while the CFL phase is stable above, the equilibrium pressure being P_c = 0.003 GeV⁴. We have used a 3-momentum regularizing cutoff Λ = 0.8 GeV, which is somewhat larger than is usual in NJL type models. This should be adequate if the relevant chemical potential does not exceed 0.6 GeV.

  18. A molecule-centered method for accelerating the calculation of hydrodynamic interactions in Brownian dynamics simulations containing many flexible biomolecules

    PubMed Central

    Elcock, Adrian H.

    2013-01-01

    Inclusion of hydrodynamic interactions (HIs) is essential in simulations of biological macromolecules that treat the solvent implicitly if the macromolecules are to exhibit correct translational and rotational diffusion. The present work describes the development and testing of a simple approach aimed at allowing more rapid computation of HIs in coarse-grained Brownian dynamics simulations of systems that contain large numbers of flexible macromolecules. The method combines a complete treatment of intramolecular HIs with an approximate treatment of the intermolecular HIs which assumes that the molecules are effectively spherical; all of the HIs are calculated at the Rotne-Prager-Yamakawa level of theory. When combined with Fixman’s Chebyshev polynomial method for calculating correlated random displacements, the proposed method provides an approach that is simple to program but sufficiently fast that it makes it computationally viable to include HIs in large-scale simulations. Test calculations performed on very coarse-grained models of the pyruvate dehydrogenase (PDH) E2 complex and on oligomers of ParM (ranging in size from 1 to 20 monomers) indicate that the method reproduces the translational diffusion behavior seen in more complete HI simulations surprisingly well; the method performs less well at capturing rotational diffusion but its discrepancies diminish with increasing size of the simulated assembly. Simulations of residue-level models of two tetrameric protein models demonstrate that the method also works well when more structurally detailed models are used in the simulations. Finally, test simulations of systems containing up to 1024 coarse-grained PDH molecules indicate that the proposed method rapidly becomes more efficient than the conventional BD approach in which correlated random displacements are obtained via a Cholesky decomposition of the complete diffusion tensor. PMID:23914146

  19. B-physics anomalies: a guide to combined explanations

    NASA Astrophysics Data System (ADS)

    Buttazzo, Dario; Greljo, Admir; Isidori, Gino; Marzocca, David

    2017-11-01

    Motivated by additional experimental hints of Lepton Flavour Universality violation in B decays, both in charged- and in neutral-current processes, we analyse the ingredients necessary to provide a combined description of these phenomena. By means of an Effective Field Theory (EFT) approach, based on the hypothesis of New Physics coupled predominantly to the third generation of left-handed quarks and leptons, we show how this is possible. We demonstrate, in particular, how to solve the problems posed by electroweak precision tests and direct searches with a rather natural choice of model parameters, within the context of a U(2)_q × U(2)_ℓ flavour symmetry. We further exemplify the general EFT findings by means of simplified models with explicit mediators in the TeV range: coloured scalar or vector leptoquarks and colourless vectors. Among these, the case of an SU(2)_L-singlet vector leptoquark emerges as a particularly simple and successful framework.

  20. Multiscale simulations of anisotropic particles combining molecular dynamics and Green's function reaction dynamics

    NASA Astrophysics Data System (ADS)

    Vijaykumar, Adithya; Ouldridge, Thomas E.; ten Wolde, Pieter Rein; Bolhuis, Peter G.

    2017-03-01

    The modeling of complex reaction-diffusion processes in, for instance, cellular biochemical networks or self-assembling soft matter can be tremendously sped up by employing a multiscale algorithm which combines the mesoscopic Green's Function Reaction Dynamics (GFRD) method with explicit stochastic Brownian, Langevin, or deterministic molecular dynamics to treat reactants at the microscopic scale [A. Vijaykumar, P. G. Bolhuis, and P. R. ten Wolde, J. Chem. Phys. 143, 214102 (2015)]. Here we extend this multiscale MD-GFRD approach to include the orientational dynamics that is crucial to describe the anisotropic interactions often prevalent in biomolecular systems. We present the novel algorithm focusing on Brownian dynamics only, although the methodology is generic. We illustrate the novel algorithm using a simple patchy particle model. After validation of the algorithm, we discuss its performance. The rotational Brownian dynamics MD-GFRD multiscale method will open up the possibility for large scale simulations of protein signalling networks.

  1. Eruption rate, area, and length relationships for some Hawaiian lava flows

    NASA Technical Reports Server (NTRS)

    Pieri, David C.; Baloga, Stephen M.

    1986-01-01

    The relationships between the morphological parameters of lava flows and the process parameters of lava composition, eruption rate, and eruption temperature were investigated using literature data on Hawaiian lava flows. Two simple models for lava flow heat loss by Stefan-Boltzmann radiation were employed to derive eruption rate versus planimetric area relationships. For the Hawaiian basaltic flows, the eruption rate is highly correlated with the planimetric area. Moreover, this observed correlation is superior to those from other obvious combinations of eruption rate and flow dimensions. The correlations obtained on the basis of the two theoretical models suggest that the surface of the Hawaiian flows radiates at an effective temperature much less than the inner parts of the flowing lava, which is in agreement with field observations. The data also indicate that the eruption rate versus planimetric area correlations can be markedly degraded when data from different vents, volcanoes, and epochs are combined.
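
    A one-line steady-state heat balance shows why a near-linear eruption-rate-area relation is expected from this class of models (our schematic rendering, not the papers' exact formulation): if erupted lava delivers heat ρcΔT per unit volume at volumetric rate Q, and the flow radiates from its planimetric area A at an effective surface temperature T_e, then

        \rho c\,\Delta T\, Q \;=\; \varepsilon \sigma T_{e}^{4}\, A \quad\Longrightarrow\quad A \;=\; \frac{\rho c\,\Delta T}{\varepsilon \sigma T_{e}^{4}}\; Q ,

    so A scales linearly with Q, and fitting the observed correlation requires an effective radiating temperature T_e well below the interior lava temperature, consistent with the field observations cited above.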

  2. The evolution of contralateral control of the body by the brain: is it a protective mechanism?

    PubMed

    Whitehead, Lorne; Banihani, Saleh

    2014-01-01

    Contralateral control, the arrangement whereby most of the human motor and sensory fibres cross the midline in order to provide control for contralateral portions of the body, presents a puzzle from an evolutionary perspective. What caused such a counterintuitive and complex arrangement to become dominant? In this paper we offer a new perspective on this question by showing that in a complex interactive control system there could be a significant net survival advantage with contralateral control, associated with the effect of injuries of intermediate severity. In such cases an advantage could arise from a combination of non-linear system response combined with correlations between injuries on the same side of the head and body. We show that a simple mathematical model of these ideas emulates such an advantage. Based on this model, we conclude that effects of this kind are a plausible driving force for the evolution of contralateral control.

  3. Quantifying alluvial fan sensitivity to climate in Death Valley, California, from field observations and numerical models

    NASA Astrophysics Data System (ADS)

    Brooke, Sam; Whittaker, Alexander; Armitage, John; D'Arcy, Mitch; Watkins, Stephen

    2017-04-01

    A quantitative understanding of landscape sensitivity to climate change remains a key challenge in the Earth Sciences. The stream-flow deposits of coupled catchment-fan systems offer one way to decode past changes in external boundary conditions as they comprise simple, closed systems that can be represented effectively by numerical models. Here we combine the collection and analysis of grain size data on well-dated alluvial fan surfaces in Death Valley, USA, with numerical modelling to address the extent to which sediment routing systems record high-frequency, high-magnitude climate change. We compile a new database of Holocene and Late-Pleistocene grain size trends from 11 alluvial fans in Death Valley, capturing high-resolution grain size data ranging from the Recent to 100 kyr in age. We hypothesise that the observed changes in average surface grain size and fining rate over time are a record of landscape response to glacial-interglacial climatic forcing. With these data we are in a unique position to test the predictions of landscape evolution models and evaluate the extent to which climate change has influenced the volume and calibre of sediment deposited on alluvial fans. To gain insight into our field data and study area, we employ an appropriately scaled catchment-fan model that calculates an eroded volumetric sediment budget to be deposited in a subsiding basin according to mass balance, where grain size trends are predicted by a self-similarity fining model. We use the model to compare predicted trends in alluvial fan stratigraphy as a function of boundary condition change for a range of model parameters and input grain size distributions. Subsequently, we perturb our model with a plausible glacial-interglacial magnitude precipitation change to estimate the requisite sediment flux needed to generate the observed field grain size trends in Death Valley. Modelled fluxes are then compared with independent measurements of sediment supply over time. Our results constitute one of the first attempts to combine the detailed collection of alluvial fan grain size data in time and space with coupled catchment-fan models, affording us the means to evaluate how well field and model data can be reconciled for simple sediment routing systems.

  4. Synaptic Scaling in Combination with Many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity

    PubMed Central

    Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Wörgötter, Florentin

    2011-01-01

    Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks. PMID:22203799
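
    A toy illustration (our construction, not the paper's model) of the stabilizing interplay: a Hebbian-type term alone drives weights to grow, while a multiplicative scaling term pulls the postsynaptic rate back toward a homeostatic target, so the weights settle near a bounded fixed point slightly above the target. All rates and constants are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)
      w = rng.uniform(0.1, 0.5, 50)       # synaptic weights onto one neuron
      target = 1.0                        # homeostatic target firing rate
      eta, kappa, dt = 0.01, 0.1, 0.1     # Hebbian rate, scaling rate, time step

      for _ in range(5000):
          pre = rng.poisson(1.0, 50)             # presynaptic activity
          post = w @ pre / w.size                # crude postsynaptic rate
          w += eta * pre * post * dt             # Hebbian growth (unstable alone)
          w += kappa * (target - post) * w * dt  # multiplicative synaptic scaling
          w = np.clip(w, 0.0, None)

      print(f"mean weight {w.mean():.3f}, rate {post:.3f} (target {target})")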

  5. Circuit models and three-dimensional electromagnetic simulations of a 1-MA linear transformer driver stage

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Miller, C. L.; Welch, D. R.; Clark, R. E.; Madrid, E. A.; Mostrom, C. B.; Stygar, W. A.; Lechien, K. R.; Mazarakis, M. A.; Langston, W. L.; Porter, J. L.; Woodworth, J. R.

    2010-09-01

    A 3D fully electromagnetic (EM) model of the principal pulsed-power components of a high-current linear transformer driver (LTD) has been developed. LTD systems are a relatively new modular and compact pulsed-power technology based on high-energy density capacitors and low-inductance switches located within a linear-induction cavity. We model 1-MA, 100-kV, 100-ns rise-time LTD cavities [A. A. Kim et al., Phys. Rev. ST Accel. Beams 12, 050402 (2009)] which can be used to drive z-pinch and material dynamics experiments. The model simulates the generation and propagation of electromagnetic power from individual capacitors and triggered gas switches to a radially symmetric output line. Multiple cavities, combined to provide voltage addition, drive a water-filled coaxial transmission line. A 3D fully EM model of a single 1-MA 100-kV LTD cavity driving a simple resistive load is presented and compared to electrical measurements. A new model of the current loss through the ferromagnetic cores is developed for use both in circuit representations of an LTD cavity and in the 3D EM simulations. Good agreement between the measured core current, a simple circuit model, and the 3D simulation model is obtained. A 3D EM model of an idealized ten-cavity LTD accelerator is also developed. The model results demonstrate efficient voltage addition when driving a matched impedance load, in good agreement with an idealized circuit model.
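
    The standard circuit idealization of a single capacitor-switch "brick" is a series RLC discharging into the load; a sketch with illustrative values (ours, not the paper's validated circuit model, which also includes the core-loss current):

      import numpy as np

      C, L, R = 8e-6, 2e-7, 0.1      # capacitance [F], inductance [H], load [ohm]
      V, I, dt = 1e5, 0.0, 1e-10     # charge voltage [V], current [A], time step [s]

      peak = 0.0
      for _ in range(40000):         # 4 microseconds, semi-implicit Euler
          I += (V - R * I) / L * dt  # inductor: L dI/dt = V - R I
          V += -I / C * dt           # capacitor: C dV/dt = -I
          peak = max(peak, I)

      print(f"peak current {peak / 1e6:.2f} MA")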

  6. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
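
    The forecast/update cycle described here is the standard Kalman recursion; a minimal scalar sketch (ours; the paper's state holds wave amplitudes and phases, not this toy):

      import numpy as np

      def kalman_step(x, P, z, q, r):
          """Persistence forecast with error growth q, then optimal blend."""
          x_f, P_f = x, P + q            # forecast state and error variance
          K = P_f / (P_f + r)            # Kalman gain from error covariances
          return x_f + K * (z - x_f), (1.0 - K) * P_f

      rng = np.random.default_rng(2)
      truth, x, P = 1.0, 0.0, 1.0
      for _ in range(20):
          z = truth + rng.normal(0.0, 0.3)          # noisy "altimeter" sample
          x, P = kalman_step(x, P, z, q=0.01, r=0.09)
      print(f"estimate {x:.3f}, error variance {P:.4f}")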

  7. Iterative combining rules for the van der Waals potentials of mixed rare gas systems

    NASA Astrophysics Data System (ADS)

    Wei, L. M.; Li, P.; Tang, K. T.

    2017-05-01

    An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of the mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge. The converged results can be substantially different from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one or even no iteration is necessary for the results to converge. In either case, the converged results are accurate descriptions of the interaction potentials of the hetero-nuclear dimers.

  8. Simple model for piezoelectric ceramic/polymer 1-3 composites used in ultrasonic transducer applications.

    PubMed

    Chan, H W; Unsworth, J

    1989-01-01

    A theoretical model is presented for combining parameters of 1-3 ultrasonic composite materials in order to predict ultrasonic characteristics such as velocity, acoustic impedance, electromechanical coupling factor, and piezoelectric coefficients. Hence, the model allows the estimation of resonance frequencies of 1-3 composite transducers. This model has been extended to cover more material parameters, and the predictions are compared with experimental results up to a PZT volume fraction ν of 0.8. The model covers the calculation of the piezoelectric charge constants d33 and d31. Values are found to be in good agreement with experimental results obtained for PZT 7A/Araldite D 1-3 composites. The acoustic velocity, acoustic impedance, and electromechanical coupling factor are predicted and found to be close to the values determined experimentally.
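
    A crude first cut at such parameter combination (our illustration, simpler than the paper's model): density mixes exactly by volume fraction, and a Voigt (parallel) average of the thickness stiffness gives upper-bound estimates of velocity and acoustic impedance. The stiffness values are illustrative.

      import numpy as np

      nu = np.linspace(0.0, 0.8, 5)    # PZT volume fraction
      rho_c, rho_p = 7700.0, 1150.0    # ceramic / polymer density [kg/m^3]
      c_c, c_p = 1.2e11, 8.0e9         # illustrative elastic stiffnesses [Pa]

      rho = nu * rho_c + (1 - nu) * rho_p   # exact volume-fraction mixing
      c_eff = nu * c_c + (1 - nu) * c_p     # Voigt (parallel) bound
      v = np.sqrt(c_eff / rho)              # longitudinal velocity [m/s]
      Z = rho * v / 1e6                     # acoustic impedance [MRayl]

      for f, vv, zz in zip(nu, v, Z):
          print(f"nu={f:.1f}: v={vv:7.0f} m/s, Z={zz:5.1f} MRayl")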

  9. Self organising hypothesis networks: a new approach for representing and structuring SAR knowledge

    PubMed Central

    2014-01-01

    Background Combining different sources of knowledge to build improved structure activity relationship models is not easy owing to the variety of knowledge formats and the absence of a common framework to interoperate between learning techniques. Most of the current approaches address this problem by using consensus models that operate at the prediction level. We explore the possibility of directly combining these sources at the knowledge level, with the aim of harvesting potentially increased synergy at an earlier stage. Our goal is to design a general methodology to facilitate knowledge discovery and produce accurate and interpretable models. Results To combine models at the knowledge level, we propose to decouple the learning phase from the knowledge application phase using a pivot representation (lingua franca) based on the concept of hypothesis. A hypothesis is a simple and interpretable knowledge unit. Regardless of its origin, knowledge is broken down into a collection of hypotheses. These hypotheses are subsequently organised into a hierarchical network. This unification makes it possible to combine different sources of knowledge into a common formalised framework. The approach allows us to create a synergistic system between different forms of knowledge, and new algorithms can be applied to leverage this unified model. This first article focuses on the general principle of the Self Organising Hypothesis Network (SOHN) approach in the context of binary classification problems along with an illustrative application to the prediction of mutagenicity. Conclusion It is possible to represent knowledge in the unified form of a hypothesis network, allowing interpretable predictions with performance comparable to mainstream machine learning techniques. This new approach offers the potential to combine knowledge from different sources into a common framework in which high level reasoning and meta-learning can be applied; these latter perspectives will be explored in future work. PMID:24959206

  10. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Fagundo, Arturo

    1994-01-01

    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
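
    A sketch of the recombination idea in its simplest form (ours; the thesis passes richer aggregate state information than a single reliability number): each subsystem's reliability comes from the matrix exponential of its Markov generator, and independent subsystems in series multiply.

      import numpy as np
      from scipy.linalg import expm

      def subsystem_reliability(Q, up_states, t):
          """P(in an 'up' state at time t | start in state 0)."""
          p0 = np.zeros(Q.shape[0])
          p0[0] = 1.0
          return (p0 @ expm(Q * t))[up_states].sum()

      lam1, lam2 = 1e-4, 5e-5        # failure rates, no repair
      Q1 = np.array([[-lam1, lam1], [0.0, 0.0]])
      Q2 = np.array([[-lam2, lam2], [0.0, 0.0]])

      t = 1000.0
      R = subsystem_reliability(Q1, [0], t) * subsystem_reliability(Q2, [0], t)
      print(f"series system reliability: {R:.5f}")   # equals exp(-(lam1+lam2)t)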

  11. Transient excitation and mechanical admittance test techniques for prediction of payload vibration environments

    NASA Technical Reports Server (NTRS)

    Kana, D. D.; Vargas, L. M.

    1977-01-01

    Transient excitation forces were applied separately to simple beam-and-mass launch vehicle and payload models to develop complex admittance functions for the interface and other appropriate points on the structures. These measured admittances were then analytically combined by a matrix representation to obtain a description of the coupled system dynamic characteristics. Response of the payload model to excitation of the launch vehicle model was predicted and compared with results measured on the combined models. These results are also compared with results of earlier work in which a similar procedure was employed except that steady-state sinusoidal excitation techniques were included. It is found that the method employing transient tests produces results that are better overall than the steady-state methods. Furthermore, the transient method requires far less time to implement, and provides far better resolution in the data. However, the data acquisition and handling problem is more complex for this method. It is concluded that the transient test and admittance matrix prediction method can be a valuable tool for development of payload vibration tests.

  12. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
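
    The "remarkably simple Monte Carlo realization" can be conveyed by a toy branching process (ours; the multiplicities, probabilities and time scales are arbitrary, and gamma rays are omitted): each neutron either leaks after an exponential lifetime or induces a fission that emits a random number of new neutrons.

      import numpy as np

      rng = np.random.default_rng(4)
      p_fission, tau = 0.35, 1.0                  # fission probability, mean lifetime
      nu_vals = np.array([0, 1, 2, 3, 4])         # fission neutron multiplicity
      nu_prob = np.array([0.03, 0.16, 0.33, 0.30, 0.18])

      def run_chain():
          stack, leaks = [0.0], []
          while stack:
              t = stack.pop() + rng.exponential(tau)
              if rng.random() < p_fission:        # induced fission
                  stack.extend([t] * int(rng.choice(nu_vals, p=nu_prob)))
              else:                               # leak: a time-tagged count
                  leaks.append(t)
          return len(leaks)

      counts = np.array([run_chain() for _ in range(20000)])
      print(f"mean {counts.mean():.2f}, variance {counts.var():.2f}")
      # variance > mean: the over-dispersion carried by correlated chains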

  13. Modeling Soft Tissue Damage and Failure Using a Combined Particle/Continuum Approach.

    PubMed

    Rausch, M K; Karniadakis, G E; Humphrey, J D

    2017-02-01

    Biological soft tissues experience damage and failure as a result of injury, disease, or simply age; examples include torn ligaments and arterial dissections. Given the complexity of tissue geometry and material behavior, computational models are often essential for studying both damage and failure. Yet, because of the need to account for discontinuous phenomena such as crazing, tearing, and rupturing, continuum methods are limited. Therefore, we model soft tissue damage and failure using a particle/continuum approach. Specifically, we combine continuum damage theory with Smoothed Particle Hydrodynamics (SPH). Because SPH is a meshless particle method, and particle connectivity is determined solely through a neighbor list, discontinuities can be readily modeled by modifying this list. We show, for the first time, that an anisotropic hyperelastic constitutive model commonly employed for modeling soft tissue can be conveniently implemented within a SPH framework and that SPH results show excellent agreement with analytical solutions for uniaxial and biaxial extension as well as finite element solutions for clamped uniaxial extension in 2D and 3D. We further develop a simple algorithm that automatically detects damaged particles and disconnects the spatial domain along rupture lines in 2D and rupture surfaces in 3D. We demonstrate the utility of this approach by simulating damage and failure under clamped uniaxial extension and in a peeling experiment of virtual soft tissue samples. In conclusion, SPH in combination with continuum damage theory may provide an accurate and efficient framework for modeling damage and failure in soft tissues.
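
    A conceptual sketch (ours) of the disconnection idea: since SPH connectivity lives entirely in the neighbor list, failure can be modeled by pruning pairs whose damage exceeds a threshold. No SPH forces are computed here; the threshold and geometry are arbitrary.

      import numpy as np

      rng = np.random.default_rng(5)
      n, h = 100, 0.15                       # particles, smoothing length
      pos = rng.uniform(0.0, 1.0, (n, 2))

      # Neighbor list from the kernel support radius.
      pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
               if np.linalg.norm(pos[i] - pos[j]) < 2 * h]

      damage = np.zeros(n)                   # continuum damage variable in [0, 1]
      mask = pos[:, 0] > 0.5                 # pretend one half is heavily damaged
      damage[mask] = rng.uniform(0.8, 1.0, mask.sum())

      D_CRIT = 0.9
      pairs = [(i, j) for (i, j) in pairs    # disconnect failed particles
               if damage[i] < D_CRIT and damage[j] < D_CRIT]
      print(f"{len(pairs)} bonds remain after pruning")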

  14. Photocatalytic mineralization of commercial herbicides in a pilot-scale solar CPC reactor: photoreactor modeling and reaction kinetics constants independent of radiation field.

    PubMed

    Colina-Márquez, Jose; Machuca-Martínez, Fiderman; Li Puma, Gianluca

    2009-12-01

    The six-flux absorption-scattering model (SFM) of the radiation field in the photoreactor, combined with reaction kinetics and fluid-dynamic models, has proved suitable for describing the degradation of water pollutants in heterogeneous photocatalytic reactors, combining simplicity and accuracy. In this study, the above approach was extended to model the photocatalytic mineralization of a commercial herbicide mixture (2,4-D, diuron, and ametryne used in Colombian sugar cane crops) in a solar, pilot-scale, compound parabolic collector (CPC) photoreactor using a slurry suspension of TiO(2). The ray-tracing technique was used jointly with the SFM to determine the direction of both the direct and diffuse solar photon fluxes and the spatial profile of the local volumetric rate of photon absorption (LVRPA) in the CPC reactor. Herbicide mineralization kinetics with explicit photon absorption effects were utilized to remove the dependence of the observed rate constants on the reactor geometry and radiation field in the photoreactor. The results showed that the overall model fitted the experimental data of herbicide mineralization in the solar CPC reactor satisfactorily for both cloudy and sunny days. Using the above approach, kinetic parameters independent of the radiation field in the reactor can be estimated directly from the results of experiments carried out in a solar CPC reactor. The SFM combined with reaction kinetics and fluid-dynamic models proved to be a simple but reliable model for solar photocatalytic applications.

  15. Modeling Soft Tissue Damage and Failure Using a Combined Particle/Continuum Approach

    PubMed Central

    Rausch, M. K.; Karniadakis, G. E.; Humphrey, J. D.

    2016-01-01

    Biological soft tissues experience damage and failure as a result of injury, disease, or simply age; examples include torn ligaments and arterial dissections. Given the complexity of tissue geometry and material behavior, computational models are often essential for studying both damage and failure. Yet, because of the need to account for discontinuous phenomena such as crazing, tearing, and rupturing, continuum methods are limited. Therefore, we model soft tissue damage and failure using a particle/continuum approach. Specifically, we combine continuum damage theory with Smoothed Particle Hydrodynamics (SPH). Because SPH is a meshless particle method, and particle connectivity is determined solely through a neighbor list, discontinuities can be readily modeled by modifying this list. We show, for the first time, that an anisotropic hyperelastic constitutive model commonly employed for modeling soft tissue can be conveniently implemented within a SPH framework and that SPH results show excellent agreement with analytical solutions for uniaxial and biaxial extension as well as finite element solutions for clamped uniaxial extension in 2D and 3D. We further develop a simple algorithm that automatically detects damaged particles and disconnects the spatial domain along rupture lines in 2D and rupture surfaces in 3D. We demonstrate the utility of this approach by simulating damage and failure under clamped uniaxial extension and in a peeling experiment of virtual soft tissue samples. In conclusion, SPH in combination with continuum damage theory may provide an accurate and efficient framework for modeling damage and failure in soft tissues. PMID:27538848

  16. Combining symmetry collective states with coupled-cluster theory: Lessons from the Agassi model Hamiltonian

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew R.; Dukelsky, Jorge; Scuseria, Gustavo E.

    2017-06-01

    The failures of single-reference coupled-cluster theory for strongly correlated many-body systems are flagged at the mean-field level by the spontaneous breaking of one or more physical symmetries of the Hamiltonian. Restoring the symmetry of the mean-field determinant by projection reveals that coupled-cluster theory fails because it factorizes high-order excitation amplitudes incorrectly. However, symmetry-projected mean-field wave functions do not account sufficiently for dynamic (or weak) correlation. Here we pursue a merger of symmetry projection and coupled-cluster theory, following previous work along these lines that utilized the simple Lipkin model system as a test bed [J. Chem. Phys. 146, 054110 (2017), 10.1063/1.4974989]. We generalize the concept of a symmetry-projected mean-field wave function to the concept of a symmetry projected state, in which the factorization of high-order excitation amplitudes in terms of low-order ones is guided by symmetry projection and is not exponential, and combine them with coupled-cluster theory in order to model the ground state of the Agassi Hamiltonian. This model has two separate channels of correlation and two separate physical symmetries which are broken under strong correlation. We show how the combination of symmetry collective states and coupled-cluster theory is effective in obtaining correlation energies and order parameters of the Agassi model throughout its phase diagram.

  17. Combining Statistics and Physics to Improve Climate Downscaling

    NASA Astrophysics Data System (ADS)

    Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.

    2017-12-01

    Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.

  18. Everyday Engineering: What Makes a Bic Click?

    ERIC Educational Resources Information Center

    Moyer, Richard; Everett, Susan

    2009-01-01

    The ballpoint pen is an ideal example of simple engineering that we use every day. But is it really so simple? The ballpoint pen is a remarkable combination of technology and science. Its operation uses several scientific principles related to chemistry and physics, such as properties of liquids and simple machines. They represent significant…

  19. A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets

    ERIC Educational Resources Information Center

    Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar

    2014-01-01

    A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…

  20. A Simple Model Framework to Explore the Deeply Uncertain, Local Sea Level Response to Climate Change. A Case Study on New Orleans, Louisiana

    NASA Astrophysics Data System (ADS)

    Bakker, Alexander; Louchard, Domitille; Keller, Klaus

    2016-04-01

    Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.

  1. Multiporosity flow in fractured low-permeability rocks: Extension to shale hydrocarbon reservoirs

    DOE PAGES

    Kuhlman, Kristopher L.; Malama, Bwalya; Heath, Jason E.

    2015-02-05

    We present a multiporosity extension of classical double- and triple-porosity fractured rock flow models for slightly compressible fluids. The multiporosity model is an adaptation of the multirate solute transport model of Haggerty and Gorelick (1995) to viscous flow in fractured rock reservoirs. It is a generalization of both pseudo steady state and transient interporosity flow double-porosity models. The model includes a fracture continuum and an overlapping distribution of multiple rock matrix continua, whose fracture-matrix exchange coefficients are specified through a discrete probability mass function. Semianalytical cylindrically symmetric solutions to the multiporosity mathematical model are developed using the Laplace transform to illustrate its behavior. Furthermore, the multiporosity model presented here is conceptually simple, yet flexible enough to simulate common conceptualizations of double- and triple-porosity flow. This combination of generality and simplicity makes the multiporosity model a good choice for flow modelling in low-permeability fractured rocks.

  2. Motor and sensory neuropathy due to myelin infolding and paranodal damage in a transgenic mouse model of Charcot–Marie–Tooth disease type 1C

    PubMed Central

    Lee, Samuel M.; Sha, Di; Mohammed, Anum A.; Asress, Seneshaw; Glass, Jonathan D.; Chin, Lih-Shen; Li, Lian

    2013-01-01

    Charcot–Marie–Tooth disease type 1C (CMT1C) is a dominantly inherited motor and sensory neuropathy. Despite human genetic evidence linking missense mutations in SIMPLE to CMT1C, the in vivo role of CMT1C-linked SIMPLE mutations remains undetermined. To investigate the molecular mechanism underlying CMT1C pathogenesis, we generated transgenic mice expressing either wild-type or CMT1C-linked W116G human SIMPLE. Mice expressing mutant, but not wild type, SIMPLE develop a late-onset motor and sensory neuropathy that recapitulates key clinical features of CMT1C disease. SIMPLE mutant mice exhibit motor and sensory behavioral impairments accompanied by decreased motor and sensory nerve conduction velocity and reduced compound muscle action potential amplitude. This neuropathy phenotype is associated with focally infolded myelin loops that protrude into the axons at paranodal regions and near Schmidt–Lanterman incisures of peripheral nerves. We find that myelin infolding is often linked to constricted axons with signs of impaired axonal transport and to paranodal defects and abnormal organization of the node of Ranvier. Our findings support that SIMPLE mutation disrupts myelin homeostasis and causes peripheral neuropathy via a combination of toxic gain-of-function and dominant-negative mechanisms. The results from this study suggest that myelin infolding and paranodal damage may represent pathogenic precursors preceding demyelination and axonal degeneration in CMT1C patients. PMID:23359569

  3. Identification of research hypotheses and new knowledge from scientific literature.

    PubMed

    Shardlow, Matthew; Batista-Navarro, Riza; Thompson, Paul; Nawaz, Raheel; McNaught, John; Ananiadou, Sophia

    2018-06-25

    Text mining (TM) methods have been used extensively to extract relations and events from the literature. In addition, TM techniques have been used to extract various types or dimensions of interpretative information, known as Meta-Knowledge (MK), from the context of relations and events, e.g. negation, speculation, certainty and knowledge type. However, most existing methods have focussed on the extraction of individual dimensions of MK, without investigating how they can be combined to obtain even richer contextual information. In this paper, we describe a novel, supervised method to extract new MK dimensions that encode Research Hypotheses (an author's intended knowledge gain) and New Knowledge (an author's findings). The method incorporates various features, including a combination of simple MK dimensions. We identify previously explored dimensions and then use a random forest to combine these with linguistic features into a classification model. To facilitate evaluation of the model, we have enriched two existing corpora annotated with relations and events, i.e., a subset of the GENIA-MK corpus and the EU-ADR corpus, by adding attributes to encode whether each relation or event corresponds to Research Hypothesis or New Knowledge. In the GENIA-MK corpus, these new attributes complement simpler MK dimensions that had previously been annotated. We show that our approach is able to assign different types of MK dimensions to relations and events with a high degree of accuracy. Firstly, our method is able to improve upon the previously reported state-of-the-art performance for an existing dimension, i.e., Knowledge Type. Secondly, we also demonstrate high F1-score in predicting the new dimensions of Research Hypothesis (GENIA: 0.914, EU-ADR: 0.802) and New Knowledge (GENIA: 0.829, EU-ADR: 0.836). We have presented a novel approach for predicting New Knowledge and Research Hypothesis, which combines simple MK dimensions to achieve high F1-scores. The extraction of such information is valuable for a number of practical TM applications.
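
    A schematic of the classifier setup (ours; the features and labels below are invented placeholders, not the GENIA/EU-ADR annotations): simple MK dimensions such as negation and certainty, plus a linguistic cue, feed a random forest that predicts a binary New Knowledge attribute.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import f1_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(6)
      n = 1000
      # Columns: negation, speculation, certainty level, knowledge type, tense cue
      X = np.column_stack([
          rng.integers(0, 2, n), rng.integers(0, 2, n),
          rng.integers(0, 3, n), rng.integers(0, 4, n), rng.integers(0, 2, n),
      ])
      y = ((X[:, 2] == 2) & (X[:, 0] == 0)).astype(int)  # synthetic label rule
      y ^= rng.random(n) < 0.1                           # 10% label noise

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print(f"F1 on held-out data: {f1_score(y_te, clf.predict(X_te)):.3f}")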

  4. Full quantum mechanical analysis of atomic three-grating Mach–Zehnder interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanz, A.S., E-mail: asanz@iff.csic.es; Davidović, M.; Božić, M.

    2015-02-15

    Atomic three-grating Mach–Zehnder interferometry constitutes an important tool to probe fundamental aspects of the quantum theory. There is, however, a remarkable gap in the literature between the oversimplified models and the robust numerical simulations used to describe the corresponding experiments. Consequently, the former usually lead to paradoxical scenarios, such as the wave–particle dual behavior of atoms, while the latter make the data analysis difficult in simple terms. Here these issues are tackled by means of a simple grating working model consisting of evenly-spaced Gaussian slits. As is shown, this model suffices to explore and explain such experiments both analytically and numerically, giving a good account of the full atomic journey inside the interferometer, and hence contributing to make the physics involved less mystical. More specifically, it provides a clear and unambiguous picture of the wavefront splitting that takes place inside the interferometer, illustrating how the momentum along each emerging diffraction order is well defined even though the wave function itself still displays a rather complex shape. To this end, the local transverse momentum is also introduced in this context as a reliable analytical tool. The splitting, apart from being a key issue to understand atomic Mach–Zehnder interferometry, also demonstrates at a fundamental level how wave and particle aspects are always present in the experiment, without incurring any contradiction or interpretive paradox. On the other hand, at a practical level, the generality and versatility of the model and methodology presented make them suitable to attack analogous problems in a simple manner after a convenient tuning.
    Highlights:
    • A simple model is proposed to analyze experiments based on atomic Mach–Zehnder interferometry.
    • The model can be easily handled both analytically and computationally.
    • A theoretical analysis based on the combination of the position and momentum representations is considered.
    • Wave and particle aspects are shown to coexist within the same experiment, thus removing the old wave-corpuscle dichotomy.
    • A good agreement between numerical simulations and experimental data is found without appealing to best-fit procedures.
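
    The core of the Gaussian-slit picture is easy to reproduce (our sketch in arbitrary units, not the paper's code): the far-field momentum distribution behind N evenly spaced Gaussian slits is a Gaussian envelope times an N-slit interference factor, so distinct diffraction orders appear at k = 2πm/d.

      import numpy as np

      sigma, d, N = 0.1, 1.0, 5          # slit width, spacing, number of slits
      k = np.linspace(-15.0, 15.0, 3001) # transverse wavenumber grid

      envelope = np.exp(-(k * sigma) ** 2)                   # Gaussian-slit envelope
      phases = np.exp(1j * np.outer(k, np.arange(N)) * d)    # one column per slit
      intensity = envelope * np.abs(phases.sum(axis=1)) ** 2 # N-slit pattern

      i0 = np.argmin(np.abs(k - 2 * np.pi / d))              # first diffraction order
      print(f"I(k=0) = {intensity[k.size // 2]:.1f}, I(k=2*pi/d) = {intensity[i0]:.1f}")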

  5. Using computational modeling of river flow with remotely sensed data to infer channel bathymetry

    USGS Publications Warehouse

    Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.

    2012-01-01

    As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically-averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally-distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates in ways that cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
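
    A drastically reduced 1D illustration of the inversion (ours; the actual method inverts mass and momentum jointly using water-surface elevation): if the discharge per unit width q is known, mass conservation q = h·u can be inverted directly for depth, and velocity errors map straight into depth errors.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(0.0, 1000.0, 201)                      # streamwise coordinate [m]
      h_true = 2.0 + 0.5 * np.sin(2 * np.pi * x / 400.0)     # synthetic depth [m]
      q = 4.0                                                # unit discharge [m^2/s]
      u_true = q / h_true                                    # exact mean velocity

      u_obs = u_true * (1.0 + rng.normal(0.0, 0.02, x.size)) # 2% velocity error
      h_est = q / u_obs                                      # inverted depth

      err = np.abs(h_est - h_true) / h_true
      print(f"median relative depth error: {100 * np.median(err):.1f}%")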

  6. Testlet-Based Multidimensional Adaptive Testing.

    PubMed

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT is compared with non-adaptive testing for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) in a simulation study considering three ability dimensions with a simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of MAT with the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  7. Complex food webs prevent competitive exclusion among producer species.

    PubMed

    Brose, Ulrich

    2008-11-07

    Herbivorous top-down forces and bottom-up competition for nutrients determine the coexistence and relative biomass patterns of producer species. Combining models of predator-prey and producer-nutrient interactions with a structural model of complex food webs, I investigated these two aspects in a dynamic food-web model. While competitive exclusion leads to persistence of only one producer species in 99.7% of the simulated simple producer communities without consumers, embedding the same producer communities in complex food webs generally yields producer coexistence. In simple producer communities, the producers with the most efficient nutrient-intake rates increase in biomass until they competitively exclude inferior producers. In food webs, herbivory predominantly reduces the biomass density of those producers that dominated in producer communities, which yields a more even biomass distribution. In contrast to prior analyses of simple modules, this facilitation of producer coexistence by herbivory does not require a trade-off between the nutrient-intake efficiency and the resistance to herbivory. The local network structure of food webs (top-down effects of the number of herbivores and the herbivores' maximum consumption rates) and the nutrient supply (bottom-up effect) interactively determine the relative biomass densities of the producer species. A strong negative feedback loop emerges in food webs: factors that increase producer biomasses also increase herbivory, which reduces producer biomasses. This negative feedback loop regulates the coexistence and biomass patterns of the producers by balancing biomass increases of producers and biomass fluxes to herbivores, which prevents competitive exclusion.

  8. A simple phenomenological model for grain clustering in turbulence

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-01-01

    We propose a simple model for density fluctuations of aerodynamic grains, embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy, with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ∼ 10⁻⁴–10, ρ̃ ∼ 0–3, α ∼ 10⁻¹⁰–10⁻². Results from 'turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.

  9. A New Model of Jupiter's Magnetic Field From Juno's First Nine Orbits

    NASA Astrophysics Data System (ADS)

    Connerney, J. E. P.; Kotsiaros, S.; Oliversen, R. J.; Espley, J. R.; Joergensen, J. L.; Joergensen, P. S.; Merayo, J. M. G.; Herceg, M.; Bloxham, J.; Moore, K. M.; Bolton, S. J.; Levin, S. M.

    2018-03-01

    A spherical harmonic model of the magnetic field of Jupiter is obtained from vector magnetic field observations acquired by the Juno spacecraft during its first nine polar orbits about the planet. Observations acquired during eight of these orbits provide the first truly global coverage of Jupiter's magnetic field with a coarse longitudinal separation of 45° between perijoves. The magnetic field is represented with a degree 20 spherical harmonic model for the planetary ("internal") field, combined with a simple model of the magnetodisc for the field ("external") due to distributed magnetospheric currents. Partial solution of the underdetermined inverse problem using generalized inverse techniques yields a model ("Juno Reference Model through Perijove 9") of the planetary magnetic field with spherical harmonic coefficients well determined through degree and order 10, providing the first detailed view of a planetary dynamo beyond Earth.

  10. The problem with simple lumped parameter models: Evidence from tritium mean transit times

    NASA Astrophysics Data System (ADS)

    Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr

    2017-04-01

    Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and errors resulting from disregard of the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly, however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium the error applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole with parameters representing averages for the system). With well-chosen compound LPMs (which are combinations of simple LPMs) on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically- and geologically-validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. (2016): Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
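
    The bias is easy to reproduce in a toy setting (ours; it idealizes a steady tritium input, unlike the real bomb-peak history): for an exponential TTD the steady output concentration is C_in/(1 + λτ), so fitting a single exponential LPM to a blend of young and old water sharply underestimates the true mean.

      import numpy as np

      lam = np.log(2) / 12.32             # tritium decay constant [1/yr]
      tau_young, tau_old = 2.0, 100.0     # subsystem mean transit times [yr]

      def c(tau):
          """C/C_in for an exponential TTD under steady input."""
          return 1.0 / (1.0 + lam * tau)

      c_mix = 0.5 * (c(tau_young) + c(tau_old))   # equal-flow blend
      tau_apparent = (1.0 / c_mix - 1.0) / lam    # single-LPM "MTT"
      tau_true = 0.5 * (tau_young + tau_old)
      print(f"true MTT {tau_true:.0f} yr, apparent MTT {tau_apparent:.1f} yr")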

  11. Real Options in Defense R and D: A Decision Tree Analysis Approach for Options to Defer, Abandon, and Expand

    DTIC Science & Technology

    2016-12-01

    chosen rather than complex ones, and responds to the criticism of the DTA approach. Chapter IV provides three separate case studies in defense R&D...defense R&D projects. To this end, the first section describes the case study method and the advantages of using simple models over more complex ones...the analysis lacked empirical data and relied on subjective data, the analysis successfully combined the DTA approach with the case study method and

  12. Bird song: in vivo, in vitro, in silico

    NASA Astrophysics Data System (ADS)

    Mukherjee, Aryesh; Mandre, Shreyas; Mahadevan, Lakshminarayan

    2010-11-01

    Bird song, long since an inspiration for artists, writers and poets, also poses challenges for scientists interested in dissecting the mechanisms underlying the neural, motor, learning and behavioral systems behind the beak and brain, as a way to recreate and synthesize it. We use a combination of quantitative visualization experiments with physical models and computational theories to understand the simplest aspects of these complex musical boxes, focusing on using the controllable elastohydrodynamic interactions to mimic aural gestures and simple songs.

  13. High angular resolution observations of the cool giant V Hya

    NASA Astrophysics Data System (ADS)

    Pedretti, E.; Monnier, J. D.; Millan Gabet, R.; Traub, W. A.; Tuthill, P.; Danchi, W.; Berger, J.; Schloerb, F. P.; Thureau, N. D.; Carleton, N. P.; Lacasse, M. G.; Schuller, P. A.; Ragland, S.; Brewer, M.

    2005-12-01

    We present the preliminary interferometric observations of the cool giant star V Hya. V Hya, which is known to have mass loss and to be surrounded by a dust shell, was observed in three narrow-band filters in the H bandpass at the Infrared Optical Telescope Array (IOTA), using the IONIC three-telescope beam combiner. The star was also observed at the Keck telescope using an aperture mask. We discuss the results and try to fit simple models to the observed data.

  14. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods, which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
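
    The least-squares flavor of weighting behind the Granger-Ramanathan averages is compact enough to sketch (ours, with synthetic hydrographs standing in for the 12 members; GRA-style, i.e. no intercept and unconstrained weights):

      import numpy as np

      rng = np.random.default_rng(7)
      t = np.arange(730)
      q_obs = 10 + 5 * np.sin(2 * np.pi * t / 365.0) + rng.normal(0.0, 1.0, t.size)

      members = np.column_stack([        # three imperfect "model" hydrographs
          q_obs * 1.3 + 2.0,             # biased
          np.roll(q_obs, 5),             # lagged
          q_obs + rng.normal(0.0, 3.0, t.size),  # noisy
      ])

      cal, val = slice(0, 365), slice(365, 730)
      w, *_ = np.linalg.lstsq(members[cal], q_obs[cal], rcond=None)
      q_avg = members @ w                # weighted multi-model hydrograph

      def nse(sim, obs):
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      print("weights:", np.round(w, 2))
      print(f"validation NSE: {nse(q_avg[val], q_obs[val]):.3f}")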

  15. A model for the space shuttle main engine high pressure oxidizer turbopump shaft seal system

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    1990-01-01

    A simple static model is presented which solves for the flow properties of pressure, temperature, and mass flow in the Space Shuttle Main Engine High Pressure Oxidizer Turbopump shaft seal system. This system includes the primary and secondary turbine seals, the primary and secondary turbine drains, the helium purge seals and feed line, the primary oxygen drain, and the slinger/labyrinth oxygen seal pair. The model predicts the changes in flow variables that occur during and after failures of the various seals. Such information would be particularly useful in a post-flight situation where processing of sensor information using this model could identify a particular seal that had experienced excessive wear. Most of the seals in the system are modeled using simple one-dimensional equations which can be applied to almost any seal provided that the fluid is gaseous. A failure is modeled as an increase in the clearance between the shaft and the seal. Thus, the model does not attempt to predict how the failure process actually occurs (e.g., wear, seal crack initiation). The results presented were obtained using a FORTRAN implementation of the model running on a VAX computer. Solution for the seal system properties is obtained iteratively; however, a further simplified implementation (which does not include the slinger/labyrinth combination) was also developed which provides fast and reasonable results for most engine operating conditions. Results from the model compare favorably with the limited redline data available.

  16. Mitigating stimulated scattering processes in gas-filled Hohlraums via external magnetic fields

    NASA Astrophysics Data System (ADS)

    Gong, Tao; Zheng, Jian; Li, Zhichao; Ding, Yongkun; Yang, Dong; Hu, Guangyue; Zhao, Bin

    2015-09-01

    A simple model, based on energy and pressure equilibrium, is proposed to describe the effect of external magnetic fields on the plasma parameters inside the laser path; the model shows that the electron temperature can be significantly enhanced as the intensity of the external magnetic field increases. With the combination of this model and a 1D three-wave coupling code, the effect of external magnetic fields on the reflectivities of stimulated scattering processes is studied. The results indicate that a magnetic field with an intensity of tens of Tesla can decrease the reflectivities of stimulated scattering processes by several orders of magnitude.

  17. Application of Peterson's stray light model to complex optical instruments

    NASA Astrophysics Data System (ADS)

    Fray, S.; Goepel, M.; Kroneberger, M.

    2016-07-01

    Gary L. Peterson (Breault Research Organization) presented a simple analytical model for in-field stray light evaluation of axial optical systems. We exploited this idea for more complex optical instruments of the Meteosat Third Generation (MTG) mission. For the Flexible Combined Imager (FCI) we evaluated the in-field stray light of its three-mirror anastigmat telescope, while for the Infrared Sounder (IRS) we performed an end-to-end analysis including the front telescope, interferometer and back telescope assembly and the cold optics. A comparison to simulations will be presented. The authors acknowledge the support by ESA and Thales Alenia Space through the MTG satellites program.

  18. Convective stability of a plasma in a system of coupled adiabatic open cells in the Kruskal-Oberman model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsenin, V. V.; Terekhin, P. N.

    2010-08-15

    The Kruskal-Oberman kinetic model is used to determine the conditions for the convective stability of a plasma in a system of coupled axisymmetric adiabatic open cells in which the magnetic field curvature has opposite signs. For a combination of a nonparaxial simple mirror cell and a semicusp, the boundaries of the interval of values of the flux coordinate where the plasma can be stable are determined, as well as the range in which the ratio of the pressures in the component cells should lie. Numerical simulations were carried out for different particle distributions over the pitch angle.

  19. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
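
    A minimal sketch of the presence/background workflow on synthetic data follows; the study's actual covariates, background-selection implementations and tuning are not reproduced, and every value below is illustrative.

```python
# Sketch of a presence/background species-distribution fit with one of the
# algorithms named in the abstract (random forest), using the "random"
# background method on synthetic covariates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Two illustrative environmental covariates on a landscape.
n_presence, n_background = 200, 1000
presence = rng.normal(loc=[1.0, 0.5], scale=0.5, size=(n_presence, 2))
# "Random" background method: sample covariates across the whole domain.
background = rng.uniform(low=-3, high=3, size=(n_background, 2))

X = np.vstack([presence, background])
y = np.r_[np.ones(n_presence), np.zeros(n_background)]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample AUC: {auc:.3f}")  # the study used cross-validation instead
```

    Swapping the background generator (e.g., restricting it to a minimum convex polygon or weighting by a kernel density surface) while keeping the fit fixed is the kind of sensitivity comparison the study performs.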

  20. Theory of nematic order with aggregate dehydration for reversibly assembling proteins in concentrated solutions: Application to sickle-cell hemoglobin polymers

    NASA Astrophysics Data System (ADS)

    Hentschke, Reinhard; Herzfeld, Judith

    1991-06-01

    The reversible association of globular protein molecules in concentrated solution leads to highly polydisperse fibers, e.g., actin filaments, microtubules, and sickle-cell hemoglobin fibers. At high concentrations, excluded-volume interactions between the fibers lead to spontaneous alignment analogous to that in simple lyotropic liquid crystals. However, the phase behavior of reversibly associating proteins is complicated by the threefold coupling between the growth, alignment, and hydration of the fibers. In protein systems aggregates contain substantial solvent, which may cause them to swell or shrink, depending on osmotic stress. Extending previous work, we present a model for the equilibrium phase behavior of the above-noted protein systems in terms of simple intra- and interaggregate interactions, combined with equilibration of fiber-incorporated solvent with the bulk solvent. Specifically, we compare our model results to recent osmotic pressure data for sickle-cell hemoglobin and find excellent agreement. This comparison shows that particle interactions sufficient to cause alignment are also sufficient to squeeze significant amounts of solvent out of protein fibers. In addition, the model is in accord with findings from independent sedimentation and birefringence studies on sickle-cell hemoglobin.

  1. Rocket Engine Oscillation Diagnostics

    NASA Technical Reports Server (NTRS)

    Nesman, Tom; Turner, James E. (Technical Monitor)

    2002-01-01

    Rocket engine oscillating data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.
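
    A minimal sketch of the signal-analysis side of such diagnostics follows: estimate a power spectral density and locate the dominant oscillation frequency. The sample rate and the synthetic 1.2 kHz tone are illustrative stand-ins, not engine data.

```python
# Sketch: Welch PSD estimate and dominant-frequency pick on a synthetic
# oscillation signal buried in noise.
import numpy as np
from scipy.signal import welch

fs = 10240.0                              # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 1200 * t) + 0.5 * np.random.randn(t.size)

f, pxx = welch(x, fs=fs, nperseg=2048)
print(f"dominant frequency: {f[np.argmax(pxx)]:.1f} Hz")
```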

  2. Scaling laws and fluctuations in the statistics of word frequencies

    NASA Astrophysics Data System (ADS)

    Gerlach, Martin; Altmann, Eduardo G.

    2014-11-01

    In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps’ law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps’ and Taylor) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
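
    A minimal simulation of the generative picture described above follows: Zipf-distributed word frequencies, a topic-dependent rescaling, and Poisson usage. The vocabulary size and its spread across texts can then be measured directly; the vocabulary size, Zipf exponent, and topic-noise law are illustrative choices.

```python
# Sketch: Zipf frequencies + topic-dependent rescaling + Poisson counts.
# Under topical variation the std of the vocabulary grows with its mean
# (Taylor's law) instead of decaying as plain sampling would predict.
import numpy as np

rng = np.random.default_rng(1)
W = 50_000                                   # number of word types (assumed)
f = 1.0 / np.arange(1, W + 1)                # Zipf's law, exponent 1
f /= f.sum()

def vocabulary(n_words, topical=True):
    """Number of distinct types observed in a text of n_words tokens."""
    freq = f * rng.lognormal(0.0, 1.0, W) if topical else f  # topic "quench"
    freq /= freq.sum()
    counts = rng.poisson(n_words * freq)
    return np.count_nonzero(counts)

for n in (10**3, 10**4, 10**5):
    v = np.array([vocabulary(n) for _ in range(50)])
    print(f"N={n:>6}: mean V={v.mean():8.1f}, std={v.std():7.1f}")
```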

  3. A simple model for heterogeneous nucleation of isotactic polypropylene

    NASA Astrophysics Data System (ADS)

    Howard, Michael; Milner, Scott

    2013-03-01

    Flow-induced crystallization (FIC) is of interest because of its relevance to processes such as injection molding. It has been suggested that flow increases the homogeneous nucleation rate by reducing the melt state entropy. However, commercial polypropylene (iPP) exhibits quiescent nucleation rates that are much too high to be consistent with homogeneous nucleation in carefully purified samples. This suggests that heterogeneous nucleation is dominant for typical samples used in FIC experiments. We describe a simple model for heterogeneous nucleation of iPP, in terms of a cylindrical nucleus on a flat surface with the critical size and barrier set by the contact angle. Analysis of quiescent crystallization data with this model gives reasonable values for the contact angle. We have also employed atomistic simulations of iPP crystals to determine surface energies with vacuum and with Hamaker-matched substrates, and find values consistent with the contact angles inferred from heterogeneous nucleation experiments. In future work, these results can be combined with melt-rheology calculations of the entropy reduction due to flow to estimate the heterogeneous nucleation barrier reduction due to flow, and hence the increase in nucleation rate due to FIC for commercial iPP.

  4. Young Children and Turtle Graphics Programming: Generating and Debugging Simple Turtle Programs.

    ERIC Educational Resources Information Center

    Cuneo, Diane O.

    Turtle graphics is a popular vehicle for introducing children to computer programming. Children combine simple graphic commands to get a display screen cursor (called a turtle) to draw designs on the screen. The purpose of this study was to examine young children's abilities to function in a simple computer programming environment. Four- and…
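
    A minimal example of the kind of simple turtle program the study describes follows, using Python's standard turtle module as a stand-in for the original Logo-style environment; the square-drawing task is illustrative.

```python
# A few simple graphic commands combined to make the screen cursor
# ("turtle") draw a square.
import turtle

t = turtle.Turtle()
for _ in range(4):      # four sides
    t.forward(100)      # move the turtle 100 units
    t.right(90)         # turn 90 degrees
turtle.done()
```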

  5. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and results show that the proposed method, Sparse-BMFLC, has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall improvement of 6.22% in reconstruction error.
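
    A minimal sketch of the BMFLC idea recast as sparse linear regression follows: a dictionary of sines and cosines on a dense frequency grid, with an l1-penalized (Lasso) fit standing in for the paper's convex optimizer. The band, grid spacing and penalty weight are illustrative, not the paper's settings.

```python
# Sketch: band-limited sin/cos dictionary + Lasso to recover the few
# active frequencies in a synthetic non-stationary-style test signal.
import numpy as np
from sklearn.linear_model import Lasso

fs, dur = 100.0, 4.0
t = np.arange(0, dur, 1 / fs)
y = 1.0 * np.sin(2 * np.pi * 7.3 * t) + 0.5 * np.cos(2 * np.pi * 9.1 * t)

grid = np.arange(6.0, 12.0, 0.1)          # band-limited frequency grid
X = np.hstack([np.sin(2 * np.pi * grid * t[:, None]),
               np.cos(2 * np.pi * grid * t[:, None])])

coef = Lasso(alpha=0.01, max_iter=50_000).fit(X, y).coef_
active = grid[np.unique(np.nonzero(coef)[0] % grid.size)]
print("active frequencies (Hz):", np.round(active, 1))
```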

  6. Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modeling

    NASA Astrophysics Data System (ADS)

    Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.

    2015-09-01

    The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather high temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara (AK) type is combined with detailed spectral microphysics (SPECS) forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nucleating particle (INP) number, ice particle shape). The choice of ice particle shape may induce large uncertainties that are on the same order as those for the temperature-dependent INP number distribution.

  7. Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modelling

    NASA Astrophysics Data System (ADS)

    Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.

    2015-01-01

    The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather warm temperatures of -6 °C. For comparison, a second mixed-phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara type is combined with detailed spectral microphysics, forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nuclei (IN) number, ice particle shape). The choice of ice particle shape may induce large uncertainties which are of the same order as those for the temperature-dependent IN number distribution.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J.; Moon, T.J.; Howell, J.R.

    This paper presents an analysis of the heat transfer occurring during an in-situ curing process for which infrared energy is provided on the surface of the polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites have an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made leading to a two-dimensional, thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables such as infrared energy input and winding velocity to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.

  9. Semantic wireless localization of WiFi terminals in smart buildings

    NASA Astrophysics Data System (ADS)

    Ahmadi, H.; Polo, A.; Moriyama, T.; Salucci, M.; Viani, F.

    2016-06-01

    The wireless localization of mobile terminals in indoor scenarios by means of a semantic interpretation of the environment is addressed in this work. A training-less approach based on the real-time calibration of a simple path loss model is proposed which combines (i) the received signal strength information measured by the wireless terminal and (ii) the topological features of the localization domain. A customized evolutionary optimization technique has been designed to estimate the optimal target position that fits both the complex wireless indoor propagation and the semantic target-environment relation. The proposed approach is experimentally validated in a real building area where the available WiFi network is opportunistically exploited for data collection. The presented results point out a reduction of the localization error obtained with the introduction of a very simple semantic interpretation of the considered scenario.
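
    A minimal sketch of path-loss-based localization with an evolutionary search follows. The log-distance model, anchor layout, and model constants are simplifying assumptions, and the semantic constraints of the paper are omitted.

```python
# Sketch: fit a position to received-signal-strength readings from known
# anchors under a log-distance path loss model, using an evolutionary
# optimizer as a stand-in for the paper's customized technique.
import numpy as np
from scipy.optimize import differential_evolution

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p0, n = -40.0, 2.2                     # RSS at 1 m (dBm), path-loss exponent
true_pos = np.array([3.0, 7.0])

def rss(pos):
    d = np.linalg.norm(anchors - pos, axis=1)
    return p0 - 10.0 * n * np.log10(np.maximum(d, 0.1))

measured = rss(true_pos) + np.random.default_rng(2).normal(0, 1.0, 4)

cost = lambda pos: np.sum((rss(pos) - measured) ** 2)
res = differential_evolution(cost, bounds=[(0, 10), (0, 10)], seed=2)
print("estimated position:", np.round(res.x, 2))
```

    In the paper's setting the cost function would additionally penalize positions that violate the semantic (wall/room topology) constraints.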

  10. The extended Beer-Lambert theory for ray tracing modeling of LED chip-scaled packaging application with multiple luminescence materials

    NASA Astrophysics Data System (ADS)

    Yuan, Cadmus C. A.

    2015-12-01

    Optical ray-tracing models have applied the Beer-Lambert method to a single-luminescence-material system to model the white-light pattern from a blue LED light source. This paper extends the algorithm to a mixed multiple-luminescence-material system by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scaled packaging (CSP), which provides simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results; a parametric investigation is then conducted.

  11. Particle-in-a-box model of one-dimensional excitons in conjugated polymers

    NASA Astrophysics Data System (ADS)

    Pedersen, Thomas G.; Johansen, Per M.; Pedersen, Henrik C.

    2000-04-01

    A simple two-particle model of excitons in conjugated polymers is proposed as an alternative to usual highly computationally demanding quantum chemical methods. In the two-particle model, the exciton is described as an electron-hole pair interacting via Coulomb forces and confined to the polymer backbone by rigid walls. Furthermore, by integrating out the transverse part, the two-particle equation is reduced to one-dimensional form. It is demonstrated how essentially exact solutions are obtained in the cases of short and long conjugation length, respectively. From a linear combination of these cases an approximate solution for the general case is obtained. As an application of the model the influence of a static electric field on the electron-hole overlap integral and exciton energy is considered.
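
    A minimal numerical counterpart of the reduced one-dimensional problem follows: a finite-difference eigensolver for the electron-hole relative coordinate confined by hard walls with a softened Coulomb attraction. The units, box length and softening parameter are illustrative; the paper's treatment is analytic.

```python
# Sketch: 1D "particle in a box" with a softened electron-hole Coulomb
# potential, solved by finite differences (hard walls at the box edges).
import numpy as np

N, L = 400, 20.0                      # grid points, box length (arb. units)
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

V = -1.0 / np.sqrt(x**2 + 1.0)        # softened Coulomb attraction

# -(1/2) d^2/dx^2 by central differences, plus the potential on the diagonal.
H = (np.diag(np.full(N, 1.0 / h**2) + V)
     - np.diag(np.full(N - 1, 0.5 / h**2), 1)
     - np.diag(np.full(N - 1, 0.5 / h**2), -1))

E = np.linalg.eigvalsh(H)
print("lowest exciton levels:", np.round(E[:3], 4))
```

    Shrinking L mimics the short-conjugation-length limit (box-dominated levels), while growing L approaches the long-chain, Coulomb-bound limit the abstract interpolates between.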

  12. Scattering measurements on natural and model trees

    NASA Technical Reports Server (NTRS)

    Rogers, James C.; Lee, Sung M.

    1990-01-01

    The acoustical back scattering from a simple scale model of a tree has been experimentally measured. The model consisted of a trunk and six limbs, each with 4 branches; no foliage or twigs were included. The data from the anechoic chamber measurements were then mathematically combined to construct the effective back scattering from groups of trees. Also, initial measurements have been conducted out-of-doors on a single tree in an open field in order to characterize its acoustic scattering as a function of azimuth angle. These measurements were performed in the spring, prior to leaf development. The data support a statistical model of forest scattering; the scattered signal spectrum is highly irregular but with a remarkable general resemblance to the incident signal spectrum. Also, the scattered signal's spectra showed little dependence upon scattering angle.

  13. Symmetric Fold/Super-Hopf Bursting, Chaos and Mixed-Mode Oscillations in Pernarowski Model of Pancreatic Beta-Cells

    NASA Astrophysics Data System (ADS)

    Fallah, Haniyeh

    Pancreatic beta-cells produce insulin to regulate the blood glucose level. Bursting is important in beta-cells due to its relation to the release of insulin. The Pernarowski model is a simple polynomial model of beta-cell activity that exhibits bursting oscillations in these cells. This paper presents bursting behaviors of symmetric type in this model. In addition, it is shown that the system exhibits period-doubling cascades of canards, which constitute a route to chaos. Canards are also observed symmetrically near folds of the slow manifold, which results in a chaotic transition between n- and (n+1)-spike symmetric bursting. Furthermore, mixed-mode oscillations (MMOs) and a combination of symmetric bursting together with MMOs are illustrated during the transition between symmetric bursting and continuous spiking.
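
    For readers unfamiliar with polynomial fast-slow bursters, a runnable sketch follows. It integrates the Hindmarsh-Rose equations, a close relative of the Pernarowski model, not Pernarowski's own system; the parameters are the classic bursting values.

```python
# Illustrative fast-slow polynomial burster (Hindmarsh-Rose, standing in
# for the Pernarowski equations): two fast variables spike, one slow
# variable switches the bursts on and off.
import numpy as np
from scipy.integrate import solve_ivp

def hr(t, u, a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, xr=-1.6, I=2.0):
    x, y, z = u
    return [y + b * x**2 - a * x**3 - z + I,   # fast voltage-like variable
            c - d * x**2 - y,                  # fast recovery variable
            r * (s * (x - xr) - z)]            # slow burst-gating variable

sol = solve_ivp(hr, (0, 2000), [-1.6, 0.0, 2.0], max_step=0.1)
spikes = np.sum((sol.y[0][1:] > 1.0) & (sol.y[0][:-1] <= 1.0))
print(f"spikes detected: {spikes}")
```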

  14. Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations

    PubMed Central

    Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán

    2016-01-01

    Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal-signal-to-noise ratio weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165
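
    A minimal sketch of two of the compared combination strategies on synthetic dual-echo time series follows; array sizes and noise levels are illustrative, not the study's data.

```python
# Sketch: simple arithmetic averaging vs. temporal-SNR weighting of two
# echo time series, the weighted average being w_i = tSNR_i / sum_j tSNR_j.
import numpy as np

rng = np.random.default_rng(3)
T = 200                                   # time points
echo1 = 100 + rng.normal(0, 2.0, T)       # TE1 series (higher signal)
echo2 = 60 + rng.normal(0, 3.0, T)        # TE2 series (noisier)

# (1) simple arithmetic averaging
avg = 0.5 * (echo1 + echo2)

# (3) temporal-SNR weighting
tsnr = np.array([echo1.mean() / echo1.std(), echo2.mean() / echo2.std()])
w = tsnr / tsnr.sum()
weighted = w[0] * echo1 + w[1] * echo2

print(f"tSNR: average={avg.mean()/avg.std():.1f}, "
      f"weighted={weighted.mean()/weighted.std():.1f}")
```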

  15. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  16. Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007

    2014-01-15

    Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or inclusion of photon energy information into data processing. There is a variety of publicly available tools for estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data, the combined dataset required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra in microfocus x-ray sources has been developed and validated experimentally. Similarly to other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.

  17. Effects of Moisture and Particle Size on Quantitative Determination of Total Organic Carbon (TOC) in Soils Using Near-Infrared Spectroscopy.

    PubMed

    Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe

    2017-10-17

    Near-Infrared Spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and set up a robust and reliable calibration model, with the future perspective of being applied in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods, including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied, for a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 on partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has a minor influence on the quality of near-infrared spectroscopy (NIR) predictions, laying the groundwork for a direct and fast in situ application of the method. Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
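
    A minimal sketch of the best-performing preprocessing chain (SNV followed by a second derivative) ahead of a PLS fit follows, on synthetic spectra; the band count, smoothing window and component number are assumptions, not the paper's settings.

```python
# Sketch: SNV normalization + Savitzky-Golay second derivative + PLS
# calibration on synthetic spectra with a synthetic TOC target.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_samples, n_bands = 46, 300
spectra = rng.normal(1.0, 0.1, (n_samples, n_bands)).cumsum(axis=1)
toc = spectra[:, 120] * 0.05 + rng.normal(0, 0.02, n_samples)  # synthetic TOC

# Standard normal variate: center and scale each spectrum individually.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / \
      spectra.std(axis=1, keepdims=True)
# Second derivative (DV2) along the wavelength axis.
dv2 = savgol_filter(snv, window_length=11, polyorder=2, deriv=2, axis=1)

pls = PLSRegression(n_components=5).fit(dv2, toc)
print(f"calibration R^2: {pls.score(dv2, toc):.3f}")
```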

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, J.D.; Woan, G.

    Data from the Laser Interferometer Space Antenna (LISA) is expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.
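
    The principal-component argument can be illustrated on a toy model: if the dominant laser noise enters the data through a single direction v, the eigenvectors of the noise covariance orthogonal to v are laser-noise-free combinations, analogous to the TDI variables. Dimensions and noise levels below are illustrative, not a LISA model.

```python
# Sketch: eigen-decomposition of a toy noise covariance with one dominant
# laser-noise direction; all but the top eigenvector are insensitive to it.
import numpy as np

rng = np.random.default_rng(5)
m = 6                                      # number of data streams (assumed)
v = rng.normal(size=m)                     # laser-noise coupling direction
v /= np.linalg.norm(v)

sigma_laser, sigma_sec = 100.0, 1.0        # laser noise dominates
C = sigma_laser**2 * np.outer(v, v) + sigma_sec**2 * np.eye(m)

eigval, eigvec = np.linalg.eigh(C)         # ascending eigenvalues
print("eigenvalues:", np.round(eigval, 2))
print("overlap |v . e_i|:", np.round(np.abs(eigvec.T @ v), 6))
# All eigenvectors except the last have ~zero overlap with v: these are
# the laser-noise-insensitive (TDI-like) combinations.
```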

  19. Optimal systems of geoscience surveying: A preliminary discussion

    NASA Astrophysics Data System (ADS)

    Shoji, Tetsuya

    2006-10-01

    In any geoscience survey, each survey technique must be effectively applied, and many techniques are often combined optimally. An important task is to get necessary and sufficient information to meet the requirement of the survey. A prize-penalty function quantifies effectiveness of the survey, and hence can be used to determine the best survey technique. On the other hand, an information-cost function can be used to determine the optimal combination of survey techniques on the basis of the geoinformation obtained. Entropy is available to evaluate geoinformation. A simple model suggests the possibility that low-resolvability techniques are generally applied at early stages of survey, and that higher-resolvability techniques should alternate with lower-resolvability ones with the progress of the survey.

  20. Assessment of steam-injected gas turbine systems and their potential application

    NASA Technical Reports Server (NTRS)

    Stochl, R. J.

    1982-01-01

    Results were arrived at by utilizing and expanding on information presented in the literature. The results were analyzed and compared with those for simple gas turbine and combined cycles for both utility power generation and industrial cogeneration applications. The efficiency and specific power of simple gas turbine cycles can be increased as much as 30 and 50 percent, respectively, by the injection of steam into the combustor. Steam-injected gas turbines appear to be economically competitive with both simple gas turbine and combined cycles for small, clean-fuel-fired utility power generation and industrial cogeneration applications. For large powerplants with integrated coal gasifiers, the economic advantages appear to be marginal.

  1. Herding, minority game, market clearing and efficient markets in a simple spin model framework

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav; Vosvrda, Miloslav

    2018-01-01

    We present a novel approach towards the financial Ising model. Most studies utilize the model to find settings which generate returns closely mimicking the financial stylized facts such as fat tails, volatility clustering and persistence, and others. We tackle the model utility from the other side and look for the combination of parameters which yields return dynamics of the efficient market in the view of the efficient market hypothesis. Working with the Ising model, we are able to present nicely interpretable results as the model is based on only two parameters. Apart from showing the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that in fact market frictions (to a certain level) and herding behavior of the market participants do not go against market efficiency but what is more, they are needed for the markets to be efficient.

  2. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  3. Universal dispersion model for characterization of optical thin films over wide spectral range: Application to magnesium fluoride

    NASA Astrophysics Data System (ADS)

    Franta, Daniel; Nečas, David; Giglia, Angelo; Franta, Pavel; Ohlídal, Ivan

    2017-11-01

    Optical characterization of magnesium fluoride thin films is performed in a wide spectral range from far infrared to extreme ultraviolet (0.01-45 eV) utilizing the universal dispersion model. Two film defects, i.e., random roughness of the upper boundary and a defect transition layer at the lower boundary, are taken into account. An extension of the universal dispersion model consisting in expressing the excitonic contributions as linear combinations of Gaussian and truncated Lorentzian terms is introduced. The spectral dependencies of the optical constants are presented in graphical form and by the complete set of dispersion parameters, which allows generating tabulated optical constants with the required range and step using a simple utility in the newAD2 software package.

  4. Forecasting the realized volatility of the Chinese stock market: Do the G7 stock markets help?

    NASA Astrophysics Data System (ADS)

    Peng, Huan; Chen, Ruoxun; Mei, Dexiang; Diao, Xiaohua

    2018-07-01

    In this paper, we take a comprehensive look at whether the G7 stock markets contain predictive information that can help in forecasting Chinese stock market volatility. Our out-of-sample empirical results indicate that the kitchen sink (HAR-RV-SK) model is able to attain better performance than the benchmark model (HAR-RV) and other models, implying that the G7 stock markets can help in predicting the one-day volatility of the Chinese stock market. Moreover, the kitchen sink strategy can beat the strategy of simple combination forecasts. Finally, the G7 stock markets do indeed contain useful information, which can increase the forecast accuracy for the Chinese stock market.
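
    For reference, a minimal sketch of the HAR-RV benchmark regression follows, fitted by least squares on synthetic realized volatility; the G7 terms of the kitchen-sink model would simply enter as extra columns. The data and lag conventions (5-day and 22-day averages) are illustrative.

```python
# Sketch of the HAR-RV benchmark:
# RV_{t+1} = b0 + bd*RV_t + bw*mean(RV_{t-4..t}) + bm*mean(RV_{t-21..t})
import numpy as np

rng = np.random.default_rng(6)
rv = np.abs(rng.normal(1.0, 0.3, 1000))     # synthetic realized volatility

def har_design(rv):
    rows = []
    for t in range(21, len(rv) - 1):
        rows.append([1.0, rv[t],
                     rv[t - 4:t + 1].mean(),    # weekly component
                     rv[t - 21:t + 1].mean()])  # monthly component
    return np.array(rows), rv[22:]

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR-RV coefficients [b0, bd, bw, bm]:", np.round(beta, 3))
```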

  5. Reduction of a linear complex model for respiratory system during Airflow Interruption.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper presents a methodology for reducing a complex model to a simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of an interrupter experiment. The final result - the reduced analog for the interrupter technique - is especially noteworthy, as it fills a major gap in occlusional measurements, which typically use simple, one- or two-element physical representations. The proposed reduced electrical circuit, being a structural combination of resistive, inertial and elastic properties, can be perceived as a candidate for reliable reconstruction and quantification (in the time and frequency domain) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.

  6. Control technology development

    NASA Astrophysics Data System (ADS)

    Schaechter, D. B.

    1982-03-01

    The main objectives of the control technology development task are given in the slide below. The first is to develop control design techniques based on flexible structural models, rather than simple rigid-body models. Since large space structures are distributed parameter systems, a new degree of freedom, that of sensor/actuator placement, may be exercised for improving control system performance. Another characteristic of large space structures is numerous oscillatory modes within the control bandwidth. Reduced-order controller design models must be developed which produce stable closed-loop systems when combined with the full-order system. Since the date of an actual large-space-structure flight is rapidly approaching, it is vitally important that theoretical developments are tested in actual hardware. Experimental verification is a vital counterpart of all current theoretical developments.

  7. Probing the exchange statistics of one-dimensional anyon models

    NASA Astrophysics Data System (ADS)

    Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis

    2018-05-01

    We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we show as well that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.

  8. The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    NASA Astrophysics Data System (ADS)

    Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk

    2017-08-01

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.

  9. A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    NASA Astrophysics Data System (ADS)

    Kieseler, Jan

    2017-11-01

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data, all such fitted parameters are correlated with each other. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is only rarely publicly available. In the absence of this information, most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections.
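
    A minimal sketch of the covariance-based combination of two measurements of the same quantity follows (a best-linear-unbiased-estimate calculation from published central values and their covariance); the article's detailed nuisance-parameter bookkeeping is not reproduced, and the numbers are illustrative.

```python
# Sketch: combine two correlated measurements x of one quantity using only
# their published covariance C: xhat = (J^T C^-1 J)^-1 J^T C^-1 x.
import numpy as np

x = np.array([10.2, 9.6])                 # two published measurements
C = np.array([[0.25, 0.15],               # their covariance, including the
              [0.15, 0.36]])              # correlation induced by shared fits

J = np.ones((2, 1))                       # both measure the same quantity
Cinv = np.linalg.inv(C)
var = np.linalg.inv(J.T @ Cinv @ J)       # combined variance
xhat = var @ J.T @ Cinv @ x               # combined central value

print(f"combined: {xhat.item():.3f} +/- {np.sqrt(var.item()):.3f}")
```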

  10. Modeling climate change impacts on combined sewer overflow using synthetic precipitation time series.

    PubMed

    Bendel, David; Beck, Ferdinand; Dittmer, Ulrich

    2013-01-01

    In the presented study climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. As input data, synthetic rainfall time series were used. The applied precipitation generator NiedSim-Klima accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of both periods we observe significantly higher overflow frequencies for the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is a wide variation between the results of different precipitation time series (representative for different locations).

  11. Simultaneous acquisition of 3D shape and deformation by combination of interferometric and correlation-based laser speckle metrology.

    PubMed

    Dekiff, Markus; Berssenbrügge, Philipp; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2015-12-01

    A metrology system combining three laser speckle measurement techniques for simultaneous determination of 3D shape and micro- and macroscopic deformations is presented. While microscopic deformations are determined by a combination of Digital Holographic Interferometry (DHI) and Digital Speckle Photography (DSP), macroscopic 3D shape, position and deformation are retrieved by photogrammetry based on digital image correlation of a projected laser speckle pattern. The photogrammetrically obtained data extend the measurement range of the DHI-DSP system and also increase the accuracy of the calculation of the sensitivity vector. Furthermore, a precise assignment of microscopic displacements to the object's macroscopic shape for enhanced visualization is achieved. The approach allows for fast measurements with a simple setup. Key parameters of the system are optimized, and its precision and measurement range are demonstrated. As application examples, the deformation of a mandible model and the shrinkage of dental impression material are measured.

  12. Evaluation of Preduster in Cement Industry Based on Computational Fluid Dynamic

    NASA Astrophysics Data System (ADS)

    Septiani, E. L.; Widiyastuti, W.; Djafaar, A.; Ghozali, I.; Pribadi, H. M.

    2017-10-01

    Ash-laden hot air from clinker in the cement industry is used to reduce the water content of coal; however, it may contain a large amount of ash even after treatment by a preduster. This study investigated the performance of the preduster, a cyclone separator, in the cement industry using the Computational Fluid Dynamics method. In general, a cyclone performs best when it achieves relatively high collection efficiency at a low pressure drop. The Reynolds-Averaged Navier-Stokes (RANS) equations with the standard k-ε turbulence model, an accurate yet simple choice, were combined with a Lagrangian particle-tracking model to solve the problem. The quantities evaluated in the simulations are the flow pattern in the cyclone, the outlet pressure, and the collection efficiency of the preduster. The applied model predicted well, as shown by comparison with the most accurate empirical model and with experimental measurements of the outlet pressure.

  13. Transport in Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Datta, S.; Xue, Yong-Qinag; Anantram, M. P.; Saini, Subhash (Technical Monitor)

    1999-01-01

    This presentation discusses coupling between carbon nanotubes (CNT), simple metals (FEG) and a graphene sheet. The graphene sheet did not couple well with FEG, but the combination of a graphene strip and CNT did couple well with most simple metals.

  14. Landscape structure and climate influences on hydrologic response

    NASA Astrophysics Data System (ADS)

    Nippgen, Fabian; McGlynn, Brian L.; Marshall, Lucy A.; Emanuel, Ryan E.

    2011-12-01

    Climate variability and catchment structure (topography, geology, vegetation) have a significant influence on the timing and quantity of water discharged from mountainous catchments. How these factors combine to influence runoff dynamics is poorly understood. In this study we linked differences in hydrologic response across catchments and across years to metrics of landscape structure and climate using a simple transfer function rainfall-runoff modeling approach. A transfer function represents the internal catchment properties that convert a measured input (rainfall/snowmelt) into an output (streamflow). We examined modeled mean response time, defined as the average time that it takes for a water input to leave the catchment outlet from the moment it reaches the ground surface. We combined 12 years of precipitation and streamflow data from seven catchments in the Tenderfoot Creek Experimental Forest (Little Belt Mountains, southwestern Montana) with landscape analyses to quantify the first-order controls on mean response times. Differences between responses across the seven catchments were related to the spatial variability in catchment structure (e.g., slope, flowpath lengths, tree height). Annual variability was largely a function of maximum snow water equivalent. Catchment averaged runoff ratios exhibited strong correlations with mean response time while annually averaged runoff ratios were not related to climatic metrics. These results suggest that runoff ratios in snowmelt dominated systems are mainly controlled by topography and not by climatic variability. This approach provides a simple tool for assessing differences in hydrologic response across diverse watersheds and climate conditions.
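
    A minimal sketch of the transfer-function idea follows: streamflow as the convolution of effective precipitation with a transfer function whose first moment is the mean response time. The gamma shape and scale below are illustrative choices, not values fitted to Tenderfoot Creek.

```python
# Sketch: gamma-shaped transfer function h(tau); streamflow = precip * h,
# and the mean response time is the first moment of h.
import numpy as np
from scipy.stats import gamma

dt = 1.0                                    # daily time step
tau = np.arange(0, 120, dt)
g = gamma.pdf(tau, a=2.0, scale=10.0)       # transfer function h(tau)
g /= g.sum() * dt                           # normalize to unit volume

mean_response_time = np.sum(tau * g) * dt   # first moment, in days
print(f"mean response time: {mean_response_time:.1f} days")

rng = np.random.default_rng(7)
precip = rng.exponential(2.0, 365) * (rng.random(365) < 0.3)
streamflow = np.convolve(precip, g * dt)[:365]
print(f"mean simulated streamflow: {streamflow.mean():.2f}")
```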

  15. A potential to monitor nutrients as an indicator of rangeland quality using space borne remote sensing

    NASA Astrophysics Data System (ADS)

    Ramoelo, A.; Cho, M. A.; Madonsela, S.; Mathieu, R.; van der Korchove, R.; Kaszta, Z.; Wolf, E.

    2014-02-01

    Global change, consisting of land use and climate change, could have huge impacts on food security and the health of various ecosystems. Leaf nitrogen (N) is one of the key factors limiting agricultural production and ecosystem functioning. Leaf N can be used as an indicator of rangeland quality, which could provide information for farmers, decision makers, land planners and managers. Leaf N plays a crucial role in understanding the feeding patterns and distribution of wildlife and livestock. Assessment of this vegetation parameter using conventional methods at the landscape scale is time consuming and tedious. Remote sensing provides a synoptic view of the landscape, which engenders an opportunity to assess leaf N over wider rangeland areas, from protected to communal areas. Estimation of leaf N has been successful during peak productivity or high biomass, and few studies have estimated leaf N in the dry season. The objective of this study is to monitor leaf N as an indicator of rangeland quality using WorldView 2 satellite images in the north-eastern part of South Africa. A series of field campaigns to collect samples for leaf N was undertaken at the beginning of May (end of wet season) and in July (dry season). Several conventional and red-edge-based vegetation indices were computed. Simple regression was used to develop prediction models for leaf N. Using bootstrapping, indicators of precision and accuracy were analyzed to select the best model for the combined data sets (May and July). The May model based on a red-edge simple ratio explained over 90% of the variation in leaf N. The model developed from the combined data sets with the normalized difference vegetation index explained 62% of the leaf N variation, and this model was used to estimate and map leaf N for the two seasons. The study demonstrated that leaf N could be monitored using high-spatial-resolution imagery with red-edge band capability.

  16. A mass-balance code for the quantitative interpretation of fluid column profiles in ground-water studies

    NASA Astrophysics Data System (ADS)

    Paillet, Frederick

    2012-08-01

    A simple mass-balance code allows effective modeling of conventional fluid column resistivity logs in dilution tests involving column replacement with either distilled water or dilute brine. Modeling a series of column profiles where the inflowing formation water introduces water quality interfaces propagating along the borehole gives effective estimates of the rate of borehole flow. Application of the dilution model yields estimates of borehole flow rates that agree with measurements made with the heat-pulse flowmeter under ambient and pumping conditions. Model dilution experiments are used to demonstrate how dilution logging can extend the range of borehole flow measurement at least an order of magnitude beyond that achieved with flowmeters. However, dilution logging has the same dynamic range limitation encountered with flowmeters because it is difficult to detect and characterize flow zones that contribute a small fraction of total flow when that contribution is superimposed on a larger flow. When the smaller contribution is located below the primary zone, ambient downflow may disguise the zone if pumping is not strong enough to reverse the outflow. This situation can be addressed by increased pumping. But this is likely to make the moveout of water quality interfaces too fast to measure in the upper part of the borehole, so that a combination of flowmeter and dilution method may be more appropriate. Numerical experiments show that the expected weak horizontal flow across the borehole at conductive zones would be almost impossible to recognize if any ambient vertical flow is present. In situations where natural water quality differences occur such as flowing boreholes or injection experiments, the simple mass-balance code can be used to quantitatively model the evolution of fluid column logs. Otherwise, dilution experiments can be combined with high-resolution flowmeter profiles to obtain results not attainable using either method alone.

  17. Fisher's method of combining dependent statistics using generalizations of the gamma distribution with applications to genetic pleiotropic associations.

    PubMed

    Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang

    2014-04-01

    A classical approach to combine independent test statistics is Fisher's combination of $p$-values, which follows the $\chi^2$ distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution to use for the FCT. A simple strategy for using the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
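
    A minimal sketch of the gamma version of the FCT follows: match the first two moments of $T = -2\sum_i \log p_i$ under dependence and compute the combined p-value from the fitted gamma. The generalized and exponentiated refinements of the paper are not reproduced, and the assumed variance inflation is illustrative.

```python
# Sketch: moment-matched gamma approximation for Fisher's combination of
# dependent p-values. Under independence E[T]=2k and Var[T]=4k; dependence
# inflates the variance (the 1.5 factor below is an assumption).
import numpy as np
from scipy.stats import gamma

pvals = np.array([0.03, 0.20, 0.01, 0.08])
T = -2.0 * np.sum(np.log(pvals))

k = len(pvals)
mean_T, var_T = 2.0 * k, 4.0 * k * 1.5     # assumed 50% variance inflation

shape = mean_T**2 / var_T                  # moment-matched gamma parameters
scale = var_T / mean_T
p_combined = gamma.sf(T, a=shape, scale=scale)
print(f"T = {T:.2f}, combined p = {p_combined:.4f}")
```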

  18. Bumblebees minimize control challenges by combining active and passive modes in unsteady winds

    NASA Astrophysics Data System (ADS)

    Ravi, Sridhar; Kolomenskiy, Dmitry; Engels, Thomas; Schneider, Kai; Wang, Chun; Sesterhenn, Jörn; Liu, Hao

    2016-10-01

    The natural wind environment that volant insects encounter is unsteady and highly complex, posing significant flight-control and stability challenges. It is critical to understand the strategies insects employ to safely navigate in natural environments. We combined experiments on free flying bumblebees with high-fidelity numerical simulations and lower-order modeling to identify the mechanics that mediate insect flight in unsteady winds. We trained bumblebees to fly upwind towards an artificial flower in a wind tunnel under steady wind and in a von Kármán street formed in the wake of a cylinder. Analysis revealed that at lower frequencies in both steady and unsteady winds the bees mediated lateral movement with body roll - typical casting motion. Numerical simulations of a bumblebee in similar conditions permitted the separation of the passive and active components of the flight trajectories. Consequently, we derived simple mathematical models that describe these two motion components. Comparison between the free-flying live and modeled bees revealed a novel mechanism that enables bees to passively ride out high-frequency perturbations while performing active maneuvers at lower frequencies. The capacity of maintaining stability by combining passive and active modes at different timescales provides a viable means for animals and machines to tackle the challenges posed by complex airflows.

  19. Combined collapse by bridging and self-adhesion in a prototypical polymer model inspired by the bacterial nucleoid

    NASA Astrophysics Data System (ADS)

    Scolari, Vittore F.; Cosentino Lagomarsino, Marco

    Recent experimental results suggest that the E. coli chromosome feels a self-attracting interaction of osmotic origin, and is condensed in foci by bridging interactions. Motivated by these findings, we explore a generic modeling framework combining solely these two ingredients, in order to characterize their joint effects. Specifically, we study a simple polymer physics computational model with weak ubiquitous short-ranged self attraction and stronger sparse bridging interactions. Combining theoretical arguments and simulations, we study the general phenomenology of polymer collapse induced by these dual contributions, in the case of regularly-spaced bridging. Our results distinguish a regime of classical Flory-like coil-globule collapse dictated by the interplay of excluded volume and attractive energy and a switch-like collapse where bridging interaction compete with entropy loss terms from the looped arms of a star-like rosette. Additionally, we show that bridging can induce stable compartmentalized domains. In these configurations, different "cores" of bridging proteins are kept separated by star-like polymer loops in an entropically favorable multi-domain configuration, with a mechanism that parallels micellar polysoaps. Such compartmentalized domains are stable, and do not need any intra-specific interactions driving their segregation. Domains can be stable also in presence of uniform attraction, as long as the uniform collapse is above its theta point.

  20. Clock Drawing Test and the diagnosis of amnestic mild cognitive impairment: can more detailed scoring systems do the work?

    PubMed

    Rubínová, Eva; Nikolai, Tomáš; Marková, Hana; Siffelová, Kamila; Laczó, Jan; Hort, Jakub; Vyhnálek, Martin

    2014-01-01

    The Clock Drawing Test is a frequently used cognitive screening test with several scoring systems in elderly populations. We compare simple and complex scoring systems and evaluate the usefulness of the combination of the Clock Drawing Test with the Mini-Mental State Examination to detect patients with mild cognitive impairment. Patients with amnestic mild cognitive impairment (n = 48) and age- and education-matched controls (n = 48) underwent neuropsychological examinations, including the Clock Drawing Test and the Mini-Mental State Examination. Clock drawings were scored by three blinded raters using one simple (6-point scale) and two complex (17- and 18-point scales) systems. The sensitivity and specificity of these scoring systems used alone and in combination with the Mini-Mental State Examination were determined. Complex scoring systems, but not the simple scoring system, were significant predictors of the amnestic mild cognitive impairment diagnosis in logistic regression analysis. At equal levels of sensitivity (87.5%), the Mini-Mental State Examination showed higher specificity (31.3%, compared with 12.5% for the 17-point Clock Drawing Test scoring scale). The combination of Clock Drawing Test and Mini-Mental State Examination scores increased the area under the curve (0.72; p < .001) and increased specificity (43.8%), but did not increase sensitivity, which remained high (85.4%). A simple 6-point scoring system for the Clock Drawing Test did not differentiate between healthy elderly and patients with amnestic mild cognitive impairment in our sample. Complex scoring systems were slightly more efficient, yet still were characterized by high rates of false-positive results. We found psychometric improvement using combined scores from the Mini-Mental State Examination and the Clock Drawing Test when complex scoring systems were used. The results of this study support the benefit of using combined scores from simple methods.

  1. Validation of a simple distributed sediment delivery approach in selected sub-basins of the River Inn catchment area

    NASA Astrophysics Data System (ADS)

    Reid, Lucas; Kittlaus, Steffen; Scherer, Ulrike

    2015-04-01

    For large areas without highly detailed data, the empirical Universal Soil Loss Equation (USLE) is widely used to quantify soil loss. The difficulty, though, usually lies in quantifying the actual sediment influx into the rivers. Because the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). Spatially lumped approaches, however, become problematic in large catchment areas whose geographical properties vary widely. In this study we developed a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in the catchments. The sediment delivery ratio was determined using an empirical approach that considers the slope, morphology and land-use properties along the flow path as an estimate of the travel time of the eroded particles. The model was tested against suspended-solids measurements in selected sub-basins of the River Inn catchment area in Germany and Austria, ranging from the high alpine south to the Molasse basin in the northern part.
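
    A minimal sketch of the general idea (not the authors' exact formulation): accumulate a travel-time proxy from slope and land-use roughness along each flow path, and map the total to an SDR through an exponential decay. All functional forms and parameter values below are illustrative assumptions.

```python
# Illustrative sketch of a distributed sediment delivery ratio: travel
# time is accumulated cell by cell along the flow path, with velocity
# growing with sqrt(slope) and reduced by a land-use roughness factor.
# All forms and values are hypothetical, not the paper's calibration.
import math

def cell_travel_time(length_m, slope, roughness):
    """Travel time through one grid cell [s]; purely illustrative."""
    velocity = max(0.01, math.sqrt(slope)) / roughness   # m/s
    return length_m / velocity

def sediment_delivery_ratio(flow_path, beta=1e-3):
    """SDR = exp(-beta * total travel time) along the cell sequence."""
    t_total = sum(cell_travel_time(L, s, n) for (L, s, n) in flow_path)
    return math.exp(-beta * t_total)

# Flow path as (cell length [m], slope [-], roughness [-]) triples.
path = [(25.0, 0.15, 1.0), (25.0, 0.08, 1.5), (25.0, 0.02, 2.5)]
print(f"SDR = {sediment_delivery_ratio(path):.3f}")
```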

  2. A simulation assessment of the thermodynamics of dense ion-dipole mixtures with polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastea, Sorin, E-mail: sbastea@llnl.gov

    Molecular dynamics (MD) simulations are employed to ascertain the relative importance of various electrostatic interaction contributions, including induction interactions, to the thermodynamics of dense, hot ion-dipole mixtures. In the absence of polarization, we find that an MD-constrained free energy term accounting for the ion-dipole interactions, combined with well-tested ionic and dipolar contributions, yields a simple, fairly accurate free energy form that may be a better option for describing the thermodynamics of such mixtures than the mean spherical approximation (MSA). Polarization contributions induced by the presence of permanent dipoles and ions are found to be additive to a good approximation, simplifying the thermodynamic modeling. We suggest simple free energy corrections that account for these two effects, based in part on standard perturbative treatments and in part on comparisons with MD simulation. Even though the proposed approximations likely need further study, they provide a first quantitative assessment of polarization contributions at high densities and temperatures and may serve as a guide for future modeling efforts.

  3. Convergent chaos

    NASA Astrophysics Data System (ADS)

    Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael

    2017-07-01

    Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multifaceted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the ‘butterfly effect’ needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.

  4. Flight evaluation of a simple total energy-rate system with potential wind-shear application

    NASA Technical Reports Server (NTRS)

    Ostroff, A. J.; Hueschen, R. M.; Hellbaum, R. F.; Creedon, J. F.

    1981-01-01

    Wind shears can create havoc during aircraft terminal-area operations and have been cited as the primary cause of several major aircraft accidents. A simple sensor, potentially applicable to the wind-shear problem, was developed to rapidly measure aircraft total energy relative to the air mass. Combining this sensor with either a variometer or a rate-of-climb indicator provides a total energy-rate system that has been successfully applied in soaring flight. The measured rate of change of aircraft energy can potentially be used in display/control systems of powered aircraft to reduce glide-slope deviations caused by wind shear. The experimental flight configuration and evaluations of the energy-rate system are described. Two mathematical models are developed: the first describes operation of the energy probe in its linear design region, and the second covers the nonlinear region. The calculated total energy rate is compared with measured signals for many different flight tests. Time-history plots show the two curves to be almost identical in the linear operating region and very close in the nonlinear region.
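
    The underlying quantity is simple to state: specific total energy relative to the air mass is the sum of potential and kinetic energy heights, E = h + V²/(2g), and the probe-plus-variometer system measures its rate of change. A minimal sketch with invented flight data:

```python
# Specific total energy (energy height) E = h + V^2 / (2g) and its rate
# of change, the quantity the total energy-rate system measures. The
# altitude and airspeed traces below are illustrative only.
import numpy as np

g = 9.81  # m/s^2

def specific_energy(h, v):
    """Specific total energy in meters of energy height."""
    return h + v**2 / (2.0 * g)

t = np.linspace(0.0, 10.0, 101)    # time [s]
h = 300.0 + 2.0 * t                # climbing at 2 m/s (invented)
v = 60.0 - 0.5 * t                 # decelerating in a shear layer (invented)

E = specific_energy(h, v)
E_rate = np.gradient(E, t)         # numerical d/dt of energy height
print(f"energy rate at t = 5 s: {E_rate[50]:.2f} m/s")
```

    Note that a steady climb combined with airspeed loss can still yield a negative energy rate, which is exactly the situation a wind-shear warning based on this quantity is meant to expose.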

  5. A powerful and flexible approach to the analysis of RNA sequence count data

    PubMed Central

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.

    2011-01-01

    Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data, and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with alternative methods for handling RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900
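
    For readers unfamiliar with the distribution, the sketch below fits a beta-binomial model to overdispersed counts by maximum likelihood. It is a generic illustration, not BBSeq's implementation, and the mean/precision parameterization is an assumption.

```python
# A minimal sketch (not BBSeq itself) of a beta-binomial model for count
# data: y successes out of n trials with mean proportion p and precision s.
# The binomial coefficient is dropped from the log-likelihood because it
# does not depend on the parameters.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def negloglik(params, y, n):
    """Negative beta-binomial log-likelihood; params = (logit p, log s)."""
    p = 1.0 / (1.0 + np.exp(-params[0]))
    s = np.exp(params[1])
    a, b = p * s, (1.0 - p) * s
    return -np.sum(betaln(y + a, n - y + b) - betaln(a, b))

rng = np.random.default_rng(0)
n = np.full(20, 50)
y = rng.binomial(n, rng.beta(4.0, 12.0, size=20))   # overdispersed counts

fit = minimize(negloglik, x0=[0.0, 1.0], args=(y, n), method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(f"estimated mean proportion: {p_hat:.3f}")    # true value is 4/16 = 0.25
```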

  6. Numerical model of solar dynamic radiator for parametric analysis

    NASA Technical Reports Server (NTRS)

    Rhatigan, Jennifer L.

    1989-01-01

    Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass and impact survivability for many combinations of flow-tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed are presented.

  7. Ultrasound hepatic/renal ratio and hepatic attenuation rate for quantifying liver fat content.

    PubMed

    Zhang, Bo; Ding, Fang; Chen, Tian; Xia, Liang-Hua; Qian, Juan; Lv, Guo-Yi

    2014-12-21

    To establish and validate a simple quantitative assessment method for nonalcoholic fatty liver disease (NAFLD) based on a combination of the ultrasound hepatic/renal ratio and the hepatic attenuation rate. A total of 170 subjects were enrolled in this study. All subjects were examined by ultrasound and (1)H-magnetic resonance spectroscopy ((1)H-MRS) on the same day. The ultrasound hepatic/renal echo-intensity ratio and ultrasound hepatic echo-intensity attenuation rate were obtained from ordinary ultrasound images using the MATLAB program. Correlation analysis revealed that the ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate were significantly correlated with (1)H-MRS liver fat content (ultrasound hepatic/renal ratio: r = 0.952, P = 0.000; hepatic echo-intensity attenuation rate: r = 0.850, P = 0.000). The equation for predicting liver fat content by ultrasound (quantitative ultrasound model) is: liver fat content (%) = 61.519 × ultrasound hepatic/renal ratio + 167.701 × hepatic echo-intensity attenuation rate − 26.736. Spearman correlation analysis revealed that the liver fat content ratio of the quantitative ultrasound model was positively correlated with serum alanine aminotransferase, aspartate aminotransferase, and triglyceride, but negatively correlated with high-density lipoprotein cholesterol. Receiver operating characteristic curve analysis revealed that the optimal cut-off point for diagnosing fatty liver was 9.15% in the quantitative ultrasound model. Furthermore, in the quantitative ultrasound model, fatty liver diagnostic sensitivity and specificity were 94.7% and 100.0%, respectively, showing that the quantitative ultrasound model outperformed conventional ultrasound methods and the combined ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate. If the (1)H-MRS liver fat content was below 15%, the sensitivity and specificity of the ultrasound quantitative model would be 81.4% and 100%, still better than the other methods. The quantitative ultrasound model is a simple, low-cost, and sensitive tool that can accurately assess hepatic fat content in clinical practice. It provides an easy and effective parameter for the early diagnosis of mild hepatic steatosis and evaluation of the efficacy of NAFLD treatment.
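
    Since the abstract reports the regression equation and the 9.15% diagnostic cut-off explicitly, applying the model is a one-line computation; the input values below are hypothetical, not patient data.

```python
# Direct application of the quantitative ultrasound model quoted above:
# liver fat content (%) = 61.519 * hepatic/renal ratio
#                       + 167.701 * attenuation rate - 26.736,
# with a 9.15% cut-off for diagnosing fatty liver. Inputs are invented.
def liver_fat_percent(hepatic_renal_ratio, attenuation_rate):
    return 61.519 * hepatic_renal_ratio + 167.701 * attenuation_rate - 26.736

ratio, attenuation = 0.60, 0.012          # hypothetical measurements
fat = liver_fat_percent(ratio, attenuation)
print(f"estimated liver fat: {fat:.1f}% -> "
      f"{'fatty liver' if fat >= 9.15 else 'below diagnostic cut-off'}")
```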

  8. Thorough specification of the neurophysiologic processes underlying behavior and of their manifestation in EEG - demonstration with the go/no-go task.

    PubMed

    Shahaf, Goded; Pratt, Hillel

    2013-01-01

    In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights which emerge from rather widely accepted assumptions regarding neuronal representation. We show that harnessing even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data: the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function, as well as in a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes it is possible to derive simple, yet effective, theory-based EEG markers differentiating normal and ADHD subjects. We conclude by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.

  9. Thickness-shear mode quartz crystal resonators in viscoelastic fluid media

    NASA Astrophysics Data System (ADS)

    Arnau, A.; Jiménez, Y.; Sogorb, T.

    2000-10-01

    An extended Butterworth-Van Dyke (EBVD) model characterizing a thickness-shear mode quartz crystal resonator in a semi-infinite viscoelastic medium is derived through analysis of the lumped-element model described by Cernosek et al. [R. W. Cernosek, S. J. Martin, A. R. Hillman, and H. L. Bandey, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 45, 1399 (1998)]. The EBVD model parameters are related to the viscoelastic properties of the medium. A capacitance added to the motional branch of the EBVD model must be included when the elastic properties of the fluid are considered. From this model, an explicit expression for the frequency shift of a quartz crystal sensor in viscoelastic media is obtained. By combining the expressions for the shifts in the motional series resonant frequency and in the motional resistance, a simple equation relating a single unknown (the loss factor of the fluid) to those measurable quantities has been derived, together with two simple explicit expressions for determining the viscoelastic properties of semi-infinite fluid media. The proposed expression for the parameter Δf/ΔR is compared with the corresponding ratio obtained with data computed from the complete admittance model. Relative errors below 4.5%, 3%, and 1.2% (for ratios of the load surface mechanical impedance to the quartz shear characteristic impedance of 0.3, 0.25, and 0.1, respectively) are obtained over the range of cases analyzed. Experimental data from the literature are used to validate the model.

  10. Radon-222 related influence on ambient gamma dose.

    PubMed

    Melintescu, A; Chambers, S D; Crawford, J; Williams, A G; Zorila, B; Galeriu, D

    2018-04-03

    Ambient gamma dose, radon, and rainfall have been monitored in southern Bucharest, Romania, from 2010 to 2016. The seasonal cycle of background ambient gamma dose peaked between July and October (100-105 nSv h⁻¹), with minimum values in February (75-80 nSv h⁻¹), the time of maximum snow cover. Based on 10 m a.g.l. radon concentrations, the ambient gamma dose increased by around 1 nSv h⁻¹ for every 5 Bq m⁻³ increase in radon. Radon variability attributable to diurnal changes in atmospheric mixing contributed less than 15 nSv h⁻¹ to the overall variability in ambient gamma dose, a factor of 4 more than synoptic-timescale changes in air mass fetch. By contrast, precipitation-related enhancements of the ambient gamma dose were 15-80 nSv h⁻¹. To facilitate routine analysis, and to account in part for occasional equipment failure, an automated method for identifying precipitation spikes in the ambient gamma dose was developed. Lastly, a simple model for predicting rainfall-related enhancement of the ambient gamma dose is tested against rainfall observations from events of contrasting duration and intensity. Results are also compared with those from previously published models of simple and complex formulation. Generally, the model performed very well. When simulations underestimated observations, the absolute difference was typically less than the natural variability in ambient gamma dose arising from atmospheric mixing influences. Consequently, combined use of the automated event detection method and the simple model of this study could enable the ambient gamma dose "attention limit" (which indicates a potential radiological emergency) to be reduced from 200-400% above background to 25-50%. Copyright © 2018 Elsevier Ltd. All rights reserved.
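
    The abstract does not publish the detection algorithm itself, so the sketch below is only one plausible shape for it: flag points that exceed a running median of the recent record by a fixed threshold. The window and threshold values are assumptions.

```python
# A minimal sketch (not the authors' method) of automated spike detection
# in an ambient gamma dose series: compare each value against a running
# median baseline. Window length and threshold are illustrative.
import numpy as np

def find_spikes(dose_nsv_h, window=24, threshold=15.0):
    """Return indices where dose exceeds the running median + threshold."""
    dose = np.asarray(dose_nsv_h, dtype=float)
    spikes = []
    for i in range(len(dose)):
        baseline = np.median(dose[max(0, i - window):i + 1])
        if dose[i] - baseline > threshold:
            spikes.append(i)
    return spikes

rng = np.random.default_rng(1)
series = 90.0 + rng.normal(0.0, 2.0, 200)    # background ~90 nSv/h
series[120:124] += [40.0, 65.0, 30.0, 12.0]  # synthetic rainfall enhancement
print("spike indices:", find_spikes(series))
```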

  11. How well can regional fluxes be derived from smaller-scale estimates?

    NASA Technical Reports Server (NTRS)

    Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.

    1992-01-01

    Regional surface fluxes are essential lower boundary conditions for large-scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary over length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. Scaling up observations limited in space and/or time to represent larger areas has been done by assigning properties to surface classes and combining estimated or calculated fluxes using an area-weighted average. It is not clear that a simple area-weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large-scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface-type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with an aircraft-borne infrared thermometer, and flight-level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy-correlation flux estimation place limits on how well any flux can be known, even in simple geometries.
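
    The area-weighted average discussed above is the baseline against which such questions are posed. As a minimal sketch, with invented lake/land fluxes:

```python
# The simplest scaling-up rule: an area-weighted average of per-class
# fluxes. Class names and flux values are illustrative placeholders.
def area_weighted_flux(class_fluxes, class_fractions):
    """Combine per-surface-class fluxes using areal fractions (sum to 1)."""
    assert abs(sum(class_fractions.values()) - 1.0) < 1e-9
    return sum(class_fluxes[c] * class_fractions[c] for c in class_fluxes)

fluxes = {"lake": 40.0, "land": 120.0}     # sensible heat flux [W/m^2]
fractions = {"lake": 0.35, "land": 0.65}   # areal fractions
print(f"regional flux: {area_weighted_flux(fluxes, fractions):.1f} W/m^2")
```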

  12. Mixed Poisson distributions in exact solutions of stochastic autoregulation models.

    PubMed

    Iyer-Biswas, Srividya; Jayaprakash, C

    2014-11-01

    In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.

  13. Distribution of model uncertainty across multiple data streams

    NASA Astrophysics Data System (ADS)

    Wutzler, Thomas

    2014-05-01

    When confronting biogeochemical models with a diversity of observational data streams, we face the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of the observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can mitigate, and perhaps resolve, the problem of bias export to sparse data streams.

  14. A solution to the surface intersection problem. [Boolean functions in geometric modeling

    NASA Technical Reports Server (NTRS)

    Timer, H. G.

    1977-01-01

    An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.

  15. Improving Secondary Organic Aerosol (SOA) Models using Global Sensitivity Analysis and by Comparison to Chamber Data.

    NASA Astrophysics Data System (ADS)

    Miller, D. O.; Brune, W. H.

    2017-12-01

    Accurate estimation of secondary organic aerosol (SOA) in atmospheric models is a major research challenge due to the complexity of the chemical and physical processes involved in SOA formation and continuous aging. The primary uncertainties of SOA models include those associated with the formation of gas-phase products, the conversion between the gas phase and the particle phase, the aging mechanisms of SOA, and other processes related to heterogeneous and particle-phase reactions. To address this challenge, we use a modular modeling framework that combines both simple and near-explicit gas-phase reactions with a two-dimensional volatility basis set (2D-VBS) to simulate the formation and evolution of SOA. Global sensitivity analysis is used to assess the relative importance of the model input parameters. In addition, the model is compared to measurements from the Focused Isoprene eXperiment at the California Institute of Technology (FIXCIT).

  16. Electrical description of N2 capacitively coupled plasmas with the global model

    NASA Astrophysics Data System (ADS)

    Cao, Ming-Lu; Lu, Yi-Jia; Cheng, Jia; Ji, Lin-Hong; Engineering Design Team

    2016-10-01

    N2 discharges in a commercial capacitively coupled plasma reactor are modelled by a combination of an equivalent circuit and the global model, for a range of gas pressures from 1 to 4 Torr. The ohmic and inductive plasma bulk and the capacitive sheath are represented as LCR elements, with electrical characteristics determined by the plasma parameters. The electron density and electron temperature are obtained from the global model, in which a Maxwellian electron distribution is assumed. Voltages and currents are recorded by a VI probe installed after the match network. Using the measured voltage as an input, the current flowing through the discharge volume is calculated from the electrical model and shows excellent agreement with the measurements. The experimentally verified electrical model provides a simple and accurate description of the relationship between the external electrical parameters and the plasma properties, which can serve as a guideline for process-window planning in industrial applications.
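
    As a minimal sketch of the equivalent-circuit idea, the snippet below computes the current drawn by a series resistor-inductor-capacitor representation of bulk and sheath at a typical RF drive frequency; the element values are hypothetical, not those extracted for the reactor studied.

```python
# Series R-L-C stand-in for the discharge: resistive/inductive bulk plus
# a capacitive sheath, driven at 13.56 MHz. All element values invented.
import numpy as np

def discharge_current(v_rf, freq_hz, r_bulk, l_bulk, c_sheath):
    """Complex current through a series R-L-C equivalent circuit."""
    w = 2.0 * np.pi * freq_hz
    z = r_bulk + 1j * w * l_bulk + 1.0 / (1j * w * c_sheath)
    return v_rf / z

i = discharge_current(v_rf=200.0, freq_hz=13.56e6,
                      r_bulk=30.0, l_bulk=0.5e-6, c_sheath=150e-12)
print(f"|I| = {abs(i):.2f} A, phase = {np.angle(i, deg=True):.1f} deg")
```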

  17. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampling data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial-impedance spectra presented by the resistor, inductor, resistor connected in parallel to a capacitor, and resistor connected in parallel to an inductor. The adequacy of the model is determined by using a simple artificial-intelligence function, which is applied to the output function of the Levenberg-Marquardt module. From the iteration of model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
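
    The parameter-optimization step can be illustrated compactly: fit a small equivalent circuit (here a resistor in series with a parallel RC element, one of the partial impedances named above) to a spectrum by complex least squares. The automatic model-building loop of the paper is omitted, and the data are synthetic.

```python
# Complex least-squares fit of Z(w) = R0 + R1 / (1 + j*w*R1*C1), i.e. a
# resistor in series with a parallel R-C element. Synthetic noisy data.
import numpy as np
from scipy.optimize import least_squares

def model_z(params, w):
    r0, r1, c1 = params
    return r0 + r1 / (1.0 + 1j * w * r1 * c1)

def residuals(params, w, z_meas):
    diff = model_z(params, w) - z_meas
    return np.concatenate([diff.real, diff.imag])   # complex -> real pairs

w = np.logspace(1, 6, 60)                            # angular frequency [rad/s]
rng = np.random.default_rng(2)
z_meas = (model_z([10.0, 100.0, 1e-6], w)
          + rng.normal(0, 0.2, w.size) + 1j * rng.normal(0, 0.2, w.size))

fit = least_squares(residuals, x0=[5.0, 50.0, 1e-7], args=(w, z_meas))
print("fitted [R0, R1, C1]:", fit.x)
```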

  18. Cost-effectiveness analysis of malaria rapid diagnostic test incentive schemes for informal private healthcare providers in Myanmar.

    PubMed

    Chen, Ingrid T; Aung, Tin; Thant, Hnin Nwe Nwe; Sudhinaraset, May; Kahn, James G

    2015-02-05

    The emergence of artemisinin-resistant Plasmodium falciparum parasites in Southeast Asia threatens global malaria control efforts. One strategy to counter this problem is a subsidy of malaria rapid diagnostic tests (RDTs) and artemisinin-based combination therapy (ACT) within the informal private sector, where the majority of malaria care in Myanmar is provided. A study in Myanmar evaluated the effectiveness of financial incentives vs information, education and counselling (IEC) in driving the proper use of subsidized malaria RDTs among informal private providers. This cost-effectiveness analysis compares intervention options. A decision tree was constructed in a spreadsheet to estimate the incremental cost-effectiveness ratios (ICERs) among four strategies: no intervention, simple subsidy, subsidy with financial incentives, and subsidy with IEC. Model inputs included programmatic costs (in dollars), malaria epidemiology and observed study outcomes. Data sources included expenditure records, study data and scientific literature. Model outcomes included the proportion of properly and improperly treated individuals with and without P. falciparum malaria, and associated disability-adjusted life years (DALYs). Results are reported as ICERs in US dollars per DALY averted. One-way sensitivity analysis assessed how outcomes depend on uncertainty in inputs. ICERs from the least to most expensive intervention are: $1,169/DALY averted for simple subsidy vs no intervention, $185/DALY averted for subsidy with financial incentives vs simple subsidy, and $200/DALY averted for a subsidy with IEC vs subsidy with financial incentives. Due to decreasing ICERs, each strategy was also compared to no intervention. The subsidy with IEC was the most favourable, costing $639/DALY averted compared with no intervention. One-way sensitivity analysis shows that ICERs are most affected by programme costs, RDT uptake, treatment-seeking behaviour, and the prevalence and virulence of non-malarial fevers. In conclusion, private provider subsidies with IEC or a combination of IEC and financial incentives may be a good investment for malaria control.
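
    The ICER arithmetic itself is straightforward, as the sketch below shows; the cost and DALY numbers are placeholders chosen only to be roughly consistent with the reported ratios, not the study's actual inputs.

```python
# Incremental cost-effectiveness ratio: incremental cost divided by
# incremental DALYs averted, each strategy vs the next-cheapest one.
# Costs and DALYs below are hypothetical placeholders.
def icer(cost_new, cost_old, dalys_new, dalys_old):
    """ICER in $ per DALY averted."""
    return (cost_new - cost_old) / (dalys_new - dalys_old)

strategies = [  # (name, programme cost [$], DALYs averted) - illustrative
    ("no intervention",            0.0,   0.0),
    ("simple subsidy",        117000.0, 100.0),
    ("subsidy + incentives",  136500.0, 205.0),
]
for (n1, c1, d1), (n2, c2, d2) in zip(strategies, strategies[1:]):
    print(f"{n2} vs {n1}: ${icer(c2, c1, d2, d1):,.0f}/DALY averted")
```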

  19. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305

  1. APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study

    NASA Astrophysics Data System (ADS)

    Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak

    2017-04-01

    In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors in the channel. However, an erroneous packet may contain both erroneous and correct bits, and hence may still carry useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme for correcting errors in a transmitted packet, in which two received copies are XORed to obtain the locations of the erroneous bits. The packet is then corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, designed primarily for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC cannot correct double-bit errors that occur in the same bit location in both erroneous copies of the packet. A hybrid technique is proposed to exploit the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC to the Gilbert two-state model is studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than PC.
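
    A minimal sketch of the PC step described above: XOR two copies to find the disagreeing bit positions, then search bit inversions at those positions until an integrity check passes. The validity check here is a toy stand-in for a real CRC, and the proposed APC-PC hybrid logic is not implemented.

```python
# Packet combining sketch: positions where the two received copies differ
# are candidate error locations; try bit assignments there until the
# packet passes an integrity check (a toy stand-in for a CRC).
from itertools import product

def pc_correct(copy1, copy2, is_valid):
    """Try inverting bits at positions where the two copies disagree."""
    diff = [i for i in range(len(copy1)) if copy1[i] != copy2[i]]
    for choice in product([0, 1], repeat=len(diff)):
        candidate = list(copy1)
        for pos, bit in zip(diff, choice):
            candidate[pos] = bit
        if is_valid(candidate):
            return candidate
    return None  # e.g. double-bit error at the same position in both copies

packet = [1, 0, 1, 1, 0, 0, 1, 0]
rx1 = packet.copy(); rx1[2] ^= 1     # bit error in copy 1
rx2 = packet.copy(); rx2[5] ^= 1     # bit error in copy 2
print(pc_correct(rx1, rx2, lambda c: c == packet))   # toy validity check
```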

  2. A comparison of simple global kinetic models for coal devolatilization with the CPD model

    DOE PAGES

    Richards, Andrew P.; Fletcher, Thomas H.

    2016-08-01

    Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over the ranges of temperature and heating rate applicable to the furnace of interest. In this paper, six different simple model forms are compared with predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms include three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of the total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages of each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies appears to give the best agreement with CPD model predictions (with the fewest tunable parameters).
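
    As a minimal sketch of the simplest model form in such comparisons, the snippet below integrates a one-step, first-order devolatilization rate with an Arrhenius rate constant through constant heating ramps spanning the range studied; the kinetic parameters are generic illustrations, not fitted values from the paper.

```python
# One-step, first-order devolatilization: dV/dt = A*exp(-E/(R*T))*(V_ult - V),
# integrated with explicit Euler through a heating ramp capped at 1600 K.
# Kinetic parameters are generic illustrations.
import numpy as np

A, E = 2.0e5, 7.4e4   # pre-exponential [1/s], activation energy [J/mol]
R = 8.314             # gas constant [J/(mol K)]
V_ULT = 0.55          # ultimate volatiles yield (mass fraction)

def volatiles_yield(heating_rate, t_end=5e-3, dt=1e-6, t0=300.0):
    """Integrate the one-step rate law over a heating ramp [K/s]."""
    v, temp = 0.0, t0
    for _ in range(int(t_end / dt)):
        k = A * np.exp(-E / (R * temp))
        v += k * (V_ULT - v) * dt
        temp = min(temp + heating_rate * dt, 1600.0)   # ramp, capped
    return v

for rate in (5e3, 1e5, 1e6):   # K/s, spanning the range compared above
    print(f"heating rate {rate:.0e} K/s -> V = {volatiles_yield(rate):.3f}")
```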

  3. Low-Dose Irradiation Enhances Gene Targeting in Human Pluripotent Stem Cells.

    PubMed

    Hatada, Seigo; Subramanian, Aparna; Mandefro, Berhan; Ren, Songyang; Kim, Ho Won; Tang, Jie; Funari, Vincent; Baloh, Robert H; Sareen, Dhruv; Arumugaswami, Vaithilingaraja; Svendsen, Clive N

    2015-09-01

    Human pluripotent stem cells (hPSCs) are now being used for both disease modeling and cell therapy; however, efficient homologous recombination (HR) is often crucial to develop isogenic control or reporter lines. We showed that limited low-dose irradiation (LDI) using either γ-ray or x-ray exposure (0.4 Gy) significantly enhanced HR frequency, possibly through induction of DNA repair/recombination machinery including ataxia-telangiectasia mutated, histone H2A.X and RAD51 proteins. LDI could also increase HR efficiency by more than 30-fold when combined with the targeting tools zinc finger nucleases, transcription activator-like effector nucleases, and clustered regularly interspaced short palindromic repeats. Whole-exome sequencing confirmed that the LDI administered to hPSCs did not induce gross genomic alterations or affect cellular viability. Irradiated and targeted lines were karyotypically normal and made all differentiated lineages that continued to express green fluorescent protein targeted at the AAVS1 locus. This simple method allows higher throughput of new, targeted hPSC lines that are crucial to expand the use of disease modeling and to develop novel avenues of cell therapy. The simple and relevant technique described in this report uses a low level of radiation to increase desired gene modifications in human pluripotent stem cells by an order of magnitude. This higher efficiency permits greater throughput with reduced time and cost. The low level of radiation also greatly increased the recombination frequency when combined with developed engineered nucleases. Critically, the radiation did not lead to increases in DNA mutations or to reductions in overall cellular viability. This novel technique enables not only the rapid production of disease models using human stem cells but also the possibility of treating genetically based diseases by correcting patient-derived cells. ©AlphaMed Press.

  4. EXTINCTION AND DUST GEOMETRY IN M83 H II REGIONS: AN HUBBLE SPACE TELESCOPE/WFC3 STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guilin; Calzetti, Daniela; Hong, Sungryong

    We present Hubble Space Telescope/WFC3 narrow-band imaging of the starburst galaxy M83 targeting the hydrogen recombination lines (Hβ, Hα, and Paβ), which we use to investigate the dust extinction in the H II regions. We derive extinction maps with 6 pc spatial resolution from two combinations of hydrogen lines (Hα/Hβ and Hα/Paβ), and show that the longer wavelengths probe larger optical depths, with A_V values larger by ≳1 mag than those derived from the shorter wavelengths. This difference leads to a factor ≳2 discrepancy in the extinction-corrected Hα luminosity, a significant effect when studying extragalactic H II regions. By comparing these observations to a series of simple models, we conclude that a large diversity of absorber/emitter geometric configurations can account for the data, implying a more complex physical structure than the classical foreground "dust screen" assumption. However, most data points are bracketed by the foreground screen and a model where dust and emitters are uniformly mixed. When averaged over large (≳100-200 pc) scales, the extinction becomes consistent with a "dust screen", suggesting that other geometries tend to be restricted to more local scales. Moreover, the extinction in any region can be described by a combination of the foreground screen and the uniform mixture model, with weights of 1/3 and 2/3, respectively, in the center (≲2 kpc), and 2/3 and 1/3 for the rest of the disk. This simple prescription significantly improves the accuracy of dust extinction corrections and can be especially useful for pixel-based analyses of galaxies similar to M83.
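
    For the foreground-screen case, converting a measured line ratio to extinction is a standard computation; the sketch below uses commonly quoted Milky-Way-like extinction-curve coefficients, which are assumptions rather than the paper's adopted curve.

```python
# Foreground "dust screen" extinction from the Balmer decrement, assuming
# an intrinsic (case B) Halpha/Hbeta ratio of 2.86 and commonly used
# Milky-Way-like curve coefficients (approximate values, illustrative).
import math

K_HBETA, K_HALPHA, R_V = 3.61, 2.53, 3.1

def av_from_balmer(ratio_obs, ratio_intrinsic=2.86):
    """A_V for a foreground screen from an observed Halpha/Hbeta ratio."""
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10(ratio_obs / ratio_intrinsic)
    return R_V * ebv

print(f"Halpha/Hbeta = 4.0 -> A_V = {av_from_balmer(4.0):.2f} mag")
```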

  5. OpenACC directive-based GPU acceleration of an implicit reconstructed discontinuous Galerkin method for compressible flows on 3D unstructured grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lou, Jialin; Xia, Yidong; Luo, Lixiang

    2016-09-01

    In this study, we use a combination of modeling techniques to describe the relationship between the fracture radius that might be achieved in a hypothetical enhanced geothermal system (EGS) and the drilling distance required to create and access those fractures. We use a combination of commonly applied analytical solutions for heat transport in parallel fractures and 3D finite-element method models of more realistic heat extraction geometries. For a conceptual model involving multiple parallel fractures developed perpendicular to an inclined or horizontal borehole, calculations demonstrate that EGS will likely require very large fractures, of greater than 300 m radius, to keep interfracture drilling distances to ~10 km or less. As drilling distances are generally inversely proportional to the square of the fracture radius, drilling costs quickly escalate as the fracture radius decreases. It is important to know, however, whether fracture spacing will be dictated by thermal or mechanical considerations, as the relationship between drilling distance and number of fractures is quite different in each case. Information about the likelihood of hydraulically creating very large fractures comes primarily from petroleum recovery industry data describing hydraulic fractures in shale. Those data suggest that fractures with radii on the order of several hundred meters may, indeed, be possible. The results of this study demonstrate that relatively simple calculations can be used to estimate primary design constraints on a system, particularly regarding the relationship between generated fracture radius and the total length of drilling needed in the fracture creation zone. Comparison of the numerical simulations of more realistic geometries than those addressed by the analytical solutions suggests that simple proportionalities can readily be derived for a particular flow field.

  6. Denaturation of RNA secondary and tertiary structure by urea: simple unfolded state models and free energy parameters account for measured m-values

    PubMed Central

    Lambert, Dominic; Draper, David E.

    2012-01-01

    To investigate the mechanism by which urea destabilizes RNA structure, urea-induced unfolding of four different RNA secondary and tertiary structures was quantified in terms of an m-value, the rate at which the free energy of unfolding changes with urea molality. From literature data and our osmometric study of a backbone analog, we derived average interaction potentials (per Å2 of solvent accessible surface) between urea and three kinds of RNA surfaces: phosphate, ribose, and base. Estimates of the increases in solvent accessible surface areas upon RNA denaturation were based on a simple model of unfolded RNA as a combination of helical and single strand segments. These estimates, combined with the three interaction potentials and a term to account for urea interactions with released ions, yield calculated m-values in good agreement with experimental values (200 mm monovalent salt). Agreement was obtained only if single-stranded RNAs were modeled in a highly stacked, A form conformation. The primary driving force for urea induced denaturation is the strong interaction of urea with the large surface areas of bases that become exposed upon denaturation of either RNA secondary or tertiary structure, though urea interactions with backbone and released ions may account for up to a third of the m-value. Urea m-values for all four RNA are salt-dependent, which we attribute to an increased extension (or decreased charge density) of unfolded RNAs with increased urea concentration. The sensitivity of the urea m-value to base surface exposure makes it a potentially useful probe of the conformations of RNA unfolded states. PMID:23088364

  7. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.

  8. Automated palpation for breast tissue discrimination based on viscoelastic biomechanical properties.

    PubMed

    Tsukune, Mariko; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, G Masakatsu

    2015-05-01

    Accurate, noninvasive methods are sought for breast tumor detection and diagnosis. In particular, a need for noninvasive techniques that measure both the nonlinear elastic and the viscoelastic properties of breast tissue has been identified. For diagnostic purposes, it is important to select a nonlinear viscoelastic model with a small number of parameters that correlate highly with histological structure. However, the combination of conventional viscoelastic models with nonlinear elastic models requires a large number of parameters. A nonlinear viscoelastic model of breast tissue based on a simple equation with few parameters was therefore developed and tested. The nonlinear viscoelastic properties of soft tissues in porcine breast were measured experimentally using fresh ex vivo samples. Robotic palpation was used for measurements employed in a finite element model. These measurements were used to calculate nonlinear viscoelastic parameters for fat, fibroglandular breast parenchyma and muscle. The ability of these parameters to distinguish the tissue types was evaluated in a two-step statistical analysis that included Holm's pairwise test. The discrimination error rate of a set of parameters was evaluated by the Mahalanobis distance. Ex vivo testing in porcine breast revealed significant differences in the nonlinear viscoelastic parameters among combinations of the three tissue types. The discrimination error rate was low among all tested combinations of the three tissue types. Although tissue discrimination was not achieved using only a single nonlinear viscoelastic parameter, a set of four nonlinear viscoelastic parameters was able to reliably and accurately discriminate fat, breast fibroglandular tissue and muscle.
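
    As a minimal sketch of the discrimination step, the snippet below classifies a parameter vector by its Mahalanobis distance to each tissue class; the four-parameter vectors are synthetic stand-ins for the measured nonlinear viscoelastic parameters.

```python
# Mahalanobis-distance classification of a four-parameter sample against
# per-tissue class statistics. All parameter values are synthetic.
import numpy as np

def mahalanobis(x, mean, cov):
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

rng = np.random.default_rng(3)
classes = {}
for name, center in [("fat",            [1.0, 0.2, 3.0, 0.5]),
                     ("fibroglandular", [2.5, 0.6, 5.0, 1.2]),
                     ("muscle",         [4.0, 1.1, 8.0, 2.0])]:
    samples = rng.normal(center, 0.3, size=(50, 4))  # 4 parameters per class
    classes[name] = (samples.mean(axis=0), np.cov(samples.T))

unknown = np.array([2.4, 0.55, 5.2, 1.1])            # hypothetical sample
scores = {n: mahalanobis(unknown, m, c) for n, (m, c) in classes.items()}
print("classified as:", min(scores, key=scores.get))
```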

  9. Modeling the Benchmark Active Control Technology Wind-Tunnel Model for Active Control Design Applications

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1998-01-01

    This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.

  10. Quasi-brittle damage modeling based on incremental energy relaxation combined with a viscous-type regularization

    NASA Astrophysics Data System (ADS)

    Langenfeld, K.; Junker, P.; Mosler, J.

    2018-05-01

    This paper deals with a constitutive model suitable for the analysis of quasi-brittle damage in structures. The model is based on incremental energy relaxation combined with a viscous-type regularization. A similar approach—which also represents the inspiration for the improved model presented in this paper—was recently proposed in Junker et al. (Contin Mech Thermodyn 29(1):291-310, 2017). Within this work, the model introduced in Junker et al. (2017) is critically analyzed first. This analysis leads to an improved model which shows the same features as that in Junker et al. (2017), but which (i) eliminates unnecessary model parameters, (ii) can be better interpreted from a physics point of view, (iii) can capture a fully softened state (zero stresses), and (iv) is characterized by a very simple evolution equation. In contrast to the cited work, this evolution equation is (v) integrated fully implicitly and (vi) the resulting time-discrete evolution equation can be solved analytically providing a numerically efficient closed-form solution. It is shown that the final model is indeed well-posed (i.e., its tangent is positive definite). Explicit conditions guaranteeing this well-posedness are derived. Furthermore, by additively decomposing the stress rate into deformation- and purely time-dependent terms, the functionality of the model is explained. Illustrative numerical examples confirm the theoretical findings.

  11. A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
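
    The Rand measure at the heart of the fusion model counts pixel pairs on which two label fields agree; a brute-force sketch (adequate only for tiny label maps):

```python
# Rand index between two label fields: the fraction of pixel pairs that
# are grouped consistently (same label in both, or different in both).
from itertools import combinations

def rand_index(labels_a, labels_b):
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)
        total += 1
    return agree / total

seg1 = [0, 0, 1, 1, 2, 2]   # two toy segmentations of six pixels
seg2 = [0, 0, 1, 2, 2, 2]
print(f"Rand index: {rand_index(seg1, seg2):.3f}")
```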

  12. Innovative GOCI algorithm to derive turbidity in highly turbid waters: a case study in the Zhejiang coastal area.

    PubMed

    Qiu, Zhongfeng; Zheng, Lufei; Zhou, Yan; Sun, Deyong; Wang, Shengqiang; Wu, Wei

    2015-09-21

    An innovative algorithm is developed and validated to estimate the turbidity in the Zhejiang coastal area (highly turbid waters) using data from the Geostationary Ocean Color Imager (GOCI). First, satellite-ground synchronous data (n = 850) were collected from 2014 to 2015 using 11 buoys equipped with a Yellow Spring Instrument (YSI) multi-parameter sonde capable of taking hourly turbidity measurements. The GOCI-derived Rayleigh-corrected reflectance (R(rc)) was used in place of the widely used remote sensing reflectance (R(rs)) to model turbidity. Various band characteristics, including single bands, band ratios, band subtractions, and selected band combinations, were analyzed to identify correlations with turbidity. The results indicated that band 6 had the closest relationship to turbidity; however, the model combining bands 3 and 6 simulated turbidity most accurately (R(2) = 0.821, p < 0.0001), while the model based on band 6 alone performed almost as well (R(2) = 0.749, p < 0.0001). An independent validation data set was used to evaluate the performance of both models, and mean relative error values of 42.5% and 51.2% were obtained for the combined model and the band 6 model, respectively. The accurate performance of the proposed models indicates that the use of R(rc) to model turbidity in highly turbid coastal waters is feasible. As an example, the developed model was applied to 8 hourly GOCI images from 30 December 2014. Three cross sections were selected to identify the spatiotemporal variation of turbidity in the study area. Turbidity generally decreased from near-shore to offshore and from morning to afternoon. Overall, the findings of this study provide a simple and practical method, based on GOCI data, to estimate turbidity in highly turbid coastal waters at high temporal resolution.
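
    A minimal sketch of the band-combination comparison: fit turbidity against band 6 alone and against bands 3 and 6 jointly by linear least squares, then compare R². The reflectances and turbidities are synthetic, and the linear model form is an assumption (the paper's combined model need not be linear).

```python
# Compare a single-band and a two-band linear turbidity model on
# synthetic reflectance data; illustrates the comparison, not the paper's
# fitted model.
import numpy as np

rng = np.random.default_rng(4)
n = 200
b3 = rng.uniform(0.01, 0.08, n)    # Rrc band 3 (synthetic)
b6 = rng.uniform(0.02, 0.15, n)    # Rrc band 6 (synthetic)
turb = 5.0 + 40.0 * b6 + 120.0 * b6 * b3 + rng.normal(0, 0.5, n)

def fit_r2(X, y):
    """Ordinary least squares with intercept; return R^2."""
    X1 = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - resid.var() / y.var()

print(f"band 6 only:   R^2 = {fit_r2(b6[:, None], turb):.3f}")
print(f"bands 3 and 6: R^2 = {fit_r2(np.column_stack([b3, b6]), turb):.3f}")
```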

  13. Family practitioners' diagnostic decision-making processes regarding patients with respiratory tract infections: an observational study.

    PubMed

    Fischer, Thomas; Fischer, Susanne; Himmel, Wolfgang; Kochen, Michael M; Hummers-Pradier, Eva

    2008-01-01

    The influence of patient characteristics on family practitioners' (FPs') diagnostic decision making has mainly been investigated using indirect methods such as vignettes or questionnaires. Direct observation, borrowed from social and cultural anthropology, may be an alternative method for describing FPs' real-life behavior and may help in gaining insight into how FPs diagnose respiratory tract infections, which are frequent in primary care. To clarify FPs' diagnostic processes when treating patients suffering from symptoms of respiratory tract infection. This direct observation study was performed in 30 family practices using a checklist for patient complaints, history taking, physical examination, and diagnoses. The influence of patients' symptoms and complaints on the FPs' physical examination and diagnosis was calculated by logistic regression analyses. Dummy variables based on combinations of symptoms and complaints were constructed and tested against saturated (full) and backward regression models. In total, 273 patients (median age 37 years, 51% women) were included. The median number of symptoms described was 4 per patient, and most information was provided at the patients' own initiative. Multiple logistic regression analysis showed a strong association between patients' complaints and the physical examination. Frequent diagnoses were upper respiratory tract infection (URTI)/common cold (43%), bronchitis (26%), sinusitis (12%), and tonsillitis (11%). There were no statistically significant differences between "simple heuristic" models and saturated regression models in the diagnoses of bronchitis, sinusitis, and tonsillitis, indicating that simple heuristics are probably used by the FPs, whereas "URTI/common cold" was better explained by the full model. FPs tended to make their diagnosis based on a few patient symptoms and a limited physical examination. Simple heuristic models were almost as powerful in explaining most diagnoses as saturated models. Direct observation allowed for the study of decision making under real conditions, yielding both quantitative data and "qualitative" information about the FPs' performance. It is important for investigators to be aware of the specific disadvantages of the method (e.g., a possible observer effect).

  14. Why the Long Face? The Mechanics of Mandibular Symphysis Proportions in Crocodiles

    PubMed Central

    Walmsley, Christopher W.; Smits, Peter D.; Quayle, Michelle R.; McCurry, Matthew R.; Richards, Heather S.; Oldfield, Christopher C.; Wroe, Stephen; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Background Crocodilians exhibit a spectrum of rostral shape from long snouted (longirostrine), through to short snouted (brevirostrine) morphologies. The proportional length of the mandibular symphysis correlates consistently with rostral shape, forming as much as 50% of the mandible’s length in longirostrine forms, but 10% in brevirostrine crocodilians. Here we analyse the structural consequences of an elongate mandibular symphysis in relation to feeding behaviours. Methods/Principal Findings Simple beam and high resolution Finite Element (FE) models of seven species of crocodile were analysed under loads simulating biting, shaking and twisting. Using beam theory, we statistically compared multiple hypotheses of which morphological variables should control the biomechanical response. Brevi- and mesorostrine morphologies were found to consistently outperform longirostrine types when subject to equivalent biting, shaking and twisting loads. The best predictors of performance for biting and twisting loads in FE models were overall length and symphyseal length respectively; for shaking loads symphyseal length and a multivariate measurement of shape (PC1, which is strongly but not exclusively correlated with symphyseal length) were equally good predictors. Linear measurements were better predictors than multivariate measurements of shape in biting and twisting loads. For both biting and shaking loads but not for twisting, simple beam models agree with best performance predictors in FE models. Conclusions/Significance Combining beam and FE modelling allows a priori hypotheses about the importance of morphological traits on biomechanics to be statistically tested. Short mandibular symphyses perform well under loads used for feeding upon large prey, but elongate symphyses incur high strains under equivalent loads, underlining the structural constraints to prey size in the longirostrine morphotype. The biomechanics of the crocodilian mandible are largely consistent with beam theory and can be predicted from simple morphological measurements, suggesting that crocodilians are a useful model for investigating the palaeobiomechanics of other aquatic tetrapods. PMID:23342027
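
    In the spirit of the beam-theory comparison above, the sketch below idealises a mandible as a cantilever of elliptical cross-section and estimates peak bending stress under a vertical bite load. It is a generic beam calculation, not the paper's model; all dimensions are invented for illustration.

```python
import numpy as np

# Hedged beam-theory sketch: sigma = M*c/I for an elliptical section.
# width/depth are the full section dimensions; force acts at the free end.
def bending_stress(force, length, width, depth):
    I = np.pi * width * depth**3 / 64.0   # second moment of area, ellipse
    M = force * length                    # bending moment at the fixed end
    return M * (depth / 2.0) / I

# A long, slender symphysis raises stress sharply (illustrative numbers):
print(bending_stress(1000.0, 0.6, 0.04, 0.03))  # longirostrine-like geometry
print(bending_stress(1000.0, 0.3, 0.08, 0.07))  # brevirostrine-like geometry
```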

  15. Investigation on the correlation between energy deposition and clustered DNA damage induced by low-energy electrons.

    PubMed

    Liu, Wei; Tan, Zhenyu; Zhang, Liming; Champion, Christophe

    2018-05-01

    This study presents the correlation between energy deposition and clustered DNA damage, based on a Monte Carlo simulation of the spectrum of direct DNA damage induced by low-energy electrons, including dissociative electron attachment. Clustered DNA damage is classified as simple and complex in terms of the combination of single-strand breaks (SSBs) or double-strand breaks (DSBs) and adjacent base damage (BD). The results show that the energy depositions associated with about 90% of total clustered DNA damage are below 150 eV. The simple clustered DNA damage, which is constituted of the combination of SSBs and adjacent BD, is dominant, accounting for 90% of all clustered DNA damage, and the spectra of the energy depositions correlating with them are similar for different primary energies. One type of simple clustered DNA damage is the combination of an SSB and 1-5 BD, which is denoted as SSB + BD. The average contribution of SSB + BD to total simple clustered DNA damage reaches up to about 84% for the considered primary energies. In all forms of SSB + BD, the SSB + BD including only one base damage is dominant (above 80%). In addition, for the considered primary energies, there is no obvious difference between the average energy depositions for a fixed complexity of SSB + BD determined by the number of base damage, but average energy depositions increase with the complexity of SSB + BD. In the complex clustered DNA damage constituted by the combination of DSBs and BD around them, a relatively simple type is a DSB combining adjacent BD, marked as DSB + BD, and it makes a substantial contribution (on average up to about 82%). The spectrum of DSB + BD is given mainly by the DSB in combination with different numbers of base damage, from 1 to 5. For the considered primary energies, the DSB combined with only one base damage contributes about 83% of total DSB + BD, and the average energy deposition is about 106 eV. However, the energy deposition increases with the complexity of clustered DNA damage, and therefore, the clustered DNA damage with high complexity still needs to be considered in the study of radiation biological effects, in spite of its small contribution to all clustered DNA damage.

  16. Economic and environmental optimization of waste treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Münster, M.; Ravn, H.; Hedegaard, K.

    2015-04-15

    Highlights: • Optimizing waste treatment by incorporating LCA methodology. • Applying different objectives (minimizing costs or GHG emissions). • Prioritizing multiple objectives given different weights. • Optimum depends on objective and assumed displaced electricity production. - Abstract: This article presents the new systems engineering optimization model, OptiWaste, which incorporates a life cycle assessment (LCA) methodology and captures important characteristics of waste management systems. As part of the optimization, the model identifies the most attractive waste management options. The model renders it possible to apply different optimization objectives such as minimizing costs or greenhouse gas emissions or to prioritize several objectives given different weights. A simple illustrative case is analysed, covering alternative treatments of one tonne of residual household waste: incineration of the full amount or sorting out organic waste for biogas production for either combined heat and power generation or as fuel in vehicles. The case study illustrates that the optimal solution depends on the objective and assumptions regarding the background system – illustrated with different assumptions regarding displaced electricity production. The article shows that it is feasible to combine LCA methodology with optimization. Furthermore, it highlights the need for including the integrated waste and energy system into the model.
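
    A minimal sketch of the weighted multi-objective idea described above: a linear program allocates one tonne of residual waste across treatment routes, minimising a weighted sum of cost and GHG objectives. The route set, cost figures, and emission factors are placeholders, not values from the OptiWaste model.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: three illustrative routes (incineration, biogas->CHP,
# biogas->vehicle fuel) with invented cost and net-GHG coefficients.
cost = np.array([50.0, 70.0, 65.0])     # EUR per tonne (illustrative)
ghg = np.array([120.0, -40.0, -60.0])   # kg CO2-eq per tonne, net of displaced energy

def optimise(weight_cost, weight_ghg):
    c = weight_cost * cost + weight_ghg * ghg    # weighted-sum objective
    A_eq = np.ones((1, 3))                       # treat exactly one tonne
    res = linprog(c, A_eq=A_eq, b_eq=[1.0], bounds=[(0.0, 1.0)] * 3)
    return res.x

print(optimise(1.0, 0.0))  # pure cost minimisation
print(optimise(0.0, 1.0))  # pure GHG minimisation
```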

  17. Winter wheat quality monitoring and forecasting system based on remote sensing and environmental factors

    NASA Astrophysics Data System (ADS)

    Haiyang, Yu; Yanmei, Liu; Guijun, Yang; Xiaodong, Yang; Dong, Ren; Chenwei, Nie

    2014-03-01

    To achieve dynamic winter wheat quality monitoring and forecasting over large regions, the objective of this study was to design and develop a winter wheat quality monitoring and forecasting system using a remote sensing index and environmental factors. The winter wheat quality trend was forecasted before the harvest, and quality was monitored after the harvest. The traditional quality-vegetation-index remote sensing monitoring and forecasting models were improved. Combined with latitude information, the vegetation index was used to estimate agronomy parameters related to winter wheat quality in the early stages, in order to forecast the quality trend. A combination of rainfall in May, temperature in May, illumination in late May, the soil available nitrogen content and other environmental factors established the quality monitoring model. Compared with a simple quality-vegetation index, the remote sensing monitoring and forecasting models used in this system achieved substantially improved accuracy. Winter wheat quality was monitored and forecasted based on the above models, and the system was implemented using WebGIS technology. Finally, the operation of the winter wheat quality monitoring system was demonstrated for Beijing in 2010, with the monitoring and forecasting results output as thematic maps.

  18. Connectivity in Agricultural Landscapes; do we Need More than a DEM?

    NASA Astrophysics Data System (ADS)

    Foster, I.; Boardman, J.; Favis-Mortlock, D.

    2017-12-01

    DEMs at a scale of metres to kilometres form the basis for many erosion models, in part because such data have long been available and published by national mapping agencies, such as the UK Ordnance Survey, and also because modelling gradient and flow pathways relative to topography is easily executed within a GIS. That most landscape connectivity is not driven by topography is a simple point that modellers appear reluctant to accept, or find too challenging to model, yet there is an urgent need to rethink how landscapes function and what drives connectivity laterally and longitudinally at different spatial and temporal scales within agricultural landscapes. Landscape connectivity is driven by a combination of natural and anthropogenic factors that can enhance, reduce or eliminate connectivity at different timescales. In this paper we explore the use of a range of data sources that can be used to build a detailed picture of landscape connectivity at different scales. From a number of case studies we combine the use of maps, lidar data, field mapping, lake and floodplain coring, fingerprinting and process monitoring to identify lateral and longitudinal connectivity and the way in which these have changed through time.
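
    For context, the sketch below shows the kind of DEM-only connectivity step the abstract critiques: a minimal D8 flow-direction computation on a gridded DEM. Real erosion and connectivity models would need to add flow accumulation and the anthropogenic features (roads, ditches, field boundaries) discussed above; the grid handling here is illustrative.

```python
import numpy as np

# Hedged sketch: D8 flow direction, i.e. the index (0-7) of the steepest
# downslope neighbour of each interior cell; -1 marks pits and flat cells.
def d8_flow_direction(dem):
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    nrow, ncol = dem.shape
    direction = np.full(dem.shape, -1)
    for i in range(1, nrow - 1):
        for j in range(1, ncol - 1):
            # drop per unit distance toward each of the eight neighbours
            drops = [(dem[i, j] - dem[i + di, j + dj]) / np.hypot(di, dj)
                     for di, dj in offsets]
            k = int(np.argmax(drops))
            if drops[k] > 0:
                direction[i, j] = k
    return direction
```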

  19. High-reliability release mechanism

    NASA Technical Reports Server (NTRS)

    Paradise, J. J.

    1971-01-01

    A release mechanism employing a simple clevis fitting in combination with two pin-pullers achieves a high degree of reliability through active mechanical redundancy. The mechanism releases solar arrays. It is simple and inexpensive, performs effectively, and adapts to other release-system applications with a variety of pin-puller devices.

  20. Artificial hair cell integrated with an artificial neuron: Interplay between criticality and excitability

    NASA Astrophysics Data System (ADS)

    Lee, Woo Seok; Jeong, Wonhee; Ahn, Kang-Hun

    2014-12-01

    We provide a simple dynamical model of a hair cell with an afferent neuron where the spectral and the temporal responses are controlled by the hair bundle's criticality and the neuron's excitability. To demonstrate that these parameters, indeed, specify the resolution of the sound encoding, we fabricate a neuromorphic device that models the hair cell bundle and its afferent neuron. Then, we show that the neural response of the biomimetic system encodes sounds with either high temporal or spectral resolution or with a combination of both resolutions. Our results suggest that the hair cells may easily specialize to fulfil various roles in spite of their similar physiological structures.

  1. Magnetization Reversal of Nanoscale Islands: How Size and Shape Affect the Arrhenius Prefactor

    NASA Astrophysics Data System (ADS)

    Krause, S.; Herzog, G.; Stapelfeldt, T.; Berbil-Bautista, L.; Bode, M.; Vedmedenko, E. Y.; Wiesendanger, R.

    2009-09-01

    The thermal switching behavior of individual in-plane magnetized Fe/W(110) nanoislands is investigated by a combined study of variable-temperature spin-polarized scanning tunneling microscopy and Monte Carlo simulations. Even for islands consisting of fewer than 100 atoms, the magnetization reversal takes place via nucleation and propagation. The Arrhenius prefactor is found to strongly depend on the individual island size and shape, and based on the experimental results a simple model is developed to describe the magnetization reversal in terms of metastable states. Complementary Monte Carlo simulations confirm the model and provide new insight into the microscopic processes involved in the magnetization reversal of the smallest nanomagnets.
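
    As a quick illustration of the Arrhenius law at the heart of the study, the sketch below evaluates a thermal switching rate for two values of the prefactor f0, the quantity the paper finds to depend strongly on island size and shape. All numerical values are invented for illustration.

```python
import numpy as np

# Hedged sketch: Arrhenius switching rate f = f0 * exp(-E_b / (k_B * T)).
def switching_rate(f0_hz, barrier_ev, temperature_k):
    k_b = 8.617e-5                       # Boltzmann constant in eV/K
    return f0_hz * np.exp(-barrier_ev / (k_b * temperature_k))

# Islands with the same barrier but different size/shape may differ mainly
# through the prefactor f0 (illustrative numbers):
print(switching_rate(1e9, 0.2, 40.0))
print(switching_rate(1e12, 0.2, 40.0))
```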

  2. Partition-free approach to open quantum systems in harmonic environments: An exact stochastic Liouville equation

    NASA Astrophysics Data System (ADS)

    McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.

    2017-03-01

    We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.

  3. Reduced Diversity of Life around Proxima Centauri and TRAPPIST-1

    NASA Astrophysics Data System (ADS)

    Lingam, Manasvi; Loeb, Abraham

    2017-09-01

    The recent discovery of potentially habitable exoplanets around Proxima Centauri and TRAPPIST-1 has attracted much attention due to their potential for hosting life. We delineate a simple model that accurately describes the evolution of biological diversity on Earth. Combining this model with constraints on atmospheric erosion and the maximal evolutionary timescale arising from the star’s lifetime, we arrive at two striking conclusions: (I) Earth-analogs orbiting low-mass M-dwarfs are unlikely to be inhabited, and (II) K-dwarfs and some G-type stars are potentially capable of hosting more complex biospheres than the Earth. Hence, future searches for biosignatures may have higher chances of success when targeting planets around K-dwarf stars.

  4. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

    Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate the correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.

  5. Experimental and rendering-based investigation of laser radar cross sections of small unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Bacher, Emmanuel; Christnacher, Frank

    2017-12-01

    Laser imaging systems are prominent candidates for detection and tracking of small unmanned aerial vehicles (UAVs) in current and future security scenarios. Laser reflection characteristics for laser imaging (e.g., laser gated viewing) of small UAVs are investigated to determine their laser radar cross section (LRCS) by analyzing the intensity distribution of laser reflection in high resolution images. For the first time, LRCSs are determined by a combined experimental and computational approach using high-resolution laser gated viewing and three-dimensional rendering. An optimized simple surface model is calculated taking into account diffuse and specular reflectance properties based on the Oren-Nayar and the Cook-Torrance reflectance models, respectively.
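
    A minimal sketch of the diffuse term named above: the Oren-Nayar rough-surface reflectance model. The paper combines this with a Cook-Torrance specular term; only the Oren-Nayar part is shown here, with illustrative parameter values.

```python
import numpy as np

# Hedged sketch of the Oren-Nayar diffuse reflectance model.
# sigma is the surface roughness (std. dev. of facet slopes, radians);
# theta_i/theta_r are incidence/reflection zenith angles, dphi the azimuth
# difference. Returns reflected radiance per unit incident irradiance.
def oren_nayar(albedo, sigma, theta_i, theta_r, dphi):
    s2 = sigma ** 2
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    return (albedo / np.pi) * np.cos(theta_i) * (
        a + b * max(0.0, np.cos(dphi)) * np.sin(alpha) * np.tan(beta))

# Monostatic geometry, as in laser gated viewing (co-located source/receiver):
print(oren_nayar(0.5, 0.3, np.radians(30), np.radians(30), 0.0))
```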

  6. Mechanisms of structural colour in the Morpho butterfly: cooperation of regularity and irregularity in an iridescent scale.

    PubMed Central

    Kinoshita, Shuichi; Yoshioka, Shinya; Kawagoe, Kenji

    2002-01-01

    Structural colour in the Morpho butterfly originates from submicron structure within a scale and, for over a century, its colour and reflectivity have been explained as interference of light due to the multilayer of cuticle and air. However, this model fails to explain the extraordinarily uniform colour of the wing with respect to the observation direction. We have performed microscopic, optical and theoretical investigations, and have found that the separate lamellar structure with irregular heights is extremely important. Using a simple model, we have shown that the combined action of interference and diffraction is essential for the structural colour of the Morpho butterfly. PMID:12137569

  7. Coarse-Grained Simulations of Membrane Insertion and Folding of Small Helical Proteins Using the CABS Model.

    PubMed

    Pulawski, Wojciech; Jamroz, Michal; Kolinski, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

    2016-11-28

    The CABS coarse-grained model is a well-established tool for modeling globular proteins (predicting their structure, dynamics, and interactions). Here we introduce an extension of the CABS representation and force field (CABS-membrane) to the modeling of the effect of the biological membrane environment on the structure of membrane proteins. We validate the CABS-membrane model in folding simulations of 10 short helical membrane proteins, without using any knowledge of their structure. The simulations start from random protein conformations placed outside the membrane environment and allow for full flexibility of the modeled proteins during their spontaneous insertion into the membrane. In the resulting trajectories, we have found models close to the experimental membrane structures. We also attempted to select the correctly folded models using simple filtering followed by structural clustering combined with reconstruction to the all-atom representation and all-atom scoring. The CABS-membrane model is a promising approach for further development toward modeling of large protein-membrane systems.

  8. Cross-section and rate formulas for electron-impact ionization, excitation, deexcitation, and total depopulation of excited atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vriens, L.; Smeets, A.H.M.

    1980-09-01

    For electron-induced ionization, excitation, and de-excitation, mainly from excited atomic states, a detailed analysis is presented of the dependence of the cross sections and rate coefficients on electron energy and temperature, and on atomic parameters. A wide energy range is covered, including sudden as well as adiabatic collisions. By combining the available experimental and theoretical information, a set of simple analytical formulas is constructed for the cross sections and rate coefficients of the processes mentioned, for the total depopulation, and for three-body recombination. The formulas account for large deviations from classical and semiclassical scaling, as found for excitation. They agree with experimental data and with the theories in their respective ranges of validity, but have a wider range of validity than the separate theories. The simple analytical form further facilitates the application in plasma modeling.

  9. Validation of optical codes based on 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

    Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask and a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forests machine learning algorithms.
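
    The propagation step named above is standard; the sketch below implements angular-spectrum propagation through a homogeneous layer and then applies a random phase screen as a stand-in for the diffuser. Grid size, wavelength, and distances are illustrative, and the diffuser model here omits the amplitude-blurring effect mentioned in the abstract.

```python
import numpy as np

# Hedged sketch: angular spectrum of plane waves over a distance dz.
# field is a square complex array sampled at pitch dx (metres).
def angular_spectrum(field, wavelength, dz, dx):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(fx, fx)
    # propagating components only; evanescent waves are clipped to kz = 0
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - fx**2 - fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

rng = np.random.default_rng(0)
field = np.ones((256, 256), dtype=complex)              # plain-text aperture
field = angular_spectrum(field, 633e-9, 1e-3, 5e-6)     # through the glass layer
field *= np.exp(2j * np.pi * rng.random(field.shape))   # diffuser as random phase
speckle = np.abs(angular_spectrum(field, 633e-9, 5e-3, 5e-6)) ** 2
```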

  10. A new approach to global control of redundant manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    A new and simple approach to configuration control of redundant manipulators is presented. In this approach, the redundancy is utilized to control the manipulator configuration directly in task space, where the task will be performed. A number of kinematic functions are defined to reflect the desirable configuration that will be achieved for a given end-effector position. The user-defined kinematic functions and the end-effector Cartesian coordinates are combined to form a set of task-related configuration variables as generalized coordinates for the manipulator. An adaptive scheme is then utilized to globally control the configuration variables so as to achieve tracking of some desired reference trajectories. This accomplishes the basic task of desired end-effector motion, while utilizing the redundancy to achieve any additional task through the desired time variation of the kinematic functions. The control law is simple and computationally very fast, and does not require the complex manipulator dynamic model.

  11. Use of 3D Printing for Custom Wind Tunnel Fabrication

    NASA Astrophysics Data System (ADS)

    Gagorik, Paul; Bates, Zachary; Issakhanian, Emin

    2016-11-01

    Small-scale wind tunnels for the most part are fairly simple to produce with standard building equipment. However, the intricate bell housing and inlet shape of an Eiffel type wind tunnel, as well as the transition from diffuser to fan in a rectangular tunnel can present design and construction obstacles. With the help of 3D printing, these shapes can be custom designed in CAD models and printed in the lab at very low cost. The undergraduate team at Loyola Marymount University has built a custom benchtop tunnel for gas turbine film cooling experiments. 3D printing is combined with conventional construction methods to build the tunnel. 3D printing is also used to build the custom tunnel floor and interchangeable experimental pieces for various experimental shapes. This simple and low-cost tunnel is a custom solution for specific engineering experiments for gas turbine technology research.

  12. Computational assignment of redox states to Coulomb blockade diamonds.

    PubMed

    Olsen, Stine T; Arcisauskaite, Vaida; Hansen, Thorsten; Kongsted, Jacob; Mikkelsen, Kurt V

    2014-09-07

    With the advent of molecular transistors, electrochemistry can now be studied at the single-molecule level. Experimentally, the redox chemistry of the molecule manifests itself as features in the observed Coulomb blockade diamonds. We present a simple theoretical method for explicit construction of the Coulomb blockade diamonds of a molecule. A combined quantum mechanical/molecular mechanical method is invoked to calculate redox energies and polarizabilities of the molecules, including the screening effect of the metal leads. This direct approach circumvents the need for explicit modelling of the gate electrode. From the calculated parameters the Coulomb blockade diamonds are constructed using simple theory. We offer a theoretical tool for assignment of Coulomb blockade diamonds to specific redox states in particular, and a study of chemical details in the diamonds in general. With the ongoing experimental developments in molecular transistor experiments, our tool could find use in molecular electronics, electrochemistry, and electrocatalysis.

  13. Modeling the Spin Equilibrium of Neutron Stars in LMXBs Without Gravitational Radiation

    NASA Technical Reports Server (NTRS)

    Andersson, N.; Glampedakis, K.; Haskell, B.; Watts, A. L.

    2004-01-01

    In this paper we discuss the spin-equilibrium of accreting neutron stars in LMXBs. We demonstrate that, when combined with a naive spin-up torque, the observed data lead to inferred magnetic fields which are at variance with those of galactic millisecond radio pulsars. This indicates the need for either additional spin-down torques (e.g., gravitational radiation) or an improved accretion model. We show that a simple consistent accretion model can be arrived at by accounting for radiation pressure in rapidly accreting systems (above a few percent of the Eddington accretion rate). In our model the inner disk region is thick and significantly sub-Keplerian, and the estimated equilibrium periods are such that the LMXB neutron stars have properties that accord well with the galactic millisecond radio pulsar sample. The implications for future gravitational-wave observations are also discussed briefly.

  14. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from the particle filter to blobs whose motion is similar to the target's. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy for updating the target model improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
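
    For orientation, the sketch below shows one step of the bootstrap baseline the abstract compares against: predict with a random-walk motion model, weight by an observation likelihood, and resample. The paper's contributions, sampling near segmented motion blobs and updating the target model online, are not shown; the Gaussian likelihood and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of one bootstrap particle filter step. particles is an
# (N, 2) array of image positions; observation is the measured position.
def bootstrap_step(particles, weights, observation, motion_std, obs_std):
    particles = particles + rng.normal(0, motion_std, particles.shape)  # predict
    likelihood = np.exp(-np.sum((particles - observation) ** 2, axis=1)
                        / (2 * obs_std ** 2))                           # weight
    weights = weights * likelihood
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)    # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```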

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Kyoungsoo, E-mail: kpark16@illinois.ed; Paulino, Glaucio H.; Roesler, Jeffery

    A simple, effective, and practical constitutive model for cohesive fracture of fiber reinforced concrete is proposed by differentiating the aggregate bridging zone and the fiber bridging zone. The aggregate bridging zone is related to the total fracture energy of plain concrete, while the fiber bridging zone is associated with the difference between the total fracture energy of fiber reinforced concrete and the total fracture energy of plain concrete. The cohesive fracture model is defined by experimental fracture parameters, which are obtained through three-point bending and split tensile tests. As expected, the model describes fracture behavior of plain concrete beams. In addition, it predicts the fracture behavior of either fiber reinforced concrete beams or a combination of plain and fiber reinforced concrete functionally layered in a single beam specimen. The validated model is also applied to investigate continuously, functionally graded fiber reinforced concrete composites.

  16. Triple-loaded single-anchor stitch configurations: an analysis of cyclically loaded suture-tendon interface security.

    PubMed

    Coons, David A; Barber, F Alan; Herbert, Morley A

    2006-11-01

    This study evaluated the strength and suture-tendon interface security of different suture configurations from triple-suture-loaded anchors. A juvenile bovine infraspinatus tendon was detached and repaired by use of 4 different suture combinations from 2 suture anchors: 3 simple sutures in each anchor (ThreeVo anchor; Linvatec, Largo, FL); 2 peripheral simple stitches and 1 central horizontal mattress suture passed deeper into the tendon, creating a larger footprint (bigfoot-print anchor); 2 peripheral simple stitches with 1 central horizontal mattress stitch passed through the same holes as the simple sutures (stitch-of-Burns); and 2 simple stitches (TwoVo anchor; Linvatec). The constructs were cyclically loaded between 10 N and 180 N for 3,500 cycles and then destructively tested. The number of cycles required to create a 5-mm gap and a 10-mm gap, the ultimate load to failure, and the failure mode were recorded. The ThreeVo anchor was strongest and most resistant to cyclic loading (P < .01). The TwoVo anchor was least resistant to cyclic loading. The stitch-of-Burns anchor was more resistant to cyclic loading than both the bigfoot-print anchor and the TwoVo anchor (P < .03). The ThreeVo, stitch-of-Burns, and TwoVo anchors were stronger than the bigfoot-print anchor (P < .05). Three simple sutures in an anchor hold better than two simple sutures and provide better suture-tendon security under cyclic loading than combinations of one mattress and two simple stitches. A central mattress stitch placed more medially than two peripheral simple stitches to enlarge the tendon-suture footprint (bigfoot-print anchor) was not as resistant to cyclic loading or destructive testing as three simple stitches (ThreeVo anchor).

  17. ViSimpl: Multi-View Visual Analysis of Brain Simulation Data

    PubMed Central

    Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis

    2016-01-01

    After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062

  18. ViSimpl: Multi-View Visual Analysis of Brain Simulation Data.

    PubMed

    Galindo, Sergio E; Toharia, Pablo; Robles, Oscar D; Pastor, Luis

    2016-01-01

    After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures.

  19. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  20. Model-based complete enzymatic production of 3,6-anhydro-L-galactose from red algal biomass.

    PubMed

    Pathiraja, Duleepa; Lee, Saeyoung; Choi, In-Geol

    2018-06-13

    3,6-Anhydro-L-galactose (L-AHG) is a bioactive constituent of agar polysaccharides. To be used as a cosmetic or pharmaceutical ingredient, L-AHG is more favorably prepared by enzymatic saccharification of agar using a combination of agarolytic enzymes. Determining the optimum enzyme combination from the natural repertoire is a bottleneck in designing an efficient enzymatic hydrolysis process. We consider all theoretical enzymatic saccharification routes in the natural agarolytic pathway of a marine bacterium, Saccharophagus degradans 2-40. Among these routes, three representative routes were determined by removing redundant enzymatic reactions. We simulated each L-AHG production route with simple kinetic models and validated the feasibility of the reactions experimentally. The optimal enzyme mixture (with a maximum saccharification yield of 67.3%) was composed of endo-type β-agarase, exo-type β-agarase, agarooligosaccharolytic β-galactosidase and α-neoagarobiose hydrolase. This approach will reduce the time and effort needed to develop a coherent enzymatic process for producing L-AHG at mass scale.

  1. Prediction of binding poses to FXR using multi-targeted docking combined with molecular dynamics and enhanced sampling

    NASA Astrophysics Data System (ADS)

    Bhakat, Soumendranath; Åberg, Emil; Söderhjelm, Pär

    2018-01-01

    Advanced molecular docking methods often aim at capturing the flexibility of the protein upon binding to the ligand. In this study, we investigate whether instead a simple rigid docking method can be applied, if combined with multiple target structures to model the backbone flexibility and molecular dynamics simulations to model the sidechain and ligand flexibility. The methods are tested for the binding of 35 ligands to FXR as part of the first stage of the Drug Design Data Resource (D3R) Grand Challenge 2 blind challenge. The results show that the multiple-target docking protocol performs surprisingly well, with correct poses found for 21 of the ligands. MD simulations started on the docked structures are remarkably stable, but show almost no tendency of refining the structure closer to the experimentally found binding pose. Reconnaissance metadynamics enhances the exploration of new binding poses, but additional collective variables involving the protein are needed to exploit the full potential of the method.

  2. Prediction of binding poses to FXR using multi-targeted docking combined with molecular dynamics and enhanced sampling.

    PubMed

    Bhakat, Soumendranath; Åberg, Emil; Söderhjelm, Pär

    2018-01-01

    Advanced molecular docking methods often aim at capturing the flexibility of the protein upon binding to the ligand. In this study, we investigate whether instead a simple rigid docking method can be applied, if combined with multiple target structures to model the backbone flexibility and molecular dynamics simulations to model the sidechain and ligand flexibility. The methods are tested for the binding of 35 ligands to FXR as part of the first stage of the Drug Design Data Resource (D3R) Grand Challenge 2 blind challenge. The results show that the multiple-target docking protocol performs surprisingly well, with correct poses found for 21 of the ligands. MD simulations started on the docked structures are remarkably stable, but show almost no tendency of refining the structure closer to the experimentally found binding pose. Reconnaissance metadynamics enhances the exploration of new binding poses, but additional collective variables involving the protein are needed to exploit the full potential of the method.

  3. A simple approach to the joint inversion of seismic body and surface waves applied to the southwest U.S.

    NASA Astrophysics Data System (ADS)

    West, Michael; Gao, Wei; Grand, Stephen

    2004-08-01

    Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.
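
    The core mechanics of such a joint inversion can be sketched as stacking the two sets of sensitivity kernels into one linear system with a relative weight between data types. The sketch below uses random placeholder matrices rather than real kernels, and a simple unregularised least-squares solve; the paper's actual parameterisation and weighting are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_model = 200

# Hedged sketch: placeholder sensitivity kernels and data vectors.
G_body = rng.normal(size=(500, n_model))   # body-wave travel-time kernels
G_surf = rng.normal(size=(80, n_model))    # surface-wave dispersion kernels
d_body = rng.normal(size=500)
d_surf = rng.normal(size=80)

w = 3.0                                    # relative weight on surface-wave data
G = np.vstack([G_body, w * G_surf])        # one combined linear system
d = np.concatenate([d_body, w * d_surf])
m, *_ = np.linalg.lstsq(G, d, rcond=None)  # joint model estimate
```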

  4. Adsorption of basic dyes on granular activated carbon and natural zeolite.

    PubMed

    Meshko, V; Markovska, L; Mincheva, M; Rodrigues, A E

    2001-10-01

    The adsorption of basic dyes from aqueous solution onto granular activated carbon and natural zeolite has been studied using an agitated batch adsorber. The influence of agitation, initial dye concentration and adsorbent mass has been studied. The parameters of the Langmuir and Freundlich adsorption isotherms have been determined from the adsorption data. A homogeneous (solid) diffusion model combined with external mass transfer resistance is proposed for the kinetic investigation. The dependence of the solid diffusion coefficient on initial concentration and adsorbent mass is represented by simple empirical equations.
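
    The isotherm-fitting step named above is standard; the sketch below fits the Langmuir, q_e = q_max*K_L*c_e/(1 + K_L*c_e), and Freundlich, q_e = K_F*c_e^(1/n), forms to equilibrium data. The data points and initial guesses are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit the two standard isotherms to equilibrium data.
def langmuir(c_e, q_max, k_l):
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

def freundlich(c_e, k_f, n):
    return k_f * c_e ** (1.0 / n)

c_e = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # mg/L (illustrative)
q_e = np.array([12.0, 20.0, 33.0, 41.0, 47.0])   # mg/g (illustrative)

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=[50.0, 0.05])
(k_f, n), _ = curve_fit(freundlich, c_e, q_e, p0=[5.0, 2.0])
print(q_max, k_l, k_f, n)
```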

  5. Highlights on gamma rays, neutrinos and antiprotons from TeV Dark Matter

    NASA Astrophysics Data System (ADS)

    Gammaldi, Viviana

    2016-07-01

    It has been shown that the gamma-ray flux observed by HESS from the J1745-290 Galactic Center source is well fitted by the secondary gamma-ray photons generated by Dark Matter annihilating into Standard Model particles, in combination with a simple power-law background. The neutrino flux expected from such a Dark Matter source has also been analyzed. The main results of these analyses for 50 TeV Dark Matter annihilating into W+W- gauge bosons, and preliminary results for antiprotons, are presented.

  6. Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.

    PubMed

    García-Patrón, Raúl; Cerf, Nicolas J

    2006-11-10

    A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].

  7. Design of Circular, Square, Single, and Multi-layer Induction Coils for Electromagnetic Priming Using Inductance Estimates

    NASA Astrophysics Data System (ADS)

    Fritzsch, Robert; Kennedy, Mark W.; Aune, Ragnhild E.

    2018-02-01

    Special induction coils used for electromagnetic priming of ceramic foam filters in liquid metal filtration have been designed using a combination of analytical and finite element modeling. Relatively simple empirical equations published by Wheeler in 1928 and 1982 were used during the design process. The equations were found to accurately predict the z-component of the magnetic flux density of both single- and multi-layer coils, as verified both experimentally and by COMSOL® 5.1 multiphysics simulations.
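
    For a concrete sense of the Wheeler-style estimates used above, the sketch below implements the classic Wheeler (1928) long-coil approximation for a single-layer air-core solenoid. This is the well-known textbook formula, not necessarily the exact variant the authors applied; example dimensions are illustrative.

```python
# Hedged sketch: Wheeler (1928) approximation for a single-layer air-core
# coil. radius_in and length_in are in inches; result in microhenries.
# Accuracy is typically around 1% when length > 0.8 * radius.
def wheeler_inductance_uH(n_turns, radius_in, length_in):
    return (radius_in ** 2 * n_turns ** 2) / (9.0 * radius_in + 10.0 * length_in)

# A 30-turn coil of 2 in radius and 4 in length:
print(wheeler_inductance_uH(30, 2.0, 4.0))  # ~62 uH
```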

  8. Modelling Fluctuations in the Concentration of Neutrally Buoyant Substances in the Atmosphere.

    NASA Astrophysics Data System (ADS)

    Ride, David John

    1987-09-01

    Available from UMI in association with The British Library. This thesis sets out to model the probability density function (pdf) of the perceived concentration of a contaminant in the atmosphere using simple, physical representations of the dispersing contaminant. Sensors of differing types perceive a given concentration field in different ways; the chosen pdf must be able to describe all possible perceptions of the same field. Herein, sensors are characterised by the time taken to achieve a reading and by a threshold level of concentration below which the sensor does not respond and thus records a concentration of zero. A literature survey of theoretical and experimental work concerning concentration fluctuations is conducted, and the merits, or otherwise, of some standard pdfs in common use are discussed. The ways in which the central moments, the peak-to-mean ratio, the intermittency and the autocorrelation function behave under various combinations of threshold levels and time averaging are investigated. An original experiment designed to test the suitability of time averaging as a valid simulation of both sensor response times and sampling volumes is reported. The results suggest that, for practical purposes, smoothing from the combined volume/time characteristics of a sensor can be modelled by time averaging the output of a more responsive sensor. A possible non-linear volume/time effect was observed at very high temporal resolutions. Intermittency is shown to be an important parameter of the concentration field. A geometric model for describing and explaining the intermittency of a meandering plume of material in terms of the ratio of the plume width to the amplitude of meander and the within-plume intermittency is developed and validated. It shows that modelled cross-plume profiles of intermittency cannot, in general, be represented by simple functional forms. A new physical model for the fluctuations in concentration from a dispersing contaminant is described which leads to the adoption of a truncated Gaussian (or 'clipped normal') pdf for time averaged concentrations. A series of experiments is described which was designed to test the aptness of this distribution and display changes in the perception of the parameters of the concentration field wrought by various combinations of thresholding and time averaging. The truncated Gaussian pdf is shown to be more suitable for describing fluctuations than the log-normal and negative exponential pdfs, and to possess a better physical basis than either of them. The combination of thresholding and time averaging on the output of a sensor is shown to produce complex results which could affect profoundly the assessment of the potential hazard presented by a toxic, flammable or explosive plume or cloud.
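
    The clipped-normal construction adopted above has a simple closed form: an underlying Gaussian whose negative values are recorded as zero, giving a point mass at zero (the intermittency) plus a continuous part. The sketch below evaluates those two quantities; the parameter values are illustrative.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the truncated Gaussian ("clipped normal") model:
# concentration C = max(X, 0) with X ~ N(mu, sigma^2).
def clipped_normal_moments(mu, sigma):
    p_zero = stats.norm.cdf(0.0, mu, sigma)      # probability of a zero reading
    z = (0.0 - mu) / sigma
    # E[max(X, 0)] = mu * (1 - Phi(z)) + sigma * phi(z)
    mean = mu * (1.0 - p_zero) + sigma * stats.norm.pdf(z)
    return p_zero, mean

print(clipped_normal_moments(mu=1.0, sigma=2.0))
```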

  9. Synonym extraction and abbreviation expansion with ensembles of semantic spaces.

    PubMed

    Henriksson, Aron; Moen, Hans; Skeppstedt, Maria; Daudaravičius, Vidas; Duneld, Martin

    2014-02-05

    Terminologies that account for variation in language use by linking synonyms and abbreviations to their corresponding concept are important enablers of high-quality information extraction from medical texts. Due to the use of specialized sub-languages in the medical domain, manual construction of semantic resources that accurately reflect language use is both costly and challenging, often resulting in low coverage. Although models of distributional semantics applied to large corpora provide a potential means of supporting development of such resources, their ability to isolate synonymy from other semantic relations is limited. Their application in the clinical domain has also only recently begun to be explored. Combining distributional models and applying them to different types of corpora may lead to enhanced performance on the tasks of automatically extracting synonyms and abbreviation-expansion pairs. A combination of two distributional models - Random Indexing and Random Permutation - employed in conjunction with a single corpus outperforms using either of the models in isolation. Furthermore, combining semantic spaces induced from different types of corpora - a corpus of clinical text and a corpus of medical journal articles - further improves results, outperforming a combination of semantic spaces induced from a single source, as well as a single semantic space induced from the conjoint corpus. A combination strategy that simply sums the cosine similarity scores of candidate terms is generally the most profitable out of the ones explored. Finally, applying simple post-processing filtering rules yields substantial performance gains on the tasks of extracting abbreviation-expansion pairs, but not synonyms. The best results, measured as recall in a list of ten candidate terms, for the three tasks are: 0.39 for abbreviations to long forms, 0.33 for long forms to abbreviations, and 0.47 for synonyms. This study demonstrates that ensembles of semantic spaces can yield improved performance on the tasks of automatically extracting synonyms and abbreviation-expansion pairs. This notion, which merits further exploration, allows different distributional models - with different model parameters - and different types of corpora to be combined, potentially allowing enhanced performance to be obtained on a wide range of natural language processing tasks.
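
    A minimal sketch of the most profitable combination strategy reported above: summing cosine similarities across two semantic spaces and returning the top-10 candidates. The function and variable names are illustrative; the study's actual spaces are Random Indexing and Random Permutation models.

```python
import numpy as np

# Hedged sketch: ensemble of two semantic spaces (dicts mapping a term to
# its vector) combined by summing cosine similarity scores per candidate.
def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def top_candidates(query, vocab, space_a, space_b, k=10):
    scores = {term: cosine(space_a[query], space_a[term])
                    + cosine(space_b[query], space_b[term])
              for term in vocab if term != query}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```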

  10. Synonym extraction and abbreviation expansion with ensembles of semantic spaces

    PubMed Central

    2014-01-01

    Background Terminologies that account for variation in language use by linking synonyms and abbreviations to their corresponding concept are important enablers of high-quality information extraction from medical texts. Due to the use of specialized sub-languages in the medical domain, manual construction of semantic resources that accurately reflect language use is both costly and challenging, often resulting in low coverage. Although models of distributional semantics applied to large corpora provide a potential means of supporting development of such resources, their ability to isolate synonymy from other semantic relations is limited. Their application in the clinical domain has also only recently begun to be explored. Combining distributional models and applying them to different types of corpora may lead to enhanced performance on the tasks of automatically extracting synonyms and abbreviation-expansion pairs. Results A combination of two distributional models – Random Indexing and Random Permutation – employed in conjunction with a single corpus outperforms using either of the models in isolation. Furthermore, combining semantic spaces induced from different types of corpora – a corpus of clinical text and a corpus of medical journal articles – further improves results, outperforming a combination of semantic spaces induced from a single source, as well as a single semantic space induced from the conjoint corpus. A combination strategy that simply sums the cosine similarity scores of candidate terms is generally the most profitable out of the ones explored. Finally, applying simple post-processing filtering rules yields substantial performance gains on the tasks of extracting abbreviation-expansion pairs, but not synonyms. The best results, measured as recall in a list of ten candidate terms, for the three tasks are: 0.39 for abbreviations to long forms, 0.33 for long forms to abbreviations, and 0.47 for synonyms. Conclusions This study demonstrates that ensembles of semantic spaces can yield improved performance on the tasks of automatically extracting synonyms and abbreviation-expansion pairs. This notion, which merits further exploration, allows different distributional models – with different model parameters – and different types of corpora to be combined, potentially allowing enhanced performance to be obtained on a wide range of natural language processing tasks. PMID:24499679

  11. Effective co-delivery of doxorubicin and dasatinib using a PEG-Fmoc nanocarrier for combination cancer chemotherapy.

    PubMed

    Zhang, Peng; Li, Jiang; Ghazwani, Mohammed; Zhao, Wenchen; Huang, Yixian; Zhang, Xiaolan; Venkataramanan, Raman; Li, Song

    2015-10-01

    A simple PEGylated peptidic nanocarrier, PEG5000-lysyl-(α-Fmoc-ε-Cbz-lysine)2 (PLFCL), was developed for effective co-delivery of doxorubicin (DOX) and dasatinib (DAS) for combination chemotherapy. Significant synergy of DOX and DAS in inhibition of cancer cell proliferation was demonstrated in various types of cancer cells, including breast, prostate, and colon cancers. Co-encapsulation of the two agents was facilitated by incorporation of 9-Fluorenylmethoxycarbonyl (Fmoc) and carboxybenzyl (Cbz) groups into a nanocarrier for effective carrier-drug interactions. Spherical nanomicelles with a small size of ∼30 nm were self-assembled by PLFCL. Strong carrier/drug intermolecular π-π stacking was demonstrated in fluorescence quenching and UV absorption. Fluorescence study showed more effective accumulation of DOX in nuclei of cancer cells following treatment with DOX&DAS/PLFCL in comparison with cells treated with DOX/PLFCL. DOX&DAS/PLFCL micelles were also more effective than other treatments in inhibiting the proliferation and migration of cultured cancer cells. Finally, a superior anti-tumor activity was demonstrated with DOX&DAS/PLFCL. A tumor growth inhibition rate of 95% was achieved at a respective dose of 5 mg/kg for DOX and DAS in a murine breast cancer model. Our nanocarrier may represent a simple and effective system that could facilitate clinical translation of this promising multi-agent regimen in combination chemotherapy.

  12. Optimal interpolation analysis of leaf area index using MODIS data

    USGS Publications Warehouse

    Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve

    2006-01-01

    A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data that is appropriate for numerical weather prediction. A series of techniques and procedures which includes data quality control, time-series data smoothing, and simple data analysis is applied. The LAI analysis is an optimal combination of the MODIS observations and derived climatology, depending on their associated errors σo and σc. The "best estimate" LAI is derived from a simple three-point smoothing technique combined with selection of maximum LAI values (after data quality control) to ensure higher quality. The LAI climatology is a time-smoothed mean of the "best estimate" LAI during the years 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the "best estimate" LAI, and the climatological error is obtained by comparing the "best estimate" LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. The method is demonstrated on the 15-km grid of the Meteorological Service of Canada's (MSC) regional numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC, which is based on land-cover databases.
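
    The error-weighted combination described above has a standard closed form, sketched below: the analysis is a variance-weighted average of the observation and the climatology. The function name and example values are illustrative.

```python
import numpy as np

# Hedged sketch of the optimal-interpolation weighting: the analysis is an
# error-weighted combination of the MODIS observation (error sigma_o) and
# the climatology (error sigma_c).
def lai_analysis(lai_obs, lai_clim, sigma_o, sigma_c):
    w_obs = sigma_c**2 / (sigma_o**2 + sigma_c**2)   # weight on the observation
    return w_obs * lai_obs + (1.0 - w_obs) * lai_clim

# A noisy observation is pulled toward a trusted climatology:
print(lai_analysis(3.2, 2.5, sigma_o=0.8, sigma_c=0.4))
```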

  13. A data-driven model for influenza transmission incorporating media effects.

    PubMed

    Mitchell, Lewis; Ross, Joshua V

    2016-10-01

    Numerous studies have attempted to model the effect of mass media on the transmission of diseases such as influenza; however, quantitative data on media engagement has until recently been difficult to obtain. With the recent explosion of 'big data' coming from online social media and the like, large volumes of data on a population's engagement with mass media during an epidemic are becoming available to researchers. In this study, we combine an online dataset comprising millions of shared messages relating to influenza with traditional surveillance data on flu activity to suggest a functional form for the relationship between the two. Using this data, we present a simple deterministic model for influenza dynamics incorporating media effects, and show that such a model helps explain the dynamics of historical influenza outbreaks. Furthermore, through model selection we show that the proposed media function fits historical data better than other media functions proposed in earlier studies.
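
    To make the modelling idea concrete, the sketch below integrates a deterministic SIR-type model in which the transmission rate is damped by a media-activity function M(t). The exponential damping form, the media pulse, and all parameter values are assumptions for illustration; the paper fits its own media function to social-media data.

```python
import numpy as np
from scipy.integrate import odeint

# Hedged sketch: SIR dynamics with media-reduced transmission,
# beta_eff(t) = beta * exp(-alpha * M(t)).
def sir_media(y, t, beta, gamma, media, alpha):
    s, i, r = y
    beta_eff = beta * np.exp(-alpha * media(t))
    return [-beta_eff * s * i, beta_eff * s * i - gamma * i, gamma * i]

media = lambda t: np.exp(-((t - 30.0) / 10.0) ** 2)   # illustrative media pulse
t = np.linspace(0.0, 120.0, 600)
sol = odeint(sir_media, [0.999, 0.001, 0.0], t, args=(0.4, 0.2, media, 2.0))
```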

  14. The Effect of Sea-Surface Sun Glitter on Microwave Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1981-01-01

    A relatively simple model for the microwave brightness temperature of sea-surface Sun glitter is presented. The model is an accurate closed-form approximation to the fourfold Sun glitter integral. The model computations indicate that Sun glitter contamination of on-orbit radiometer measurements is appreciable over a large swath area. For winds near 20 m/s, Sun glitter affects the retrieval of environmental parameters for Sun angles as large as 20 to 25 deg. The model-predicted biases in retrieved wind speed and sea surface temperature due to neglecting Sun glitter are consistent with those experimentally observed in SEASAT SMMR retrievals. A least squares retrieval algorithm that uses a combined sea and Sun model function shows the potential of retrieving accurate environmental parameters in the presence of Sun glitter so long as the Sun angle and wind speed are above 5 deg and 2 m/s, respectively.

  15. Modeling human diseases with induced pluripotent stem cells: from 2D to 3D and beyond.

    PubMed

    Liu, Chun; Oikonomopoulos, Angelos; Sayed, Nazish; Wu, Joseph C

    2018-03-08

    The advent of human induced pluripotent stem cells (iPSCs) presents unprecedented opportunities to model human diseases. Differentiated cells derived from iPSCs in two-dimensional (2D) monolayers have proven to be a relatively simple tool for exploring disease pathogenesis and underlying mechanisms. In this Spotlight article, we discuss the progress and limitations of the current 2D iPSC disease-modeling platform, as well as recent advancements in the development of human iPSC models that mimic in vivo tissues and organs at the three-dimensional (3D) level. Recent bioengineering approaches have begun to combine different 3D organoid types into a single '4D multi-organ system'. We summarize the advantages of this approach and speculate on the future role of 4D multi-organ systems in human disease modeling.

  16. Modeling the Energy Use of a Connected and Automated Transportation System (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonder, J.; Brown, A.

    Early research points to large potential impacts of connected and automated vehicles (CAVs) on transportation energy use: dramatic savings, increased use, or anything in between. Due to a lack of suitable data and integrated modeling tools to explore these complex future systems, analyses to date have relied on simple combinations of isolated effects. This poster proposes a framework for modeling the potential energy implications of increasing penetration of CAV technologies and for assessing technology and policy options to steer them toward favorable energy outcomes. Current CAV modeling challenges include estimating behavior change, understanding potential vehicle-to-vehicle interactions, and assessing traffic flow and vehicle use under different automation scenarios. To bridge these gaps and develop a picture of potential future automated systems, NREL is integrating existing modeling capabilities with additional tools and data inputs to create a more fully integrated CAV assessment toolkit.

  17. Dissociative recombination by frame transformation to Siegert pseudostates: A comparison with a numerically solvable model

    NASA Astrophysics Data System (ADS)

    Hvizdoš, Dávid; Váňa, Martin; Houfek, Karel; Greene, Chris H.; Rescigno, Thomas N.; McCurdy, C. William; Čurík, Roman

    2018-02-01

    We present a simple two-dimensional model of the indirect dissociative recombination process. The model has one electronic and one nuclear degree of freedom, and it can be solved to high precision, without making any physically motivated approximations, by employing the exterior complex scaling method together with the finite-element method and a discrete variable representation. The approach is applied to solve a model for dissociative recombination of H2+ in the singlet ungerade channels, and the results serve as a benchmark to test the validity of several physical approximations commonly used in the computational modeling of dissociative recombination for real molecular targets. The second, approximate, set of calculations employs a combination of multichannel quantum defect theory and a frame transformation into a basis of Siegert pseudostates. The cross sections computed with the two methods are compared in detail for collision energies from 0 to 2 eV.
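
    Exterior complex scaling, one ingredient of the benchmark calculation, can be illustrated in one dimension: beyond a radius r0 the coordinate is rotated into the complex plane, so purely outgoing waves decay and can be represented on a finite grid. The snippet below is a minimal sketch of that mapping, not the paper's two-dimensional FEM-DVR solver.

```python
import numpy as np

def ecs_map(r, r0, theta):
    """Exterior complex scaling: r -> r0 + (r - r0) * exp(i*theta) beyond r0.
    Outgoing waves exp(ikr) then decay exponentially on the scaled tail."""
    return np.where(r <= r0, r, r0 + (r - r0) * np.exp(1j * theta))

r = np.linspace(0.0, 30.0, 7)
z = ecs_map(r, r0=15.0, theta=0.4)
k = 1.0
# |exp(ikz)| stays 1 on the real part of the grid and falls off beyond r0.
print(np.abs(np.exp(1j * k * z)))
```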

  18. Accurate prediction of pregnancy viability by means of a simple scoring system.

    PubMed

    Bottomley, Cecilia; Van Belle, Vanya; Kirk, Emma; Van Huffel, Sabine; Timmerman, Dirk; Bourne, Tom

    2013-01-01

    What is the performance of a simple scoring system to predict whether women will have an ongoing viable intrauterine pregnancy beyond the first trimester? A simple scoring system using demographic and initial ultrasound variables accurately predicts pregnancy viability beyond the first trimester, with an area under the curve (AUC) in a receiver operating characteristic curve of 0.924 [95% confidence interval (CI) 0.900-0.947] on an independent test set. Individual demographic and ultrasound factors, such as maternal age, vaginal bleeding and gestational sac size, are strong predictors of miscarriage. Previous mathematical models have combined individual risk factors with reasonable performance. A simple scoring system derived from a mathematical model that can be easily implemented in clinical practice has not previously been described for the prediction of ongoing viability. This was a prospective observational study in a single early pregnancy assessment centre during a 9-month period. A cohort of 1881 consecutive women undergoing transvaginal ultrasound scan at a gestational age <84 days were included. Women were excluded if the first trimester outcome was not known. Demographic features, symptoms and ultrasound variables were tested for their influence on ongoing viability. Logistic regression was used to determine the influence on first trimester viability of demographics and symptoms alone, of ultrasound findings alone and then of all the variables combined. Each model was developed on a training data set, and a simple scoring system was derived from it. This scoring system was tested on an independent test data set. The final outcome, based on a total of 1435 participants, was an ongoing viable pregnancy in 885 (61.7%) and early pregnancy loss in 550 (38.3%) women. The scoring system using significant demographic variables alone (maternal age and amount of bleeding) to predict ongoing viability gave an AUC of 0.724 (95% CI = 0.692-0.756) in the training set and 0.729 (95% CI = 0.684-0.774) in the test set. The scoring system using significant ultrasound variables alone (mean gestation sac diameter, mean yolk sac diameter and the presence of fetal heart beat) gave an AUC of 0.873 (95% CI = 0.850-0.897) and 0.900 (95% CI = 0.871-0.928) in the training and test sets, respectively. The final scoring system using demographic and ultrasound variables together gave an AUC of 0.901 (95% CI = 0.881-0.920) and 0.924 (95% CI = 0.900-0.947) in the training and test sets, respectively. After defining the cut-off at which the sensitivity is 0.90 on the training set, this model performed with a sensitivity of 0.92, specificity of 0.73, positive predictive value of 84.7% and negative predictive value of 85.4% on the test set. BMI and smoking variables were a potential omission in the data collection and might further improve the model performance if included. A further limitation is the absence of information on either bleeding or pain in 18% of women. Caution should be exercised before implementation of this scoring system prior to further external validation studies. This simple scoring system incorporates readily available data that are routinely collected in clinical practice and does not rely on complex data entry. As such it could, unlike most mathematical models, be easily incorporated into normal early pregnancy care, where women may appreciate an individualized calculation of the likelihood of ongoing pregnancy viability. Research by V.V.B. was supported by Research Council KUL: GOA MaNet, PFV/10/002 (OPTEC), several PhD/postdoc and fellow grants; IWT: TBM070706-IOTA3, PhD grants; IBBT; Belgian Federal Science Policy Office: IUAP P7 (DYSCO, 'Dynamical systems, control and optimization', 2012-2017). T.B. is supported by the Imperial Healthcare NHS Trust NIHR Biomedical Research Centre.
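
    The generic recipe behind such a score (fit a logistic regression, then rescale and round the coefficients to integer points) can be sketched briefly. The variables, data and point values below are hypothetical; the paper's actual score is defined on its own training set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [maternal age, gestation sac diameter (mm),
# yolk sac diameter (mm), fetal heart beat indicator]; outcome 1 = viable.
rng = np.random.default_rng(1)
X = rng.normal([32, 20, 4, 0.6], [5, 8, 1, 0.3], size=(200, 4))
y = (X[:, 3] > 0.5).astype(int)  # toy outcome driven by the heart-beat variable

model = LogisticRegression().fit(X, y)

# Turn coefficients into integer points, the usual recipe for a bedside score:
# scale so the largest |coefficient| maps to 10 points, then round.
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).max() * 10).astype(int)
print("points per variable:", points)
print("risk score for one patient:", int(X[0] @ points))
```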

  19. A methodology for physically based rockfall hazard assessment

    NASA Astrophysics Data System (ADS)

    Crosta, G. B.; Agliardi, F.

    Rockfall hazard assessment is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions of "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at regional and local scales, both along linear features and within exposed areas. An objective approach based on three-dimensional matrices providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrices has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index to the included variables and their combinations is preliminarily discussed in order to keep the assessment criteria as objective as possible.
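
    A minimal version of the map-combination step can be sketched with raster algebra. The example below classifies three toy rasters (frequency of passages, velocity, height) into relative classes and encodes them as a three-digit positional index per cell; the classing scheme and digit encoding are simplified assumptions, not the authors' exact matrices.

```python
import numpy as np

def hazard_index(freq, vel, height, bins=3):
    """Classify each raster into `bins` relative classes and combine them into
    a positional index per cell (a simplified stand-in for a 3-D matrix of
    frequency x velocity x height classes)."""
    def classify(grid):
        edges = np.quantile(grid[grid > 0], np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(grid, edges) + 1      # classes 1..bins
    return classify(freq) * 100 + classify(vel) * 10 + classify(height)

rng = np.random.default_rng(0)
freq = rng.random((4, 4))
vel = rng.random((4, 4)) * 30.0
h = rng.random((4, 4)) * 5.0
# e.g. 321 = high frequency, mid velocity, low height in that cell
print(hazard_index(freq, vel, h))
```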

  20. Testlet-Based Multidimensional Adaptive Testing

    PubMed Central

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus, such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT is compared to non-adaptive testing for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) in a simulation study considering three ability dimensions with a simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, measurement precision decreased as testlet effect variances and testlet sizes increased. The suggested combination of MAT with the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range. PMID:27917132
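
    The testlet term itself is compact enough to write down. The sketch below computes a single response probability under a random-effects testlet model with a simple loading structure; the notation follows common MTIRT conventions, and the numbers are illustrative rather than taken from the study.

```python
import numpy as np

def p_correct(theta, a, b, gamma):
    """Probability of a correct response under a random-effects testlet model:
    a'theta is the ability loading (simple structure: one nonzero entry),
    b the item difficulty, gamma the person-specific testlet effect."""
    return 1.0 / (1.0 + np.exp(-(a @ theta - b + gamma)))

theta = np.array([0.5, -0.2, 1.0])   # three ability dimensions
a = np.array([0.0, 0.0, 1.2])        # simple loading structure
gamma = np.random.default_rng(2).normal(0.0, np.sqrt(0.5))  # testlet variance 0.5
print(f"P(correct) = {p_correct(theta, a, b=0.3, gamma=gamma):.3f}")
```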
