Sample records for previously introduced model

  1. A Bayesian Nonparametric Approach to Test Equating

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  2. Performance of Renormalization Group Algebraic Turbulence Model on Boundary Layer Transition Simulation

    NASA Technical Reports Server (NTRS)

    Ahn, Kyung H.

    1994-01-01

    The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.

  3. Competing opinions and stubbornness: Connecting models to data.

    PubMed

    Burghardt, Keith; Rand, William; Girvan, Michelle

    2016-03-01

    We introduce a general contagionlike model for competing opinions that includes dynamic resistance to alternative opinions. We show that this model can describe candidate vote distributions, spatial vote correlations, and a slow approach to opinion consensus with sensible parameter values. These empirical properties of large group dynamics, previously understood using distinct models, may be different aspects of human behavior that can be captured by a more unified model, such as the one introduced in this paper.
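
    The abstract above only sketches the mechanism, so the following is a minimal, illustrative Python sketch of a contagion-like two-opinion model with dynamic resistance ("stubbornness"). It is not the authors' actual formulation: the update rule, the resistance increment and reset, the well-mixed population, and all parameter values are assumptions made here for illustration only.

      import random

      def simulate(n_agents=500, steps=20000, seed=0):
          """Toy contagion-like dynamics: an agent adopts a randomly met
          neighbour's opinion with probability 1/(1 + resistance); each time
          it resists, its resistance (stubbornness) grows."""
          rng = random.Random(seed)
          opinion = [rng.randint(0, 1) for _ in range(n_agents)]  # two competing opinions
          resistance = [0.0] * n_agents                           # dynamic resistance per agent
          for _ in range(steps):
              i, j = rng.randrange(n_agents), rng.randrange(n_agents)
              if i == j or opinion[i] == opinion[j]:
                  continue
              if rng.random() < 1.0 / (1.0 + resistance[i]):
                  opinion[i] = opinion[j]   # persuaded: adopt the alternative opinion
                  resistance[i] = 0.0       # reset after switching
              else:
                  resistance[i] += 1.0      # repeated exposure hardens the agent
          return sum(opinion) / n_agents    # fraction holding opinion 1

      if __name__ == "__main__":
          print(simulate())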

  4. Adding small differences can increase similarity and choice.

    PubMed

    Kim, Jongmin; Novemsky, Nathan; Dhar, Ravi

    2013-02-01

    Similarity plays a critical role in many judgments and choices. Traditional models of similarity posit that increasing the number of differences between objects cannot increase judged similarity between them. In contrast to these previous models, the present research shows that introducing a small difference in an attribute that previously was identical across objects can increase perceived similarity between those objects. We propose an explanation based on the idea that small differences draw more attention than identical attributes do and that people's perceptions of similarity involve averaging attributes that are salient. We provide evidence that introducing small differences between objects increases perceived similarity. We also show that an increase in similarity decreases the difficulty of choice and the likelihood that a choice will be deferred.

  5. Examining a scaled dynamical system of telomere shortening

    NASA Astrophysics Data System (ADS)

    Cyrenne, Benoit M.; Gooding, Robert J.

    2015-02-01

    A model of telomere dynamics is proposed and examined. Our model, which extends a previously introduced model that incorporates stem cells as progenitors of new cells, imposes the Hayflick limit, the maximum number of cell divisions that are possible. This new model leads to cell populations for which the average telomere length is not necessarily a monotonically decreasing function of time, in contrast to previously published models. We provide a phase diagram indicating where such results would be expected via the introduction of scaled populations, rate constants and time. The application of this model to available leukocyte baboon data is discussed.
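
    As rough intuition for why average telomere length need not behave monotonically when stem cells keep injecting fresh cells, here is a hedged toy simulation in Python. It is not the scaled dynamical system of the paper: the division, death, and stem-influx rates and the Hayflick limit value are illustrative assumptions.

      import random

      def telomere_population(hayflick=10, stem_influx=5, div_prob=0.1,
                              death_prob=0.1, steps=200, seed=1):
          """Toy cell population: stem cells inject new cells with a full division
          'budget'; each division uses up one allowed division; cells that reach
          the Hayflick limit stop dividing. Returns the mean remaining divisions
          over time, which can plateau or recover as stem-cell influx competes
          with ongoing division."""
          rng = random.Random(seed)
          cells = []                       # remaining allowed divisions per cell
          history = []
          for _ in range(steps):
              cells.extend([hayflick] * stem_influx)        # stem-cell influx
              survivors = []
              for c in cells:
                  if rng.random() < death_prob:
                      continue                              # cell dies
                  if c > 0 and rng.random() < div_prob:
                      survivors.extend([c - 1, c - 1])      # division shortens telomeres
                  else:
                      survivors.append(c)
              cells = survivors
              history.append(sum(cells) / len(cells) if cells else 0.0)
          return history

      if __name__ == "__main__":
          h = telomere_population()
          print(round(h[0], 2), round(h[-1], 2))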

  6. IDENTIFICATION OF REGIME SHIFTS IN TIME SERIES USING NEIGHBORHOOD STATISTICS

    EPA Science Inventory

    The identification of alternative dynamic regimes in ecological systems requires several lines of evidence. Previous work on time series analysis of dynamic regimes includes mainly model-fitting methods. We introduce two methods that do not use models. These approaches use state-...

  7. A Trio of Brownian Donkeys

    NASA Astrophysics Data System (ADS)

    van den Broeck, C.; Cleuren, B.; Kawai, R.; Kambon, M.

    A previously introduced model (B. Cleuren and C. Van den Broeck, Europhys. Lett. 54, 1 (2001)) is studied numerically. Pure negative mobility is found for the minimum number of three interacting walkers.

  8. Stochastic Processes as True-Score Models for Highly Speeded Mental Tests.

    ERIC Educational Resources Information Center

    Moore, William E.

    The previous theoretical development of the Poisson process as a strong model for the true-score theory of mental tests is discussed, and additional theoretical properties of the model from the standpoint of individual examinees are developed. The paper introduces the Erlang process as a family of test theory models and shows in the context of…

  9. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    ERIC Educational Resources Information Center

    Weiss, Brandi A.; Dardick, William

    2016-01-01

    This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify…
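
    For readers unfamiliar with entropy as a fuzziness measure, the sketch below computes one common normalized-entropy summary from the fitted probabilities of a logistic regression on synthetic data. The exact normalization and decomposition proposed in the article may differ; the data, model, and function names here are illustrative.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def entropy_fuzziness(p, eps=1e-12):
          """Average binary Shannon entropy of fitted probabilities, scaled to [0, 1]:
          0 = perfectly crisp classification, 1 = maximally fuzzy (all p = 0.5)."""
          p = np.clip(p, eps, 1 - eps)
          return float((-(p * np.log2(p) + (1 - p) * np.log2(1 - p))).mean())

      # Toy data: a reasonably separable predictor gives a moderately crisp fit.
      rng = np.random.default_rng(0)
      x = rng.normal(size=(500, 1))
      y = (x[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

      model = LogisticRegression().fit(x, y)
      p_hat = model.predict_proba(x)[:, 1]
      print("entropy-based fuzziness:", round(entropy_fuzziness(p_hat), 3))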

  10. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    PubMed

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.
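
    The generative idea behind popularity and productivity can be illustrated with a short sketch: each node gets a "productivity" weight for producing links and a "popularity" weight for receiving them, multiplied by a between-community block probability. This is only a toy generative analogue in the spirit of the description above; the actual PPSB likelihood, its combination with the content model, and the EM updates are not reproduced, and all names and parameter values are assumptions.

      import numpy as np

      def sample_ppsb_like_network(communities, productivity, popularity, block_prob, seed=0):
          """Draw a directed adjacency matrix in which the chance of a link i -> j is
          productivity[i] * popularity[j] * block_prob[c_i, c_j], capped at 1.
          This mimics the role of the two node-level variables described above."""
          rng = np.random.default_rng(seed)
          n = len(communities)
          p = (productivity[:, None] * popularity[None, :]
               * block_prob[np.ix_(communities, communities)])
          p = np.clip(p, 0.0, 1.0)
          np.fill_diagonal(p, 0.0)          # no self-links
          return (rng.random((n, n)) < p).astype(int)

      # Toy example: two communities with a bipartite-like (cross-linking) block structure.
      c = np.array([0] * 50 + [1] * 50)
      prod = np.random.default_rng(1).gamma(2.0, 0.2, size=100)   # heterogeneous out-degrees
      pop = np.random.default_rng(2).gamma(2.0, 0.2, size=100)    # heterogeneous in-degrees
      B = np.array([[0.05, 0.60],
                    [0.60, 0.05]])                                # links mostly across groups
      A = sample_ppsb_like_network(c, prod, pop, B)
      print("links:", A.sum(), "mean out-degree:", A.sum(axis=1).mean())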

  11. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection

    NASA Astrophysics Data System (ADS)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  12. Approximating a retarded-advanced differential equation that models human phonation

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena

    2017-11-01

    In [1, 2, 3] we obtained the numerical solution of a linear mixed type functional differential equation (MTFDE), introduced initially in [4], for both the autonomous and non-autonomous cases, using collocation, least squares and finite element methods with B-spline basis sets. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed type equation with symmetric delay and advance which models human phonation. The preliminary results are promising: we obtain an accuracy comparable with the previous results.

  13. A Comprehensive Model for Developing and Evaluating Study Abroad Programs in Counselor Education

    ERIC Educational Resources Information Center

    Santos, Syntia Dinora

    2014-01-01

    This paper introduces a model to guide the process of designing and evaluating study abroad programs, addressing particular stages and influential factors. The main purpose of the model is to serve as a basic structure for those who want to develop their own program or evaluate previous cultural immersion experiences. The model is based on the…

  14. Techtalk: An Online Framework for Developmental Literacy

    ERIC Educational Resources Information Center

    Burgess, Melissa; Caverly, David C.

    2010-01-01

    In a previous Techtalk column, Peterson and Caverly (2005) introduced the Community of Inquiry (CoI) model (Garrison, Anderson, & Archer, 2001) as a guide for online learning. The CoI model has maintained longevity and applicability to a variety of both synchronous and asynchronous technologies (Ice, Curtis, Phillips, & Wells, 2007). In this…

  15. Racial Prejudice and Locational Equilibrium in an Urban Area.

    ERIC Educational Resources Information Center

    Yinger, John

    Racial prejudice is said to influence strongly the locational decisions of households in urban areas. This paper introduces racial prejudice into a model of an urban area and derives several results about residential location. A previously developed long-run model of an urban area adds a locational dimension to a model of the housing market under…

  16. How Long is my Toilet Roll?--A Simple Exercise in Mathematical Modelling

    ERIC Educational Resources Information Center

    Johnston, Peter R.

    2013-01-01

    The simple question of how much paper is left on my toilet roll is studied from a mathematical modelling perspective. As is typical with applied mathematics, models of increasing complexity are introduced and solved. Solutions produced at each step are compared with the solution from the previous step. This process exposes students to the typical…
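
    The simplest model in such a progression treats the wound paper as filling an annulus, so length * thickness ~ pi * (R^2 - r^2). A hedged sketch with illustrative dimensions (not taken from the article):

      import math

      def roll_length(outer_radius_mm, core_radius_mm, thickness_mm):
          """Simplest model: the wound paper fills an annulus, so
          length * thickness = pi * (R^2 - r^2)."""
          return math.pi * (outer_radius_mm**2 - core_radius_mm**2) / thickness_mm

      # Illustrative dimensions (not from the article): R = 55 mm, r = 20 mm, t = 0.1 mm.
      print(f"approximate paper length: {roll_length(55, 20, 0.1) / 1000:.1f} m")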

  17. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Riley, Cameron

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.
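
    For context on the transposition step that the abstract says introduces error when measured POA data are not used directly, the sketch below applies the classic isotropic-sky transposition (beam plus isotropic sky diffuse plus ground reflection). This is not necessarily the transposition model SAM applies, and the input values are illustrative.

      import math

      def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
          """Isotropic-sky transposition: beam on the tilted plane plus isotropic
          sky diffuse plus ground-reflected irradiance (all in W/m^2)."""
          beam = dni * max(math.cos(math.radians(aoi_deg)), 0.0)
          sky_diffuse = dhi * (1 + math.cos(math.radians(tilt_deg))) / 2
          ground = ghi * albedo * (1 - math.cos(math.radians(tilt_deg))) / 2
          return beam + sky_diffuse + ground

      # Illustrative clear-sky values: DNI=800, DHI=120, GHI=750 W/m^2, AOI=25 deg, tilt=30 deg.
      print(round(poa_isotropic(800, 120, 750, 25, 30), 1), "W/m^2")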

  18. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Hobbs, William

    2016-11-21

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  19. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Hobbs, William

    2016-06-05

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  20. Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES

    NASA Astrophysics Data System (ADS)

    Aniszewski, Wojciech

    2016-12-01

    In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial, conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ, in both simple and complex flows.

  1. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.

  2. Approximate solution to the Callan-Giddings-Harvey-Strominger field equations for two-dimensional evaporating black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ori, Amos

    2010-11-15

    Callan, Giddings, Harvey, and Strominger (CGHS) previously introduced a two-dimensional semiclassical model of gravity coupled to a dilaton and to matter fields. Their model yields a system of field equations which may describe the formation of a black hole in gravitational collapse as well as its subsequent evaporation. Here we present an approximate analytical solution to the semiclassical CGHS field equations. This solution is constructed using the recently introduced formalism of flux-conserving hyperbolic systems. We also explore the asymptotic behavior at the horizon of the evaporating black hole.

  3. Design, development, and application of LANDIS-II, a spatial landscape simulation model with flexible temporal and spatial resolution

    Treesearch

    Robert M. Scheller; James B. Domingo; Brian R. Sturtevant; Jeremy S. Williams; Arnold Rudy; Eric J. Gustafson; David J. Mladenoff

    2007-01-01

    We introduce LANDIS-II, a landscape model designed to simulate forest succession and disturbances. LANDIS-II builds upon and preserves the functionality of previous LANDIS forest landscape simulation models. LANDIS-II is distinguished by the inclusion of variable time steps for different ecological processes; our use of a rigorous development and testing process used...

  4. Lagrangian derivation of the two coupled field equations in the Janus cosmological model

    NASA Astrophysics Data System (ADS)

    Petit, Jean-Pierre; D'Agostini, G.

    2015-05-01

    After a review citing the results obtained in previous articles introducing the Janus Cosmological Model, which consists of a set of two coupled field equations where one metric refers to the positive masses and the other to the negative masses, and which explains the observed cosmic acceleration and the nature of dark energy, we present the Lagrangian derivation of the model.

  5. Neural network modelling of thermal stratification in a solar DHW storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geczy-Vig, P.; Farkas, I.

    2010-05-15

    In this study an artificial neural network (ANN) model is introduced for modelling the layer temperatures in a storage tank of a solar thermal system. The model is based on the measured data of a domestic hot water system. The temperature distribution in the storage tank, divided into 8 equal parts in the vertical direction, was calculated every 5 min using the 5-min average data of solar radiation, ambient temperature, mass flow rate of the collector loop, load and the temperatures of the layers at previous time steps. The introduced ANN model consists of two parts describing the load periods and the periods between the loads. The identified model gives acceptable results inside the training interval, as the average deviation was 0.22 °C during the training and 0.24 °C during the validation. (author)
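
    A hedged sketch of the kind of feed-forward ANN regression described above: inputs are solar radiation, ambient temperature, collector-loop flow, load, and the previous layer temperatures, and the outputs are the eight current layer temperatures. The data here are synthetic, and the architecture, library choice, and train/validation split are illustrative assumptions, not the authors' setup.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n_samples, n_layers = 2000, 8

      # Synthetic stand-in for the measured 5-min data described above:
      # [solar radiation, ambient temperature, collector flow, load, 8 previous layer temps]
      X = np.hstack([rng.uniform(0, 1000, (n_samples, 1)),   # W/m^2
                     rng.uniform(-5, 35, (n_samples, 1)),    # deg C
                     rng.uniform(0, 0.1, (n_samples, 1)),    # kg/s
                     rng.uniform(0, 1, (n_samples, 1)),      # load fraction
                     rng.uniform(20, 80, (n_samples, n_layers))])
      # Fake next-step layer temperatures: previous temps plus a radiation-driven gain
      # minus a load-driven loss (purely synthetic, for demonstration only).
      y = X[:, 4:] + 0.01 * X[:, [0]] - 2.0 * X[:, [3]]

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                         random_state=0))
      model.fit(X[:1500], y[:1500])                # "training interval"
      pred = model.predict(X[1500:])               # "validation"
      print("mean absolute deviation [C]:", round(float(np.abs(pred - y[1500:]).mean()), 2))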

  6. Identifying Differential Item Functioning of Rating Scale Items with the Rasch Model: An Introduction and an Application

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Wolfe, Edward W.; Feltz, Deborah L.; Penfield, Randall D.

    2006-01-01

    This study (a) provided a conceptual introduction to differential item functioning (DIF), (b) introduced the multifaceted Rasch rating scale model (MRSM) and an associated statistical procedure for identifying DIF in rating scale items, and (c) applied this procedure to previously collected data from American coaches who responded to the coaching…

  7. Digital Modeling in Design Foundation Coursework: An Exploratory Study of the Effectiveness of Conceptual Design Software

    ERIC Educational Resources Information Center

    Guidera, Stan; MacPherson, D. Scot

    2008-01-01

    This paper presents the results of a study that was conducted to identify and document student perceptions of the effectiveness of computer modeling software introduced in a design foundations course that had previously utilized only conventional manually-produced representation techniques. Rather than attempt to utilize a production-oriented CAD…

  8. The National Health Insurance system as one type of new typology: the case of South Korea and Taiwan.

    PubMed

    Lee, Sang-Yi; Chun, Chang-Bae; Lee, Yong-Gab; Seo, Nam Kyu

    2008-01-01

    A typology is a useful way of understanding the key frameworks of a health care system. Based on many different criteria of health care systems, several typologies have been introduced and applied to individual countries' health care systems. Among those, the National Health Service (NHS), Social Health Insurance (SHI), and Private Health Insurance (PHI) are the three most well-known types of health care system in the 3-model typology. Differentiated from the existing 3-model typology of health care systems, South Korea and Taiwan implemented a new concept of National Health Insurance (NHI) system. Since none of the previous typologies can be applied to these countries' NHI to explain its unique features in a proper manner, a new typology needs to be introduced. Therefore, this paper introduces a new typology with two crucial variables: 'state administration for health care financing' and 'main body for health care provision'. With these two variables, the world's national health care systems can be divided into four types of model: NHS, SHI, NHI, and PHI (Liberal model). This research outlines the rationale for developing the new typology and introduces the main features and frameworks of the NHI that South Korea and Taiwan implemented in the 1990s.

  9. An Electromyographic-driven Musculoskeletal Torque Model using Neuro-Fuzzy System Identification: A Case Study

    PubMed Central

    Jafari, Zohreh; Edrisi, Mehdi; Marateb, Hamid Reza

    2014-01-01

    The purpose of this study was to estimate the torque from high-density surface electromyography signals of the biceps brachii, brachioradialis, and the medial and lateral heads of the triceps brachii muscles during moderate-to-high isometric elbow flexion-extension. The elbow torque was estimated in the two following steps: First, surface electromyography (EMG) amplitudes were estimated using principal component analysis, and then a fuzzy model was proposed to illustrate the relationship between the EMG amplitudes and the measured torque signal. A neuro-fuzzy method, with which the optimum number of rules could be estimated, was used to identify the model with suitable complexity. The proposed neuro-fuzzy model introduced clinical interpretability, in contrast to the previous linear and nonlinear black-box system identification models. It also reduced the estimation error compared with that of the most recent and accurate nonlinear dynamic model introduced in the literature. The optimum number of rules for all trials was 4 ± 1, which might be related to motor control strategies, and the % variance accounted for criterion was 96.40 ± 3.38, which showed considerable improvement compared with the previous methods. The proposed method is thus a promising new tool for EMG-torque modeling in clinical applications. PMID:25426427

  10. An Interval Type-2 Fuzzy Multiple Echelon Supply Chain Model

    NASA Astrophysics Data System (ADS)

    Miller, Simon; John, Robert

    Planning resources for a supply chain is a major factor determining its success or failure. In this paper we build on previous work introducing an Interval Type-2 Fuzzy Logic model of a multiple echelon supply chain. It is believed that the additional degree of uncertainty provided by Interval Type-2 Fuzzy Logic will allow for better representation of the uncertainty and vagueness present in resource planning models. First, the subject of Supply Chain Management is introduced, then some background is given on related work using Type-1 Fuzzy Logic. A description of the Interval Type-2 Fuzzy model is given, and a test scenario detailed. A Genetic Algorithm uses the model to search for a near-optimal plan for the scenario. A discussion of the results follows, along with conclusions and details of intended further work.

  11. Notes from 1999 on computational algorithm of the Local Wave-Vector (LWV) model for the dynamical evolution of the second-rank velocity correlation tensor starting from the mean-flow-coupled Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemach, Charles; Kurien, Susan

    These notes present an account of the Local Wave Vector (LWV) model of a turbulent flow defined throughout physical space. The previously-developed Local Wave Number (LWN) model is taken as a point of departure. Some general properties of turbulent fields and appropriate notation are given first. The LWV model is presently restricted to incompressible flows and the incompressibility assumption is introduced at an early point in the discussion. The assumption that the turbulence is homogeneous is also introduced early on. This assumption can be relaxed by generalizing the space diffusion terms of LWN, but the present discussion is focused on a modeling of homogeneous turbulence.

  12. A BRST formulation for the conic constrained particle

    NASA Astrophysics Data System (ADS)

    Barbosa, Gabriel D.; Thibes, Ronaldo

    2018-04-01

    We describe the gauge invariant BRST formulation of a particle constrained to move in a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian, the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space introducing fermionic ghost variables, exhibiting the BRST symmetry transformations and writing the Green’s function generating functional for the BRST quantized model.

  13. 78 FR 37701 - Airworthiness Directives; Pilatus Aircraft Ltd. Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-24

    ... Airworthiness Limitations Document (ALS), depending on the aeroplane model. These documents include the... of maintenance instructions and/or airworthiness limitations in accordance with Pilatus PC-6 ALS...-6 ALS (Number 02334) issue 3 to introduce a threshold for replacement of previously not listed Flap...

  14. On the Connection Between One-and Two-Equation Models of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, F. R.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    A formalism will be presented that allows the transformation of two-equation eddy viscosity turbulence models into one-equation models. The transformation is based on an assumption that is widely accepted over a large range of boundary layer flows and that has been shown to actually improve predictions when incorporated into two-equation models of turbulence. Based on that assumption, a new one-equation turbulence model will be derived. The new model will be tested in great detail against a previously introduced one-equation model and against its parent two-equation model.

  15. Loopless nontrapping invasion-percolation model for fracking.

    PubMed

    Norris, J Quinn; Turcotte, Donald L; Rundle, John B

    2014-02-01

    Recent developments in hydraulic fracturing (fracking) have enabled the recovery of large quantities of natural gas and oil from old, low-permeability shales. These developments include a change from low-volume, high-viscosity fluid injection to high-volume, low-viscosity injection. The injected fluid introduces distributed damage that provides fracture permeability for the extraction of the gas and oil. In order to model this process, we utilize a loopless nontrapping invasion percolation previously introduced to model optimal polymers in a strongly disordered medium and for determining minimum energy spanning trees on a lattice. We performed numerical simulations on a two-dimensional square lattice and find significant differences from other percolation models. Additionally, we find that the growing fracture network satisfies both Horton-Strahler and Tokunaga network statistics. As with other invasion percolation models, our model displays burst dynamics, in which the cluster extends rapidly into a connected region. We introduce an alternative definition of bursts to be a consecutive series of opened bonds whose strengths are all below a specified value. Using this definition of bursts, we find good agreement with a power-law frequency-area distribution. These results are generally consistent with the observed distribution of microseismicity observed during a high-volume frack.
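
    A minimal sketch of loopless invasion percolation on a square lattice, including the burst definition quoted above (a consecutive run of opened bonds whose strengths all fall below a threshold). It is not the authors' code; the lattice size, number of steps, and burst threshold are illustrative assumptions.

      import heapq
      import random

      def invade(size=100, n_steps=2000, burst_threshold=0.3, seed=0):
          """Loopless invasion percolation: grow a tree from a central seed by always
          opening the weakest bond on the growth front whose far site is still
          uninvaded (so no loop can close). A 'burst' is a consecutive run of opened
          bonds whose strengths all lie below `burst_threshold`."""
          rng = random.Random(seed)
          invaded = {(size // 2, size // 2)}
          frontier = []                               # heap of (bond strength, candidate site)
          bursts, current = [], 0

          def push_neighbours(x, y):
              for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                  if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in invaded:
                      heapq.heappush(frontier, (rng.random(), (nx, ny)))

          push_neighbours(size // 2, size // 2)
          for _ in range(n_steps):
              while frontier:
                  strength, site = heapq.heappop(frontier)
                  if site not in invaded:
                      break
              else:
                  break                               # nothing left to invade
              invaded.add(site)                       # open the weakest frontier bond
              push_neighbours(*site)
              if strength < burst_threshold:
                  current += 1                        # burst continues
              elif current:
                  bursts.append(current)              # burst ends
                  current = 0
          if current:
              bursts.append(current)
          return len(invaded), bursts

      if __name__ == "__main__":
          n_invaded, bursts = invade()
          largest = max(bursts) if bursts else 0
          print(n_invaded, "sites invaded;", len(bursts), "bursts, largest:", largest)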

  16. Adler-Kostant-Symes scheme for face and Calogero-Moser-Sutherland-type models

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter

    1998-07-01

    We give the construction of quantum Lax equations for IRF models and the difference version of the Calogero-Moser-Sutherland model introduced by Ruijsenaars. We solve the equations using factorization properties of the underlying face Hopf algebras/elliptic quantum groups. This construction is in the spirit of the Adler-Kostant-Symes method and generalizes our previous work to the case of face Hopf algebras/elliptic quantum groups with dynamical R matrices.

  17. A new model for yaw attitude of Global Positioning System satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Y. E.

    1995-01-01

    Proper modeling of the Global Positioning System (GPS) satellite yaw attitude is important in high-precision applications. A new model for the GPS satellite yaw attitude is introduced that constitutes a significant improvement over the previously available model in terms of efficiency, flexibility, and portability. The model is described in detail, and implementation issues, including the proper estimation strategy, are addressed. The performance of the new model is analyzed, and an error budget is presented. This is the first self-contained description of the GPS yaw attitude model.

  18. Increasing the credibility of regional climate simulations by introducing subgrid-scale cloud – radiation interactions

    EPA Science Inventory

    The radiation schemes in the Weather Research and Forecasting (WRF) model have previously not accounted for the presence of subgrid-scale cumulus clouds, thereby resulting in unattenuated shortwave radiation, which can lead to overly energetic convection and overpredicted surface...

  19. Direct handling of sharp interfacial energy for microstructural evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernández-Rivera, Efraín; Tikare, Veena; Noirot, Laurence

    In this study, we introduce a simplification to the previously demonstrated hybrid Potts–phase field (hPPF), which relates interfacial energies to microstructural sharp interfaces. The model defines interfacial energy by a Potts-like discrete interface approach of counting unlike neighbors, which we use to compute local curvature. The model is compared to the hPPF by studying interfacial characteristics and grain growth behavior. The models give virtually identical results, while the new model allows the simulator more direct control of interfacial energy.

  20. Direct handling of sharp interfacial energy for microstructural evolution

    DOE PAGES

    Hernández-Rivera, Efraín; Tikare, Veena; Noirot, Laurence; ...

    2014-08-24

    In this study, we introduce a simplification to the previously demonstrated hybrid Potts–phase field (hPPF), which relates interfacial energies to microstructural sharp interfaces. The model defines interfacial energy by a Potts-like discrete interface approach of counting unlike neighbors, which we use to compute local curvature. The model is compared to the hPPF by studying interfacial characteristics and grain growth behavior. The models give virtually identical results, while the new model allows the simulator more direct control of interfacial energy.

  1. Gadolinia depletion analysis by CASMO-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Y.; Saji, E.; Toba, A.

    1993-01-01

    CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) heterogeneous model for two-dimensional transport theory calculations; and (2) microregion depletion model for burnable absorbers, such as gadolinia. Of these aspects, the first has previously been benchmarked against measured data of critical experiments and Monte Carlo calculations, verifying the high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.

  2. The 2014 update to the National Seismic Hazard Model in California

    USGS Publications Warehouse

    Powers, Peter; Field, Edward H.

    2015-01-01

    The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  3. A bicycle safety index for evaluating urban street facilities.

    PubMed

    Asadi-Shekari, Zohreh; Moeinaddini, Mehdi; Zaly Shah, Muhammad

    2015-01-01

    The objectives of this research are to conceptualize the Bicycle Safety Index (BSI) that considers all parts of the street and to propose a universal guideline with microscale details. A point system method comparing existing safety facilities to a defined standard is proposed to estimate the BSI. Two streets in Singapore and Malaysia are chosen to examine this model. The majority of previous measurements for evaluating street conditions for cyclists cannot cover all parts of streets, including segments and intersections. Previous models also did not consider all safety indicators and cycling facilities at a microlevel in particular. This study introduces a new concept of a practical BSI to complement previous studies using its practical, easy-to-follow, point-system-based outputs. This practical model can be used in different urban settings to estimate the level of safety for cycling and suggest some improvements based on the standards.

  4. Stability issues of nonlocal gravity during primordial inflation

    NASA Astrophysics Data System (ADS)

    Belgacem, Enis; Cusin, Giulia; Foffa, Stefano; Maggiore, Michele; Mancarella, Michele

    2018-01-01

    We study the cosmological evolution of some nonlocal gravity models, when the initial conditions are set during a phase of primordial inflation. We examine in particular three models, the so-called RT, RR and Δ4 models, previously introduced by our group. We find that, during inflation, the RT model has a viable background evolution, but at the level of cosmological perturbations develops instabilities that make it nonviable. In contrast, the RR and Δ4 models have a viable evolution even when their initial conditions are set during a phase of primordial inflation.

  5. A multilayer approach for price dynamics in financial markets

    NASA Astrophysics Data System (ADS)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2017-02-01

    We introduce a new Self-Organized Criticality (SOC) model for simulating price evolution in an artificial financial market, based on a multilayer network of traders. The model also implements, in a quite realistic way with respect to previous studies, the order book dynamics, by considering two assets with variable fundamental prices. Fat tails in the probability distributions of normalized returns are observed, together with other features of real financial markets.

  6. Evolving bipartite authentication graph partitions

    DOE PAGES

    Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.

    2017-01-16

    As large scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users’ access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users’ access to network resources. This paper introduces a novel method of network partitioning superior to the current state-of-the-art which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Lastly, results are presented on a real world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.

  7. Evolving bipartite authentication graph partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Aaron Scott; Tauritz, Daniel Remy; Kent, Alexander D.

    As large scale enterprise computer networks become more ubiquitous, finding the appropriate balance between user convenience and user access control is an increasingly challenging proposition. Suboptimal partitioning of users’ access and available services contributes to the vulnerability of enterprise networks. Previous edge-cut partitioning methods unduly restrict users’ access to network resources. This paper introduces a novel method of network partitioning superior to the current state-of-the-art which minimizes user impact by providing alternate avenues for access that reduce vulnerability. Networks are modeled as bipartite authentication access graphs and a multi-objective evolutionary algorithm is used to simultaneously minimize the size of large connected components while minimizing overall restrictions on network users. Lastly, results are presented on a real world data set that demonstrate the effectiveness of the introduced method compared to previous naive methods.

  8. Genetic demographic networks: Mathematical model and applications.

    PubMed

    Kimmel, Marek; Wojdyła, Tomasz

    2016-10-01

    Recent improvement in the quality of genetic data obtained from extinct human populations and their ancestors encourages searching for answers to basic questions regarding human population history. The most common and successful are model-based approaches, in which genetic data are compared to the data obtained from the assumed demography model. Using such an approach, it is possible to either validate or adjust the assumed demography. Model fit to data can be obtained based on reverse-time coalescent simulations or forward-time simulations. In this paper we introduce a computational method based on a mathematical equation that allows obtaining joint distributions of pairs of individuals under a specified demography model, each of them characterized by a genetic variant at a chosen locus. The two individuals are randomly sampled from either the same or two different populations. The model assumes three types of demographic events (split, merge and migration). Populations evolve according to the time-continuous Moran model with drift and Markov-process mutation. This latter process is described by the Lyapunov-type equation introduced by O'Brien and generalized in our previous works. Application of this equation constitutes an original contribution. In the results section of the paper we present sample applications of our model to both simulated and literature-based demographies. Among others, we include a study of the Slavs-Balts-Finns genetic relationship, in which we model splits and migrations between the Balts and Slavs. We also include another example that involves the migration rates between farmers and hunter-gatherers, based on modern and ancient DNA samples. This latter process was previously studied using coalescent simulations. Our results are in general agreement with the previous method, which provides validation of our approach. Although our model is not an alternative to simulation methods in the practical sense, it provides an algorithm to compute pairwise distributions of alleles, in the case of haploid non-recombining loci such as mitochondrial and Y-chromosome loci in humans. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. A Framework for Analyzing the Collaborative Construction of Arguments and Its Interplay with Agency

    ERIC Educational Resources Information Center

    Mueller, Mary; Yankelewitz, Dina; Maher, Carolyn

    2012-01-01

    In this report, we offer a framework for analyzing the ways in which collaboration influences learners' building of mathematical arguments and thus promotes mathematical understanding. Building on a previous model used to analyze discursive practices of students engaged in mathematical problem solving, we introduce three types of collaboration and…

  10. The Influence of a Substellar Continent on the Climate of a Tidally Locked Exoplanet

    NASA Astrophysics Data System (ADS)

    Lewis, Neil T.; Lambert, F. Hugo; Boutle, Ian A.; Mayne, Nathan J.; Manners, James; Acreman, David M.

    2018-02-01

    Previous studies have demonstrated that continental carbon-silicate weathering is important to the continued habitability of a terrestrial planet. Despite this, few studies have considered the influence of land on the climate of a tidally locked planet. In this work we use the Met Office Unified Model, coupled to a land-surface model, to investigate the climate effects of a continent located at the substellar point. We choose to use the orbital and planetary parameters of Proxima Centauri B as a template, to allow comparison with the work of others. A region of the surface where T_s > 273.15 K is always retained, and previous conclusions on the habitability of Proxima Centauri B remain intact. We find that substellar land causes global cooling and increases day–night temperature contrasts by limiting heat redistribution. Furthermore, we find that substellar land is able to introduce a regime change in the atmospheric circulation. Specifically, when a continent offset to the east of the substellar point is introduced, we observe the formation of two mid-latitude counterrotating jets, and a substantially weakened equatorial superrotating jet.

  11. Nonlocal and nonlinear electrostatics of a dipolar Coulomb fluid.

    PubMed

    Buyukdagli, Sahin; Blossey, Ralf

    2014-07-16

    We study a model Coulomb fluid consisting of dipolar solvent molecules of finite extent which generalizes the point-like dipolar Poisson-Boltzmann model (DPB) previously introduced by Coalson and Duncan (1996 J. Phys. Chem. 100 2612) and Abrashkin et al (2007 Phys. Rev. Lett. 99 077801). We formulate a nonlocal Poisson-Boltzmann equation (NLPB) and study both linear and nonlinear dielectric response in this model for the case of a single plane geometry. Our results shed light on the relevance of nonlocal versus nonlinear effects in continuum models of material electrostatics.

  12. A stepwise approach for introducing numerical modeling in Environmental Engineering MSc unit: The impact of clear assessment criteria and detailed feedback

    NASA Astrophysics Data System (ADS)

    Rosolem, R.; Pritchard, J.

    2017-12-01

    An important aspect for the new generation of hydrologists and water resources managers is the understanding of hydrological processes through the application of numerical environmental models. Despite its importance, teaching numerical modeling subjects to young students in our MSc Water and Environment Management programme has been difficult, for instance due to the wide range of student backgrounds and little or poor prior contact with numerical modeling tools. In previous years, this numerical skills concept has been introduced as a project assignment in our Terrestrial Hydrometeorology unit. However, previous efforts have shown non-optimal engagement by students, often with signs of lack of interest or anxiety. Given our initial experience with this unit, we decided to make substantial changes to the coursework format with the aim of introducing a more efficient learning environment for the students. The proposed changes include: (1) a clear presentation and discussion of the assessment criteria at the beginning of the unit, (2) a stepwise approach in which students use our learning environment to acquire knowledge of individual components of the model step by step, and (3) access to timely and detailed feedback allowing particular steps to be retraced or retested. In order to understand the overall impact on assessment and feedback, we carried out two surveys at the beginning and end of the module. Our results indicate a positive impact on the student learning experience, as the students have clearly benefited from the early discussion of the assignment criteria and appear to have correctly identified the skills and knowledge required to carry out the assignment. In addition, we have observed a substantial increase in the quality of the reports. Our results support the view that student engagement has increased since the changes to the format of the coursework were introduced. Interestingly, we also observed a positive impact of the assignment on the final exam marks, even for students who did not perform particularly well in the coursework. This indicates that, despite not reaching ideal marks, students were able to use this new learning environment to acquire knowledge of key concepts needed for their final exam.

  13. Mathematical models for predicting human mobility in the context of infectious disease spread: introducing the impedance model.

    PubMed

    Sallah, Kankoé; Giorgi, Roch; Bengtsson, Linus; Lu, Xin; Wetter, Erik; Adrien, Paul; Rebaudet, Stanislas; Piarroux, Renaud; Gaudart, Jean

    2017-11-22

    Mathematical models of human mobility have demonstrated a great potential for infectious disease epidemiology in contexts of data scarcity. While the commonly used gravity model involves parameter tuning and is thus difficult to implement without reference data, the more recent radiation model based on population densities is parameter-free, but biased. In this study we introduce the new impedance model, by analogy with electricity. Previous research has compared models on the basis of a few specific available spatial patterns. In this study, we use a systematic simulation-based approach to assess the performances. Five hundred spatial patterns were generated using various area sizes and location coordinates. Model performances were evaluated based on these patterns. For simulated data, comparison measures were average root mean square error (aRMSE) and bias criteria. Modeling of the 2010 Haiti cholera epidemic with a basic susceptible-infected-recovered (SIR) framework allowed an empirical evaluation through assessing the goodness-of-fit of the observed epidemic curve. The new, parameter-free impedance model outperformed previous models on simulated data according to average aRMSE and bias criteria. The impedance model achieved better performances with heterogeneous population densities and small destination populations. As a proof of concept, the basic compartmental SIR framework was used to confirm the results obtained with the impedance model in predicting the spread of cholera in Haiti in 2010. The proposed new impedance model provides accurate estimations of human mobility, especially when the population distribution is highly heterogeneous. This model can therefore help to achieve more accurate predictions of disease spread in the context of an epidemic.
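
    For reference, the two baseline models the abstract compares against have simple closed forms: the gravity model needs tuned parameters, while the radiation model (Simini et al. 2012) is parameter-free. The sketch below evaluates both on a toy set of locations; the impedance model itself is not reproduced here, and all inputs, constants, and names are illustrative.

      import numpy as np

      def gravity_flow(pop, dist, beta=2.0, k=1e-4):
          """Gravity model: flow_ij ~ k * m_i * m_j / d_ij**beta (k and beta need tuning)."""
          with np.errstate(divide="ignore"):
              f = k * np.outer(pop, pop) / dist**beta
          np.fill_diagonal(f, 0.0)
          return f

      def radiation_flow(pop, dist, outflow):
          """Parameter-free radiation model (Simini et al. 2012):
          T_ij = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
          where s_ij is the total population closer to i than j, excluding i and j."""
          n = len(pop)
          T = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  s = pop[dist[i] < dist[i, j]].sum() - pop[i]
                  T[i, j] = outflow[i] * pop[i] * pop[j] / ((pop[i] + s) * (pop[i] + pop[j] + s))
          return T

      # Toy example: four locations on a line with heterogeneous populations.
      pop = np.array([1000.0, 500.0, 2000.0, 300.0])
      x = np.array([0.0, 10.0, 25.0, 40.0])
      dist = np.abs(x[:, None] - x[None, :])
      print(gravity_flow(pop, dist).round(1))
      print(radiation_flow(pop, dist, outflow=0.1 * pop).round(1))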

  14. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
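
    For intuition about what a biotracer mixing model estimates, here is a hedged one-tracer, two-source sketch that ignores error structure entirely; the paper's unified Bayesian error parameterization and consumption-rate estimate are not reproduced, and the isotope values are illustrative.

      def two_source_mixing(delta_mix, delta_a, delta_b):
          """Solve delta_mix = p * delta_a + (1 - p) * delta_b for the proportion p
          of source A in the consumer's diet (single tracer, no error structure)."""
          return (delta_mix - delta_b) / (delta_a - delta_b)

      # Illustrative d13C values (per mil): sources at -26 and -12, consumer tissue at -18.
      p = two_source_mixing(-18.0, -26.0, -12.0)
      print(f"estimated diet proportion of source A: {p:.2f}")   # about 0.43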

  15. Evaluation of an instructional model to teach clinically relevant medicinal chemistry in a campus and a distance pathway.

    PubMed

    Alsharif, Naser Z; Galt, Kimberly A

    2008-04-15

    To evaluate an instructional model for teaching clinically relevant medicinal chemistry. An instructional model that uses Bloom's cognitive and Krathwohl's affective taxonomy, published and tested concepts in teaching medicinal chemistry, and active learning strategies, was introduced in the medicinal chemistry courses for second-professional year (P2) doctor of pharmacy (PharmD) students (campus and distance) in the 2005-2006 academic year. Student learning and the overall effectiveness of the instructional model were assessed. Student performance after introducing the instructional model was compared to that in prior years. Student performance on course examinations improved compared to previous years. Students expressed overall enthusiasm about the course and better understood the value of medicinal chemistry to clinical practice. The explicit integration of the cognitive and affective learning objectives improved student performance, student ability to apply medicinal chemistry to clinical practice, and student attitude towards the discipline. Testing this instructional model provided validation to this theoretical framework. The model is effective for both our campus and distance-students. This instructional model may also have broad-based applications to other science courses.

  16. Modal kinematics for multisection continuum arms.

    PubMed

    Godage, Isuru S; Medrano-Cerda, Gustavo A; Branson, David T; Guglielmino, Emanuele; Caldwell, Darwin G

    2015-05-13

    This paper presents a novel spatial kinematic model for multisection continuum arms based on mode shape functions (MSFs). Modal methods have been used in many disciplines from finite element methods to structural analysis to approximate complex and nonlinear parametric variations with simple mathematical functions. Given certain constraints and required accuracy, this helps to simplify complex phenomena with numerically efficient implementations leading to fast computations. A successful application of the modal approximation techniques to develop a new modal kinematic model for general variable-length multisection continuum arms is discussed. The proposed method solves the limitations associated with previous models and introduces a new approach for readily deriving exact, singularity-free and unique MSFs that simplifies the approach and avoids mode switching. The model is able to simulate spatial bending as well as straight arm motions (i.e., pure elongation/contraction), and introduces inverse position and orientation kinematics for multisection continuum arms. A kinematic decoupling feature, splitting position and orientation inverse kinematics, is introduced. This type of decoupling has not been presented for these types of robotic arms before. The model also carefully accounts for physical constraints in the joint space to provide enhanced insight into practical mechanics and impose actuator mechanical limitations onto the kinematics, thus generating fully realizable results. The proposed method is easily applicable to a broad spectrum of continuum arm designs.

  17. Bubbles and denaturation in DNA

    NASA Astrophysics Data System (ADS)

    van Erp, T. S.; Cuesta-López, S.; Peyrard, M.

    2006-08-01

    The local opening of DNA is an intriguing phenomenon from a statistical-physics point of view, but is also essential for its biological function. For instance, the transcription and replication of our genetic code cannot take place without the unwinding of the DNA double helix. Although these biological processes are driven by proteins, there might well be a relation between these biological openings and the spontaneous bubble formation due to thermal fluctuations. Mesoscopic models, like the Peyrard-Bishop-Dauxois (PBD) model, have fairly accurately reproduced some experimental denaturation curves and the sharp phase transition in the thermodynamic limit. It is, hence, tempting to see whether these models could be used to predict the biological activity of DNA. In a previous study, we introduced a method that allows us to obtain very accurate results on this subject, which showed that some previous claims in this direction, based on molecular-dynamics studies, were premature. This could either imply that the present PBD model should be improved or that biological activity can only be predicted in a more complex framework that involves interactions with proteins and superhelical stresses. In this article, we give a detailed description of the statistical method introduced before. Moreover, for several DNA sequences, we give a thorough analysis of the bubble statistics as a function of position and bubble size and the so-called l-denaturation curves that can be measured experimentally. These show that some important experimental observations are missing in the present model. We discuss how the present model could be improved.
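
    For concreteness, the PBD model referenced above assigns each base pair an opening coordinate y_n with a Morse on-site potential and an anharmonic stacking coupling. The sketch below evaluates that standard energy for a given opening profile; the parameter values are typical of the literature but are used here purely for illustration and are not taken from this article.

      import math

      def pbd_energy(y, D=0.04, a=4.45, K=0.06, rho=1.0, delta=0.35):
          """Peyrard-Bishop-Dauxois energy (eV) of an opening profile y (Angstrom):
          on-site Morse term D*(exp(-a*y_n) - 1)**2 plus anharmonic stacking
          (K/2)*(1 + rho*exp(-delta*(y_n + y_{n-1})))*(y_n - y_{n-1})**2."""
          morse = sum(D * (math.exp(-a * yn) - 1.0) ** 2 for yn in y)
          stacking = sum(0.5 * K * (1.0 + rho * math.exp(-delta * (y[n] + y[n - 1])))
                         * (y[n] - y[n - 1]) ** 2 for n in range(1, len(y)))
          return morse + stacking

      # A small "bubble": five open base pairs in an otherwise closed 15-bp stretch.
      profile = [0.0] * 5 + [3.0] * 5 + [0.0] * 5
      print(f"{pbd_energy(profile):.3f} eV")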

  18. An optimal pole-matching observer design for estimating tyre-road friction force

    NASA Astrophysics Data System (ADS)

    Faraji, Mohammad; Johari Majd, Vahid; Saghafi, Behrooz; Sojoodi, Mahdi

    2010-10-01

    In this paper, considering the dynamical model of tyre-road contacts, we design a nonlinear observer for the on-line estimation of tyre-road friction force using the average lumped LuGre model without any simplification. The design is the extension of a previously offered observer to allow a much more realistic estimation by considering the effect of the rolling resistance and a term related to the relative velocity in the observer. Our aim is not to introduce a new friction model, but to present a more accurate nonlinear observer for the assumed model. We derive linear matrix equality conditions to obtain an observer gain with minimum pole mismatch for the desired observer error dynamic system. We prove the convergence of the observer for the non-simplified model. Finally, we compare the performance of the proposed observer with that of the previously mentioned nonlinear observer, which shows significant improvement in the accuracy of estimation.
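
    For context, a commonly cited form of the lumped LuGre friction model (generic literature form; the paper's averaged version, rolling-resistance term, and relative-velocity term are refinements not shown here) is:

      \dot{z} = v_r - \frac{\sigma_0 |v_r|}{g(v_r)}\, z, \qquad
      g(v_r) = \mu_c + (\mu_s - \mu_c)\, e^{-|v_r / v_s|^{\gamma}}, \qquad
      F = \left( \sigma_0 z + \sigma_1 \dot{z} + \sigma_2 v_r \right) F_n,

    where z is the internal friction state, v_r the relative (slip) velocity, F_n the normal force, and γ a shape parameter.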

  19. Physics-based deformable organisms for medical image analysis

    NASA Astrophysics Data System (ADS)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  20. Material point method of modelling and simulation of reacting flow of oxygen

    NASA Astrophysics Data System (ADS)

    Mason, Matthew; Chen, Kuan; Hu, Patrick G.

    2014-07-01

    Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.

  1. A Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard; Attele, Rohan; Koshak, William

    2011-01-01

    A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Grobner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Grobner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., such as the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
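
    A minimal sketch of the moment-equation idea, assuming a two-component exponential mixture parameterized by the ground-flash fraction alpha and the component means mu_g and mu_c (this parameterization and all names are illustrative, not taken from the paper), using SymPy to compute a lexicographic Groebner basis of the first three moment equations:

      # Sketch: Groebner basis of moment equations for a two-component
      # exponential mixture (illustrative parameterization only).
      from sympy import symbols, groebner, expand

      alpha, mu_g, mu_c, m1, m2, m3 = symbols('alpha mu_g mu_c m1 m2 m3')

      # Raw moments of f(x) = alpha/mu_g*exp(-x/mu_g) + (1-alpha)/mu_c*exp(-x/mu_c)
      eqs = [
          alpha*mu_g + (1 - alpha)*mu_c - m1,             # E[X]
          2*(alpha*mu_g**2 + (1 - alpha)*mu_c**2) - m2,   # E[X^2]
          6*(alpha*mu_g**3 + (1 - alpha)*mu_c**3) - m3,   # E[X^3]
      ]

      # Lexicographic order eliminates alpha first, leaving polynomial relations
      # in mu_g, mu_c and the observed moments m1, m2, m3.
      G = groebner([expand(e) for e in eqs], alpha, mu_g, mu_c, order='lex')
      for g in G:
          print(g)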

  2. Recombinant Newcastle disease virus expressing IL15 demonstrates promising antitumor efficiency in melanoma model

    USDA-ARS?s Scientific Manuscript database

    Recombinant Newcastle Disease Virus (rNDV) has shown oncolytic therapeutic effect in preclinical studies. Previous data indicate that rNDV carrying IL2 has shown promise in cancer therapy. Due to the significant side effects of IL2, IL15 has been introduced into cancer therapy. A number of studies h...

  3. Reconceptualising Pre-Service Teacher Education: The Applicability of a Cognitive Apprenticeship Model.

    ERIC Educational Resources Information Center

    Kane, Ruth

    This paper introduces the design for a study to investigate application of a cognitive apprenticeship approach to preservice teacher education. The research will be informed by and build upon the findings of a previous Master of Education dissertation. In particular, the study seeks to investigate answers to the following research question: To…

  4. Transition probabilities for non self-adjoint Hamiltonians in infinite dimensional Hilbert spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagarello, F., E-mail: fabio.bagarello@unipa.it

    In a recent paper we have introduced several possible inequivalent descriptions of the dynamics and of the transition probabilities of a quantum system when its Hamiltonian is not self-adjoint. Our analysis was carried out in finite dimensional Hilbert spaces. This is useful, but quite restrictive since many physically relevant quantum systems live in infinite dimensional Hilbert spaces. In this paper we consider this situation, and we discuss some applications to well known models, introduced in the literature in recent years: the extended harmonic oscillator, the Swanson model and a generalized version of the Landau levels Hamiltonian. Not surprisingly we will find new interesting features not previously found in finite dimensional Hilbert spaces, useful for a deeper comprehension of this kind of physical systems.

  5. Class dependency of fuzzy relational database using relational calculus and conditional probability

    NASA Astrophysics Data System (ADS)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database to deal with a conditional probability relation using fuzzy relational calculus. Previously, several studies have investigated equivalence classes in fuzzy databases using similarity or approximate relations. It is an interesting topic to investigate the fuzzy dependency using equivalence classes. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for the notion of database operations such as ’projection’, ’selection’, ’injection’ and ’natural join’. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.

  6. Introducing a new semi-active engine mount using force controlled variable stiffness

    NASA Astrophysics Data System (ADS)

    Azadi, Mojtaba; Behzadipour, Saeed; Faulkner, Gary

    2013-05-01

    This work introduces a new concept in designing semi-active engine mounts. Engine mounts are under continuous development to provide better and more cost-effective engine vibration control. Passive engine mounts do not provide a satisfactory solution. Available semi-active and active mounts provide better solutions but they are more complex and expensive. The variable stiffness engine mount (VSEM) is a semi-active engine mount with a simple ON-OFF control strategy. However, unlike available semi-active engine mounts that work based on damping change, the VSEM works based on the static stiffness change by using a new fast response force controlled variable spring. The VSEM is an improved version of the vibration mount introduced by the authors in their previous work. The results showed significant performance improvements over a passive rubber mount. The VSEM also provides better vibration control than a hydromount at idle speed. Low hysteresis and the ability to be modelled by a linear model at low frequencies are the advantages of the VSEM over the vibration isolator introduced earlier and available hydromounts. These specifications facilitate the use of the VSEM in the automotive industry; however, further evaluation and development are needed for this purpose.

  7. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  8. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  9. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method for constructing stocks' reference groups; the method is called the quartile method. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.
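
    A rough sketch of how a period-specific reference-group weight matrix could be built from quartile groups of a ranking variable (the grouping rule, row normalization, and names below are illustrative assumptions, not necessarily the authors' exact construction):

      # Sketch: row-normalized spatial weight matrix from quartile groups
      # of a ranking variable (illustrative; not the paper's exact method).
      import numpy as np
      import pandas as pd

      def quartile_weight_matrix(ranking_values):
          """ranking_values: 1-D array (e.g., market capitalization) for one period."""
          groups = pd.qcut(pd.Series(ranking_values), 4, labels=False).values
          n = len(ranking_values)
          W = np.zeros((n, n))
          for q in range(4):
              idx = np.where(groups == q)[0]
              for i in idx:                           # stocks in the same quartile
                  peers = idx[idx != i]               # form each other's reference group
                  if peers.size:
                      W[i, peers] = 1.0 / peers.size  # row-normalized weights
          return W

      W = quartile_weight_matrix(np.random.lognormal(size=50))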

  10. Assessing household health expenditure with Box-Cox censoring models.

    PubMed

    Chaze, Jean-Paul

    2005-09-01

    To address the combined presence of zero expenditures and a heavily skewed distribution of positive expenditures, the Box-Cox transformation with location parameter is used to define a set of models generalising the standard Tobit, Heckman selection and double-hurdle models. Extended flexibility with respect to previous specifications is introduced, notably regarding negative transformation parameters, which may prove necessary for medical expenditures, and corner-solution outcomes. An illustration is provided by the analysis of household health expenditure in Switzerland. Copyright (c) 2005 John Wiley & Sons, Ltd.

  11. Solubility of organic compounds in octanol: Improved predictions based on the geometrical fragment approach.

    PubMed

    Mathieu, Didier

    2017-09-01

    Two new models are introduced to predict the solubility of chemicals in octanol (S_oct), taking advantage of the extensive character of log(S_oct) through a decomposition of molecules into so-called geometrical fragments (GF). They are extensively validated and their compliance with regulatory requirements is demonstrated. The first model requires just a molecular formula as input. Despite an extreme simplicity, it performs as well as an advanced random forest model involving 86 descriptors, with a root mean square error (RMSE) of 0.64 log units for an external test set of 100 molecules. For the second one, which requires the melting point T_m as input, introducing GF descriptors reduces the RMSE from about 0.7 to <0.5 log units, a performance that could previously be obtained only through the use of Abraham descriptors. A script is provided for easy application of the models, taking into account the limits of their applicability domains. Copyright © 2017 Elsevier Ltd. All rights reserved.
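
    Schematically, the additivity that the first model exploits amounts to a fragment sum of the form (generic additivity equation only; the fragment definitions and coefficients are fitted in the paper):

      \log S_{\mathrm{oct}} \approx \sum_i n_i\, c_i,

    where n_i is the number of occurrences of geometrical fragment i in the molecule and c_i its fitted contribution.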

  12. A Hierarchy of Models for Two-Phase Flows

    NASA Astrophysics Data System (ADS)

    Bouchut, F.; Brenier, Y.; Cortes, J.; Ripoll, J.-F.

    2000-12-01

    We derive a hierarchy of models for gas-liquid two-phase flows in the limit of infinite density ratio, when the liquid is assumed to be incompressible. The starting model is a system of nonconservative conservation laws with relaxation. At first order in the density ratio, we get a simplified system with viscosity, while at the limit we obtain a system of two conservation laws, the system of pressureless gases with constraint and undetermined pressure. Formal properties of this constraint model are provided, and sticky blocks solutions are introduced. We propose numerical methods for this last model, and the results are compared with the two previous models.

  13. Meta-heuristic algorithms as tools for hydrological science

    NASA Astrophysics Data System (ADS)

    Yoo, Do Guen; Kim, Joong Hoon

    2014-12-01

    In this paper, meta-heuristic optimization techniques and their applications to water resources engineering, particularly in hydrological science, are introduced. In recent years, meta-heuristic optimization techniques have been developed that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory use without requiring complex derivatives. Simulation-based meta-heuristic methods such as Genetic Algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome several drawbacks of traditional mathematical methods. For example, HS algorithms are conceptualized from a musical performance process in which players search for a better harmony; analogously, the optimization algorithm seeks a near global optimum determined by the value of an objective function, much as the quality of a musical performance is judged by aesthetic estimation. In this paper, meta-heuristic algorithms and their applications (with a focus on GAs and HS) in hydrological science are discussed by subject, including a review of the existing literature in the field. Then, recent trends in optimization are presented and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Overall, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
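
    To make the HS mechanics described above concrete, here is a minimal continuous-variable Harmony Search sketch (the parameter values and the test objective are arbitrary illustrative choices, not taken from the reviewed studies):

      # Minimal Harmony Search sketch for unconstrained minimization.
      import numpy as np

      def harmony_search(obj, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                         iters=2000, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          dim = len(bounds)
          hm = rng.uniform(lo, hi, size=(hms, dim))       # harmony memory
          fit = np.array([obj(x) for x in hm])
          for _ in range(iters):
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:                 # memory consideration
                      new[j] = hm[rng.integers(hms), j]
                      if rng.random() < par:              # pitch adjustment
                          new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
                  else:                                   # random selection
                      new[j] = rng.uniform(lo[j], hi[j])
              new = np.clip(new, lo, hi)
              f_new = obj(new)
              worst = int(np.argmax(fit))
              if f_new < fit[worst]:                      # replace the worst harmony
                  hm[worst], fit[worst] = new, f_new
          best = int(np.argmin(fit))
          return hm[best], fit[best]

      # Example: minimize the sphere function in 5 dimensions.
      x_best, f_best = harmony_search(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)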

  14. Beam wandering statistics of twin thin laser beam propagation under generalized atmospheric conditions.

    PubMed

    Pérez, Darío G; Funes, Gustavo

    2012-12-03

    Under the geometric optics approximation it is possible to estimate the covariance between the displacements of two thin beams after they have propagated through a turbulent medium. Previous works have concentrated on long propagation distances to provide models for the wandering statistics. These models are useful when the separation between beams is smaller than the propagation path, regardless of the characteristic scales of the turbulence. In this work we give a complete model for the behavior of these covariances, introducing absolute limits to the validity of former approximations. Moreover, these generalizations are established for non-Kolmogorov atmospheric models.

  15. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE PAGES

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    2018-01-09

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  16. Inequity aversion and the evolution of cooperation

    NASA Astrophysics Data System (ADS)

    Ahmed, Asrar; Karlapalem, Kamalakar

    2014-02-01

    Evolution of cooperation is a widely studied problem in biology, social science, economics, and artificial intelligence. Most of the existing approaches that explain cooperation rely on some notion of direct or indirect reciprocity. These reciprocity based models assume agents recognize their partner and know their previous interactions, which requires advanced cognitive abilities. In this paper we are interested in developing a model that produces cooperation without requiring any explicit memory of previous game plays. Our model is based on the notion of inequity aversion, a concept introduced within behavioral economics, whereby individuals care about payoff equality in outcomes. Here we explore the effect of using income inequality to guide partner selection and interaction. We study our model by considering both the well-mixed and the spatially structured population and present the conditions under which cooperation becomes dominant. Our results support the hypothesis that inequity aversion promotes cooperative relationship among nonkin.
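
    For reference, the best-known formalization of inequity aversion in behavioral economics is the Fehr-Schmidt utility (shown here in its standard form; the paper's payoff-inequality rule for partner selection and interaction may differ in detail):

      U_i(x) = x_i - \frac{\alpha_i}{n-1} \sum_{j \neq i} \max(x_j - x_i,\, 0)
                   - \frac{\beta_i}{n-1} \sum_{j \neq i} \max(x_i - x_j,\, 0),

    where α_i penalizes disadvantageous inequity and β_i penalizes advantageous inequity.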

  17. Inequity aversion and the evolution of cooperation.

    PubMed

    Ahmed, Asrar; Karlapalem, Kamalakar

    2014-02-01

    Evolution of cooperation is a widely studied problem in biology, social science, economics, and artificial intelligence. Most of the existing approaches that explain cooperation rely on some notion of direct or indirect reciprocity. These reciprocity based models assume agents recognize their partner and know their previous interactions, which requires advanced cognitive abilities. In this paper we are interested in developing a model that produces cooperation without requiring any explicit memory of previous game plays. Our model is based on the notion of inequity aversion, a concept introduced within behavioral economics, whereby individuals care about payoff equality in outcomes. Here we explore the effect of using income inequality to guide partner selection and interaction. We study our model by considering both the well-mixed and the spatially structured population and present the conditions under which cooperation becomes dominant. Our results support the hypothesis that inequity aversion promotes cooperative relationship among nonkin.

  18. Higher-Order Extended Lagrangian Born-Oppenheimer Molecular Dynamics for Classical Polarizable Models.

    PubMed

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N

    2018-02-13

    Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  19. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  20. An Investigation of State-Space Model Fidelity for SSME Data

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2008-01-01

    In previous studies, a variety of unsupervised anomaly detection techniques were applied to SSME (Space Shuttle Main Engine) data. The observed results indicated that the identification of certain anomalies was specific to the algorithmic method under consideration. For this reason, one of the follow-on goals of these previous investigations was to build an architecture to support the best capabilities of all algorithms. We appeal to that goal here by investigating a cascade, serial architecture for the best performing and most suitable candidates from previous studies. As a precursor to a formal ROC (Receiver Operating Characteristic) curve analysis for validation of resulting anomaly detection algorithms, our primary focus here is to investigate the model fidelity as measured by variants of the AIC (Akaike Information Criterion) for state-space based models. We show that placing constraints on a state-space model during or after the training of the model introduces a modest level of suboptimality. Furthermore, we compare the fidelity of all candidate models including those embodying the cascade, serial architecture. We make recommendations on the most suitable candidates for application to subsequent anomaly detection studies as measured by AIC-based criteria.
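
    For context, the fidelity criterion referenced above has the standard form (k is the number of estimated parameters, L-hat the maximized likelihood, n the sample size; the abstract does not state which variants were used):

      \mathrm{AIC} = 2k - 2 \ln \hat{L}, \qquad
      \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n - k - 1},

    with AICc being the common small-sample correction.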

  1. A Physics-Inspired Mechanistic Model of Migratory Movement Patterns in Birds.

    PubMed

    Revell, Christopher; Somveille, Marius

    2017-08-29

    In this paper, we introduce a mechanistic model of migratory movement patterns in birds, inspired by ideas and methods from physics. Previous studies have shed light on the factors influencing bird migration but have mainly relied on statistical correlative analysis of tracking data. Our novel method offers a bottom up explanation of population-level migratory movement patterns. It differs from previous mechanistic models of animal migration and enables predictions of pathways and destinations from a given starting location. We define an environmental potential landscape from environmental data and simulate bird movement within this landscape based on simple decision rules drawn from statistical mechanics. We explore the capacity of the model by qualitatively comparing simulation results to the non-breeding migration patterns of a seabird species, the Black-browed Albatross (Thalassarche melanophris). This minimal, two-parameter model was able to capture remarkably well the previously documented migration patterns of the Black-browed Albatross, with the best combination of parameter values conserved across multiple geographically separate populations. Our physics-inspired mechanistic model could be applied to other bird and highly-mobile species, improving our understanding of the relative importance of various factors driving migration and making predictions that could be useful for conservation.
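
    A minimal sketch of the kind of statistical-mechanics decision rule the abstract describes, in which a simulated bird moves between neighbouring grid cells with Boltzmann-like probabilities over an environmental potential (the grid, neighbourhood, and parameter names are illustrative assumptions, not the authors' exact rules):

      # Sketch: Boltzmann-like movement on an environmental potential grid
      # (illustrative decision rule; not the paper's exact formulation).
      import numpy as np

      def step(position, potential, temperature, rng):
          """Move one cell; lower potential is more attractive."""
          i, j = position
          nbrs = [(i + di, j + dj)
                  for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                  if 0 <= i + di < potential.shape[0] and 0 <= j + dj < potential.shape[1]]
          energies = np.array([potential[n] for n in nbrs])
          weights = np.exp(-(energies - energies.min()) / temperature)  # Boltzmann weights
          return nbrs[rng.choice(len(nbrs), p=weights / weights.sum())]

      rng = np.random.default_rng(1)
      potential = rng.random((50, 50))      # stand-in for an environment-derived landscape
      pos, track = (25, 25), []
      for _ in range(100):
          pos = step(pos, potential, temperature=0.1, rng=rng)
          track.append(pos)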

  2. Spatial-temporal modeling of the association between air pollution exposure and preterm birth: identifying critical windows of exposure.

    PubMed

    Warren, Joshua; Fuentes, Montserrat; Herring, Amy; Langlois, Peter

    2012-12-01

    Exposure to high levels of air pollution during pregnancy is associated with increased probability of preterm birth (PTB), a major cause of infant morbidity and mortality. New statistical methodology is required to specifically determine when a particular pollutant impacts the PTB outcome, to determine the role of different pollutants, and to characterize the spatial variability in these results. We develop a new Bayesian spatial model for PTB which identifies susceptible windows throughout the pregnancy jointly for multiple pollutants (PM2.5, ozone) while allowing these windows to vary continuously across space and time. We geo-code vital record birth data from Texas (2002-2004) and link them with standard pollution monitoring data and a newly introduced EPA product of calibrated air pollution model output. We apply the fully spatial model to a region of 13 counties in eastern Texas consisting of highly urban as well as rural areas. Our results indicate significant signal in the first two trimesters of pregnancy with different pollutants leading to different critical windows. Introducing the spatial aspect uncovers critical windows previously unidentified when space is ignored. A proper inference procedure is introduced to correctly analyze these windows. © 2012, The International Biometric Society.

  3. Encouraging Evidence on a Sector-Focused Advancement Strategy: Two-Year Impacts from the WorkAdvance Demonstration

    ERIC Educational Resources Information Center

    Hendra, Richard; Greenberg, David H.; Hamilton, Gayle; Oppenheim, Ari; Pennington, Alexandra; Schaberg, Kelsey; Tessler, Betsy L.

    2016-01-01

    This report summarizes the two-year findings of a rigorous random assignment evaluation of the WorkAdvance model, a sectoral training and advancement initiative. Launched in 2011, WorkAdvance goes beyond the previous generation of employment programs by introducing demand-driven skills training and a focus on jobs that have career pathways. The…

  4. Where do bike lanes work best? A Bayesian spatial model of bicycle lanes and bicycle crashes

    Treesearch

    Michelle C. Kondo; Christopher Morrison; Erick Guerra; Elinore J. Kaufman; Douglas J. Wiebe

    2018-01-01

    US municipalities are increasingly introducing bicycle lanes to promote bicycle use, increase roadway safety and improve public health. The aim of this study was to identify specific locations where bicycle lanes, if created, could most effectively reduce crash rates. Previous research has found that bike lanes reduce crash incidence, but a lack of comprehensive...

  5. Encouraging Evidence on a Sector-Focused Advancement Strategy: Two-Year Impacts from the WorkAdvance Demonstration. Preview Summary

    ERIC Educational Resources Information Center

    Hendra, Richard; Greenberg, David H.; Hamilton, Gayle; Oppenheim, Ari; Pennington, Alexandra; Schaberg, Kelsey; Tessler, Betsy L.

    2016-01-01

    This report summarizes the two-year findings of a rigorous random assignment evaluation of the WorkAdvance model, a sectoral training and advancement initiative. Launched in 2011, WorkAdvance goes beyond the previous generation of employment programs by introducing demand-driven skills training and a focus on jobs that have career pathways. The…

  6. Un-reduction in field theory.

    PubMed

    Arnaudon, Alexis; López, Marco Castrillón; Holm, Darryl D

    2018-01-01

    The un-reduction procedure introduced previously in the context of classical mechanics is extended to covariant field theory. The new covariant un-reduction procedure is applied to the problem of shape matching of images which depend on more than one independent variable (for instance, time and an additional labelling parameter). Other possibilities are also explored: nonlinear [Formula: see text]-models and the hyperbolic flows of curves.

  7. High storage capacity in the Hopfield model with auto-interactions—stability analysis

    NASA Astrophysics Data System (ADS)

    Rocchi, Jacopo; Saad, David; Tantari, Daniele

    2017-11-01

    Recent studies point to the potential storage of a large number of patterns in the celebrated Hopfield associative memory model, well beyond the limits obtained previously. We investigate the properties of new fixed points to discover that they exhibit instabilities for small perturbations and are therefore of limited value as associative memories. Moreover, a large deviations approach also shows that errors introduced to the original patterns induce additional errors and increased corruption with respect to the stored patterns.

  8. Nursing home queues and home health users.

    PubMed

    Swan, J H; Benjamin, A E

    1993-01-01

    Home health market growth suggests the need for models explaining home health utilization. We have previously explained state-level Medicare home health visits with reference to nursing home markets. Here we introduce a model whereby state-level Medicare home health use is a function of nursing home queues and other demand and supply factors. Medicare home health users per state population is negatively related to nursing home bed stock, positively to Medicaid eligibility levels and to Medicaid nursing home recipients per population, as well as to various other demand and supply measures. This explanation of home health users explains previously-reported findings for home health visits. The findings support the argument that home health use is explained by factors affecting lengths of nursing home queues.

  9. Close-packed structure dynamics with finite-range interaction: computational mechanics with individual layer interaction.

    PubMed

    Rodriguez-Horta, Edwin; Estevez-Rams, Ernesto; Lora-Serrano, Raimundo; Neder, Reinhard

    2017-09-01

    This is the second contribution in a series of papers dealing with dynamical models in equilibrium theories of polytypism. A Hamiltonian introduced by Ahmad & Khan [Phys. Status Solidi B (2000), 218, 425-430] avoids the unphysical assignment of interaction terms to fictitious entities given by spins in the Hägg coding of the stacking arrangement. In this paper an analysis of polytype generation and disorder in close-packed structures is made for such a Hamiltonian. Results are compared with a previous analysis using the Ising model. Computational mechanics is the framework under which the analysis is performed. The competing effects of disorder and structure, as given by entropy density and excess entropy, respectively, are discussed. It is argued that the Ahmad & Khan model is simpler and predicts a larger set of polytypes than previous treatments.

  10. An improved model for computing the trajectories of conductive particles in roll-type electrostatic separator for recycling metals from WEEE.

    PubMed

    Wu, Jiang; Li, Jia; Xu, Zhenming

    2009-08-15

    Electrostatic separation presents an effective and environmentally friendly way for recycling metals and nonmetals from ground waste electrical and electronic equipment (WEEE). For this process, the trajectory of the conductive particles is significant and some models have been established. However, the results of previous studies are limited by some simplifying assumptions and lead to a notable discrepancy between the model prediction and the experimental results. In the present research, a roll-type corona-electrostatic separator and ground printed circuit board (PCB) wastes were used to investigate the trajectory of the conductive particle. Two factors, the air drag force and the different charging situation, were introduced into the improved model. Their effects were analyzed and an improved model for the theoretical trajectory of the conductive particle was established. Compared with the previous one, the improved model shows good agreement with the experimental results. It provides practical guidance for the design of separators and represents progress toward recycling the metals and nonmetals from WEEE.
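
    A small sketch of the kind of force balance such a trajectory model integrates, with gravity, the electric force, and an air-drag term included (the drag law, constants, and field below are illustrative assumptions; the paper's improved model and charging treatment are more detailed):

      # Sketch: planar particle trajectory with gravity, electric force and air drag
      # (illustrative force balance; not the paper's full model).
      import numpy as np

      def simulate(q, m, E_field, dt=1e-4, steps=5000, c_drag=1e-5):
          g = np.array([0.0, -9.81])
          pos, vel = np.zeros(2), np.zeros(2)
          traj = [pos.copy()]
          for _ in range(steps):
              F_el = q * E_field(pos)                          # electric force on the particle
              F_drag = -c_drag * np.linalg.norm(vel) * vel     # quadratic drag (assumed form)
              acc = g + (F_el + F_drag) / m
              vel = vel + acc * dt                             # explicit Euler integration
              pos = pos + vel * dt
              traj.append(pos.copy())
          return np.array(traj)

      # Example with a uniform field in +x (purely illustrative numbers).
      traj = simulate(q=1e-9, m=1e-6, E_field=lambda p: np.array([2e5, 0.0]))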

  11. Lithium-ion battery models: a comparative study and a model-based powerline communication

    NASA Astrophysics Data System (ADS)

    Saidani, Fida; Hutter, Franz X.; Scurtu, Rares-George; Braunwarth, Wolfgang; Burghartz, Joachim N.

    2017-09-01

    In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters, offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between the high fidelity and the computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
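
    As a concrete example of the "abstract" equivalent-circuit class recommended above, a first-order Thevenin model in discretized form (a widely used generic formulation; the OCV curve and parameter values below are placeholders, not taken from the cited 20 Ah experiment):

      # Sketch: first-order Thevenin equivalent circuit model of a Li-ion cell
      # (generic formulation; parameters and OCV curve are placeholders).
      import numpy as np

      def simulate_ecm(current, dt, capacity_ah=20.0, r0=2e-3, r1=1e-3, c1=5e3, soc0=0.9):
          ocv = lambda soc: 3.0 + 1.2 * soc            # placeholder open-circuit-voltage curve
          soc, v_rc, tau = soc0, 0.0, r1 * c1
          v_term = []
          for i in current:                            # positive current = discharge
              soc -= i * dt / (capacity_ah * 3600.0)   # coulomb counting
              a = np.exp(-dt / tau)
              v_rc = a * v_rc + r1 * (1.0 - a) * i     # RC polarization branch
              v_term.append(ocv(soc) - r0 * i - v_rc)  # terminal voltage
          return np.array(v_term)

      v = simulate_ecm(current=np.full(3600, 20.0), dt=1.0)   # 1 h discharge at 1C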

  12. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  13. Evaluation of an Instructional Model to Teach Clinically Relevant Medicinal Chemistry in a Campus and a Distance Pathway

    PubMed Central

    Galt, Kimberly A.

    2008-01-01

    Objectives To evaluate an instructional model for teaching clinically relevant medicinal chemistry. Methods An instructional model that uses Bloom's cognitive and Krathwohl's affective taxonomy, published and tested concepts in teaching medicinal chemistry, and active learning strategies, was introduced in the medicinal chemistry courses for second-professional year (P2) doctor of pharmacy (PharmD) students (campus and distance) in the 2005-2006 academic year. Student learning and the overall effectiveness of the instructional model were assessed. Student performance after introducing the instructional model was compared to that in prior years. Results Student performance on course examinations improved compared to previous years. Students expressed overall enthusiasm about the course and better understood the value of medicinal chemistry to clinical practice. Conclusion The explicit integration of the cognitive and affective learning objectives improved student performance, student ability to apply medicinal chemistry to clinical practice, and student attitude towards the discipline. Testing this instructional model provided validation to this theoretical framework. The model is effective for both our campus and distance-students. This instructional model may also have broad-based applications to other science courses. PMID:18483599

  14. Quasispecies dynamics on a network of interacting genotypes and idiotypes: formulation of the model

    NASA Astrophysics Data System (ADS)

    Barbosa, Valmir C.; Donangelo, Raul; Souza, Sergio R.

    2015-01-01

    A quasispecies is the stationary state of a set of interrelated genotypes that evolve according to the usual principles of selection and mutation. Quasispecies studies have for the most part concentrated on the possibility of errors during genotype replication and their role in promoting either the survival or the demise of the quasispecies. In a previous work, we introduced a network model of quasispecies dynamics, based on a single probability parameter (p) and capable of addressing several plausibility issues of previous models. Here we extend that model by pairing its network with another one aimed at modeling the dynamics of the immune system when confronted with the quasispecies. The new network is based on the idiotypic-network model of immunity and, together with the previous one, constitutes a network model of interacting genotypes and idiotypes. The resulting model requires further parameters and as a consequence leads to a vast phase space. We have focused on a particular niche in which it is possible to observe the trade-offs involved in the quasispecies' survival or destruction. Within this niche, we give simulation results that highlight some key preconditions for quasispecies survival. These include a minimum initial abundance of genotypes relative to that of the idiotypes and a minimum value of p. The latter, in particular, is to be contrasted with the stand-alone quasispecies network of our previous work, in which arbitrarily low values of p constitute a guarantee of quasispecies survival.

  15. The Work Compatibility Improvement Framework: an integrated perspective of the human-at-work system.

    PubMed

    Genaidy, Ash; Salem, Sam; Karwowski, Waldemar; Paez, Omar; Tuncel, Setenay

    2007-01-15

    The industrial revolution demonstrated the limitations of a pure mechanistic approach towards work design. Human work is now seen as a complex entity that involves different scientific branches and blurs the line between mental and physical activities. Job design has been a traditional concern of applied psychology, which has provided insight into the interaction between the individual and the work environment. The goal of this paper is to introduce the human-at-work system as a holistic approach to organizational design. It postulates that the well-being of workers and work outcomes are issues that need to be addressed jointly, moving beyond traditional concepts of job satisfaction and work stress. The work compatibility model (WCM) is introduced as an engineering approach that seeks to integrate previous constructs of job and organizational design. The WCM seeks a balance between energy expenditure and replenishment. The implementation of the WCM in industrial settings is described within the context of the Work Compatibility Improvement Framework. A sample review of six models (motivation-hygiene theory; job characteristics theory; person-environment fit; demand-control model; and balance theory) provides the foundation for the interaction between the individual and the work environment. A review of three workload assessment methods (position analysis questionnaire, job task analysis and NASA task load index) gives an example of the foundation for the taxonomy of work environment domains. Previous models have sought to identify a balance state for the human-at-work system. They differentiated between the objective and subjective effects of the environment and the worker. An imbalance between the person and the environment has been proven to increase health risks. The WCM works with a taxonomy of 12 work domains classified in terms of the direct (acting) or indirect (experienced) effect on the worker. In terms of measurement, two quantitative methods are proposed to measure the state of the system. The first method introduced by Abdallah et al. (2004) identifies operating zones. The second method introduced by Salem et al. (2006) identifies the distribution of the work elements on the x/y coordinate plane. While previous efforts have identified some relevant elements of the systems, they failed to provide a holistic, quantitative approach combining organizational and human factors into a common framework. It is postulated that improving the well-being of workers will simultaneously improve organizational outcomes. The WCM moves beyond previous models by providing a hierarchical structure of work domains and a combination of methods to diagnose any organizational setting. The WCM is an attempt to achieve organizational excellence in human resource management, moving beyond job design to an integrated improvement strategy. A joint approach to organizational and job design will not only result in decreased prevalence of health risks, but in enhanced organizational effectiveness as well. The implementation of the WCM, that is, the Work Compatibility Improvement Framework, provides the basis for integrating different elements of the work environment into a single reliable construct. An improvement framework is essential to ensure that the measures of the WCM result in a system that is adaptive and self-regulated.

  16. Lateral specialization in unilateral spatial neglect: a cognitive robotics model.

    PubMed

    Conti, Daniela; Di Nuovo, Santo; Cangelosi, Angelo; Di Nuovo, Alessandro

    2016-08-01

    In this paper, we present the experimental results of an embodied cognitive robotic approach for modelling the human cognitive deficit known as unilateral spatial neglect (USN). To this end, we introduce an artificial neural network architecture designed and trained to control the spatial attentional focus of the iCub robotic platform. Like the human brain, the architecture is divided into two hemispheres and it incorporates bio-inspired plasticity mechanisms, which allow the development of the phenomenon of the specialization of the right hemisphere for spatial attention. In this study, we validate the model by replicating a previous experiment with human patients affected by the USN and numerical results show that the robot mimics the behaviours previously exhibited by humans. We also simulated recovery after the damage to compare the performance of each of the two hemispheres as additional validation of the model. Finally, we highlight some possible advantages of modelling cognitive dysfunctions of the human brain by means of robotic platforms, which can supplement traditional approaches for studying spatial impairments in humans.

  17. Propagation of a Gaussian-beam wave in general anisotropic turbulence

    NASA Astrophysics Data System (ADS)

    Andrews, L. C.; Phillips, R. L.; Crabbs, R.

    2014-10-01

    Mathematical models for a Gaussian-beam wave propagating through anisotropic non-Kolmogorov turbulence have been developed in the past by several researchers. In previous publications, the anisotropic spatial power spectrum model was based on the assumption that propagation was in the z direction with circular symmetry maintained in the orthogonal xy-plane throughout the path. In the present analysis, however, the anisotropic spectrum model is no longer based on a single anisotropy parameter; instead, two such parameters are introduced in the orthogonal xy-plane so that circular symmetry in this plane is no longer required. In addition, deviations from the 11/3 power-law behavior in the spectrum model are allowed by assuming power-law index variations 3 < α < 4. In the current study we develop theoretical models for beam spot size, spatial coherence, and scintillation index that are valid in weak irradiance fluctuation regimes as well as in deep turbulence, or strong irradiance fluctuation regimes. These new results are compared with those derived from the more specialized anisotropic spectrum used in previous analyses.
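
    The two-parameter anisotropic non-Kolmogorov spectrum referred to above is commonly written in a form such as (a representative literature form; the exact normalization A(α) and the generalized structure parameter used in the paper may differ):

      \Phi_n(\boldsymbol{\kappa}) = A(\alpha)\, \tilde{C}_n^2\, \mu_x \mu_y
      \left( \mu_x^2 \kappa_x^2 + \mu_y^2 \kappa_y^2 + \kappa_z^2 \right)^{-\alpha/2},
      \qquad 3 < \alpha < 4,

    where μ_x and μ_y are the anisotropy parameters in the plane orthogonal to the propagation direction and α is the power-law index.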

  18. A COCHLEAR MODEL USING THE TIME-AVERAGED LAGRANGIAN AND THE PUSH-PULL MECHANISM IN THE ORGAN OF CORTI.

    PubMed

    Yoon, Yongjin; Puria, Sunil; Steele, Charles R

    2009-09-05

    In our previous work, the basilar membrane velocity V(BM) for a gerbil cochlea was calculated and compared with physiological measurements. The calculated V(BM) showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the V(BM) for the gerbil cochlea was calculated and compared with animal measurements. The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase.

  19. A COCHLEAR MODEL USING THE TIME-AVERAGED LAGRANGIAN AND THE PUSH-PULL MECHANISM IN THE ORGAN OF CORTI

    PubMed Central

    YOON, YONGJIN; PURIA, SUNIL; STEELE, CHARLES R.

    2010-01-01

    In our previous work, the basilar membrane velocity VBM for a gerbil cochlea was calculated and compared with physiological measurements. The calculated VBM showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the VBM for the gerbil cochlea was calculated and compared with animal measurements. The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase. PMID:20485540

  20. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  1. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations is formulated. Finally, a model based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  2. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
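
    The iterative learning control step mentioned above follows, in its generic form, a batch-to-batch update of the type (generic P-type ILC law; the paper's specific learning gain, constraint handling, and fuzzy adjustment are not reproduced here):

      u_{k+1}(t) = u_k(t) + L\, e_k(t), \qquad e_k(t) = y_d(t) - y_k(t),

    where k indexes the batch, y_d is the tracked target (here including the film-thickness setting), and L is a learning gain.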

  3. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  4. Dynamic model based on voltage transfer curve for pattern formation in dielectric barrier glow discharge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ben; He, Feng; Ouyang, Jiting, E-mail: jtouyang@bit.edu.cn

    2015-12-15

    Simulation work is very important for understanding the formation of self-organized discharge patterns. Previous works have used various models derived from other systems to simulate discharge patterns, but most of these models are complicated and time-consuming. In this paper, we introduce a convenient phenomenological dynamic model based on the basic dynamic process of glow discharge and the voltage transfer curve (VTC) to study the dielectric barrier glow discharge (DBGD) pattern. The VTC, which plots the change of wall voltage after a discharge as a function of the initial total gap voltage, is an important characteristic of the DBGD. In the modeling, the combined effect of the discharge conditions is included in the VTC, and the activation-inhibition effect is expressed by a spatial interaction term. Besides, the model reduces the dimensionality of the system by considering only the integrated effect of the current flow. All of this greatly facilitates the construction of the model. Numerical simulations turn out to be in good accordance with our previous fluid modeling and experimental results.
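
    A one-dimensional skeleton of this kind of update rule is sketched below (the piecewise-linear threshold VTC, the coupling constant, and all numbers are illustrative assumptions, and the sketch is not tuned to reproduce the filament patterns of the paper): each cell's wall voltage is advanced through a VTC evaluated at its total gap voltage, plus a nearest-neighbour term standing in for the activation-inhibition interaction.

      import numpy as np

      # 1-D phenomenological update sketch: the wall voltage of each cell changes by a
      # voltage-transfer-curve (VTC) amount that depends on its total gap voltage, plus
      # a spatial coupling term.  VTC shape, coupling, and numbers are assumptions.
      def vtc(v_total, v_threshold=1.0, gain=0.8):
          mag = np.maximum(np.abs(v_total) - v_threshold, 0.0)   # no discharge below threshold
          return gain * np.sign(v_total) * mag

      n_cells, n_half_cycles = 200, 300
      v_applied, coupling = 1.5, 0.2
      rng = np.random.default_rng(0)
      v_wall = 0.05 * rng.standard_normal(n_cells)               # small random seed pattern

      for _ in range(n_half_cycles):
          v_total = v_applied - v_wall                           # gap voltage seen by each cell
          lap = np.roll(v_wall, 1) - 2 * v_wall + np.roll(v_wall, -1)
          v_wall = v_wall + vtc(v_total) + coupling * lap        # discharge + spatial interaction
          v_applied = -v_applied                                 # polarity reversal each half cycle

      print("wall-voltage pattern (first 10 cells):", np.round(v_wall[:10], 3))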

  5. Generalization of a model of hysteresis for dynamical systems.

    PubMed

    Piquette, Jean C; McLaughlin, Elizabeth A; Ren, Wei; Mukherjee, Binu K

    2002-06-01

    A previously described model of hysteresis [J. C. Piquette and S. E. Forsythe, J. Acoust. Soc. Am. 106, 3317-3327 (1999); 106, 3328-3334 (1999)] is generalized to apply to a dynamical system. The original model produces theoretical hysteresis loops that agree well with laboratory measurements acquired under quasi-static conditions. The loops are produced using three-dimensional rotation matrices. An iterative procedure, which allows the model to be applied to a dynamical system, is introduced here. It is shown that, unlike the quasi-static case, self-crossing of the loops is a realistic possibility when inertia and viscous friction are taken into account.

  6. Modeling the magnetic properties of lanthanide complexes: relationship of the REC parameters with Pauling electronegativity and coordination number.

    PubMed

    Baldoví, José J; Gaita-Ariño, Alejandro; Coronado, Eugenio

    2015-07-28

    In a previous study, we introduced the Radial Effective Charge (REC) model to study the magnetic properties of lanthanide single ion magnets. Now, we perform an empirical determination of the effective charges (Zi) and radial displacements (Dr) of this model using spectroscopic data. This systematic study allows us to relate Dr and Zi with chemical factors such as the coordination number and the electronegativities of the metal and the donor atoms. This strategy is being used to drastically reduce the number of free parameters in the modeling of the magnetic and spectroscopic properties of f-element complexes.

  7. The growth of business firms: theoretical framework and empirical evidence.

    PubMed

    Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S V; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H Eugene

    2005-12-27

    We introduce a model of proportional growth to explain the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part and depicts an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have been focusing exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.
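
    The statistical mechanism can be explored with a short simulation (all parameter values are illustrative, and this is a simplified caricature of a proportional-growth model rather than the authors' exact formulation): each firm is a collection of units that receive independent multiplicative shocks, and the resulting log growth rates develop a tent-shaped body with tails much heavier than a Gaussian.

      import numpy as np

      rng = np.random.default_rng(42)

      # Toy proportional-growth simulation: a firm is a set of units whose sizes grow by
      # i.i.d. multiplicative shocks; the firm growth rate g is the log-change of its total
      # size.  Firm composition, shock size, and sample size are illustrative assumptions.
      n_firms = 5000
      units_per_firm = rng.geometric(p=0.05, size=n_firms)       # heterogeneous number of units
      sigma = 0.2                                                 # unit-level log-shock std

      growth_rates = np.empty(n_firms)
      for i, k in enumerate(units_per_firm):
          sizes = rng.lognormal(mean=0.0, sigma=1.0, size=k)      # initial unit sizes
          shocks = rng.lognormal(mean=0.0, sigma=sigma, size=k)   # multiplicative growth shocks
          growth_rates[i] = np.log(np.sum(sizes * shocks) / np.sum(sizes))

      # Heavy tails: for a Gaussian the |g| quantile ratio P99/P75 is about 2.2;
      # the simulated mixture of small and large firms gives a noticeably larger value.
      ratio = np.percentile(np.abs(growth_rates), 99) / np.percentile(np.abs(growth_rates), 75)
      print(f"std(g) = {growth_rates.std():.3f}, P99/P75 of |g| = {ratio:.2f}")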

  8. A Numerical Model for Trickle Bed Reactors

    NASA Astrophysics Data System (ADS)

    Propp, Richard M.; Colella, Phillip; Crutchfield, William Y.; Day, Marcus S.

    2000-12-01

    Trickle bed reactors are governed by equations of flow in porous media such as Darcy's law and the conservation of mass. Our numerical method for solving these equations is based on a total-velocity splitting, sequential formulation which leads to an implicit pressure equation and a semi-implicit mass conservation equation. We use high-resolution finite-difference methods to discretize these equations. Our solution scheme extends previous work in modeling porous media flows in two ways. First, we incorporate physical effects due to capillary pressure, a nonlinear inlet boundary condition, spatial porosity variations, and inertial effects on phase mobilities. In particular, capillary forces introduce a parabolic component into the recast evolution equation, and the inertial effects give rise to hyperbolic nonconvexity. Second, we introduce a modification of the slope-limiting algorithm to prevent our numerical method from producing spurious shocks. We present a numerical algorithm for accommodating these difficulties, show the algorithm is second-order accurate, and demonstrate its performance on a number of simplified problems relevant to trickle bed reactor modeling.
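
    The slope-limiting ingredient mentioned above can be illustrated in isolation with a standard minmod limiter (a generic textbook limiter, not the authors' specific modification): at a discontinuity the limited slope drops to zero, so the reconstruction cannot overshoot and generate spurious oscillations.

      import numpy as np

      def minmod(a, b):
          """Return the smaller-magnitude slope when a and b agree in sign, else zero."""
          return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def limited_slopes(q):
          """Cell-centred limited slopes for a 1-D array of cell averages q."""
          dq_left = q[1:-1] - q[:-2]        # backward differences
          dq_right = q[2:] - q[1:-1]        # forward differences
          slopes = np.zeros_like(q)
          slopes[1:-1] = minmod(dq_left, dq_right)
          return slopes

      # A step profile: the limiter returns zero slope in the cells flanking the jump,
      # which keeps the piecewise-linear reconstruction free of new extrema.
      q = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
      print(limited_slopes(q))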

  9. Calculating Stress: From Entropy to a Thermodynamic Concept of Health and Disease

    PubMed Central

    Nečesánek, Ivo; Konečný, David; Vasku, Anna

    2016-01-01

    To date, contemporary science has lacked a satisfactory tool for the objective expression of stress. This text thus introduces a new, thermodynamically derived approach to stress measurement, based on entropy production in time and independent of the quality or modality of a given stressor or a combination thereof. To this end, we propose a novel model of stress response based on thermodynamic modelling of entropy production, both in the tissues/organs and in regulatory feedbacks. Stress response is expressed in our model on the basis of stress entropic load (SEL), a variable we introduced previously; the mathematical expression of SEL, provided here for the first time, now allows us to describe the various states of a living system, including differentiating between states of health and disease. The resulting calculation of stress response regardless of the type of stressor(s) in question is thus poised to become an entirely new tool for predicting the development of a living system. PMID:26771542

  10. Validating and improving a zero-dimensional stack voltage model of the Vanadium Redox Flow Battery

    NASA Astrophysics Data System (ADS)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2018-02-01

    Simple, computationally efficient battery models can contribute significantly to the development of flow batteries. However, validation studies for these models on an industrial-scale stack level are rarely published. We first extensively present a simple stack voltage model for the Vanadium Redox Flow Battery. For modeling the concentration overpotential, we derive mass transfer coefficients from experimental results presented in the 1990s. The calculated mass transfer coefficient of the positive half-cell is 63% larger than that of the negative half-cell, which is not considered in models published to date. Further, we advance the concentration overpotential model by introducing an apparent electrochemically active electrode surface which differs from the geometric electrode area. We use the apparent surface as a fitting parameter for adapting the model to experimental results of a flow battery manufacturer. For adapting the model, we propose a method for determining the agreement between model and reality quantitatively. To protect the manufacturer's intellectual property, we introduce a normalization method for presenting the results. For the studied stack, the apparent electrochemically active surface of the electrode is 41% larger than its geometrical area. Hence, the current density in the diffusion layer is 29% smaller than previously reported for a zero-dimensional model.
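
    A hedged sketch of the kind of concentration-overpotential calculation involved is given below; it uses a generic mass-transfer-limited Nernstian form with an apparent-area correction, and all numerical values (and the exact functional form used by the authors) are assumptions for illustration only.

      import numpy as np

      F = 96485.0          # Faraday constant, C/mol
      R = 8.314            # gas constant, J/(mol K)
      T = 298.15           # temperature, K

      def concentration_overpotential(i_geom, c_bulk, k_m, area_factor=1.41):
          """
          Generic mass-transfer-limited concentration overpotential (V).
          i_geom      : current density referred to the geometric electrode area (A/m^2)
          c_bulk      : bulk reactant concentration (mol/m^3)
          k_m         : mass transfer coefficient (m/s)
          area_factor : apparent electrochemically active area / geometric area
                        (1.41 mirrors the 41% enlargement quoted above; illustrative)
          """
          i_eff = i_geom / area_factor          # larger apparent area lowers the local current density
          i_lim = F * k_m * c_bulk              # limiting current density (A/m^2)
          return (R * T / F) * np.log(i_lim / (i_lim - i_eff))

      # Illustrative values: 1000 A/m^2 on the geometric area, 1500 mol/m^3, k_m = 1e-5 m/s
      print(f"eta_conc = {concentration_overpotential(1000.0, 1500.0, 1e-5):.4f} V")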

  11. Improving Transferability of Introduced Species’ Distribution Models: New Tools to Forecast the Spread of a Highly Invasive Seaweed

    PubMed Central

    Verbruggen, Heroen; Tyberghein, Lennert; Belton, Gareth S.; Mineur, Frederic; Jueterbock, Alexander; Hoarau, Galice; Gurgel, C. Frederico D.; De Clerck, Olivier

    2013-01-01

    The utility of species distribution models for applications in invasion and global change biology is critically dependent on their transferability between regions or points in time, respectively. We introduce two methods that aim to improve the transferability of presence-only models: density-based occurrence thinning and performance-based predictor selection. We evaluate the effect of these methods along with the impact of the choice of model complexity and geographic background on the transferability of a species distribution model between geographic regions. Our multifactorial experiment focuses on the notorious invasive seaweed Caulerpa cylindracea (previously Caulerpa racemosa var. cylindracea) and uses Maxent, a commonly used presence-only modeling technique. We show that model transferability is markedly improved by appropriate predictor selection, with occurrence thinning, model complexity and background choice having relatively minor effects. The data show that, if available, occurrence records from the native and invaded regions should be combined as this leads to models with high predictive power while reducing the sensitivity to choices made in the modeling process. The inferred distribution model of Caulerpa cylindracea shows the potential for this species to further spread along the coasts of Western Europe, western Africa and the south coast of Australia. PMID:23950789
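
    Density-based occurrence thinning, one of the two methods introduced, can be sketched generically (the grid size and the one-record-per-cell rule below are assumptions, not the authors' implementation): records in densely sampled cells are subsampled so that spatial sampling bias carries less weight when the model is fitted.

      import numpy as np

      def density_thin(lon, lat, cell_deg=1.0, max_per_cell=1, seed=0):
          """Keep at most `max_per_cell` occurrence records in each cell of a regular grid."""
          rng = np.random.default_rng(seed)
          cells = {}
          for idx, (x, y) in enumerate(zip(lon, lat)):
              key = (int(np.floor(x / cell_deg)), int(np.floor(y / cell_deg)))
              cells.setdefault(key, []).append(idx)
          keep = []
          for indices in cells.values():
              chosen = rng.choice(indices, size=min(max_per_cell, len(indices)), replace=False)
              keep.extend(int(i) for i in chosen)
          return sorted(keep)

      # Three clustered records collapse to a single record in their 1-degree cell,
      # while the isolated record is kept as-is.
      lon = np.array([10.10, 10.20, 10.15, 25.00])
      lat = np.array([40.10, 40.30, 40.20, -5.00])
      print(density_thin(lon, lat))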

  12. Strain-based diffusion solver for realistic representation of diffusion front in physical reactions

    PubMed Central

    2017-01-01

    When simulating fluids, such as water or fire, interacting with solids, representing the details of the diffusion front in a physical reaction is a challenging problem. Previous approaches commonly use isotropic or anisotropic diffusion to model the transport of a quantity through a medium or along an interface. We have identified unrealistic monotonous patterns with previous approaches and therefore propose to extend these approaches by integrating the deformation of the material with the diffusion process. Specifically, stretching deformation represented by strain is incorporated in a divergence-constrained diffusion model. A novel diffusion model is introduced to increase the global rate at which the solid acquires relevant quantities, such as heat or saturation. This ensures that the equations describing fluid flow are linked to the change of solid geometry, and also satisfy the divergence-free condition. Experiments show that our method produces convincing results. PMID:28448591

  13. Quasisaddles as relevant points of the potential energy surface in the dynamics of supercooled liquids

    NASA Astrophysics Data System (ADS)

    Angelani, L.; Di Leonardo, R.; Ruocco, G.; Scala, A.; Sciortino, F.

    2002-06-01

    The supercooled dynamics of a Lennard-Jones model liquid is numerically investigated studying relevant points of the potential energy surface, i.e., the minima of the square gradient of total potential energy V. The main findings are (i) the number of negative curvatures n of these sampled points appears to extrapolate to zero at the mode coupling critical temperature Tc; (ii) the temperature behavior of n(T) has a close relationship with the temperature behavior of the diffusivity; (iii) the potential energy landscape shows a high regularity in the distances among the relevant points and in their energy location. Finally we discuss a model of the landscape, previously introduced by Madan and Keyes [J. Chem. Phys. 98, 3342 (1993)], able to reproduce the previous findings.
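
    The quantity being sampled, the minima of W = |∇V|², can be illustrated on a toy two-dimensional potential (the Lennard-Jones liquid itself is not reproduced; the potential, step size, and starting point are arbitrary): plain gradient descent on W converges to stationary points of V, and its shallow nonzero minima play the role of quasisaddles.

      import numpy as np

      # Toy 2-D potential; descent on W = |grad V|^2 locates stationary points of V
      # (W ~ 0) and, in general, quasisaddles (small nonzero minima of W).
      def V(p):
          x, y = p
          return (x**2 - 1.0)**2 + 0.5 * y**2 + 0.3 * x * y

      def grad_V(p, h=1e-5):
          p = np.asarray(p, dtype=float)
          g = np.zeros(2)
          for i in range(2):
              e = np.zeros(2); e[i] = h
              g[i] = (V(p + e) - V(p - e)) / (2 * h)
          return g

      def grad_W(p, h=1e-5):
          p = np.asarray(p, dtype=float)
          g = np.zeros(2)
          for i in range(2):
              e = np.zeros(2); e[i] = h
              g[i] = (np.sum(grad_V(p + e)**2) - np.sum(grad_V(p - e)**2)) / (2 * h)
          return g

      p = np.array([0.3, 0.8])                   # arbitrary starting configuration
      for _ in range(2000):
          p = p - 0.01 * grad_W(p)               # steepest descent on the squared gradient
      print("point:", np.round(p, 4), " |grad V|^2 =", float(np.sum(grad_V(p)**2)))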

  14. Histologic morphometry confirms a prophylactic effect for hyperbaric oxygen in the prevention of delayed radiation enteropathy.

    PubMed

    Feldmeier, J J; Davolt, D A; Court, W S; Onoda, J M; Alecu, R

    1998-01-01

    In a previous publication (Feldmeier et al., Radiother Oncol 1995; 35:138-144) we reported our success in preventing delayed radiation enteropathy in a murine model by the application of hyperbaric oxygen (HBO2). In this study we introduce a histologic morphometric technique for assessing fibrosis in the submucosa of these same animal specimens and relate this assay to the previous results. The histologic morphometry, like the previous gross morphometry and compliance assays, demonstrates a significant protective effect for HBO2. The present assay is related to the previous assays in a statistically significant fashion. The predictive value for the histologic morphometric assay demonstrates a sensitivity of 75% and a specificity of 62.5%. The applicability of this assay to other organ systems and its potential superiority to the compliance assay are discussed.

  15. Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2017-03-01

    This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. Principles and usage of this advanced evaluation method are described in detail and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the model suitability crosscheck option of applying the procedure in ascending and descending directions of the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
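
    The sequential idea (the optimal parameters of one data set seed the fit of the next, and running the loop in the opposite direction provides a crosscheck) can be sketched with a generic least-squares fit; the NFS-specific model is replaced here by an assumed exponential decay on synthetic data.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, amplitude, rate):
          return amplitude * np.exp(-rate * t)   # stand-in for the per-spectrum fit model

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 5.0, 60)
      true_rates = np.linspace(0.5, 2.0, 10)     # parameter drifting slowly across the series
      datasets = [model(t, 1.0, r) + 0.02 * rng.standard_normal(t.size) for r in true_rates]

      def sequential_fit(series, p0):
          params = []
          for y in series:
              popt, _ = curve_fit(model, t, y, p0=p0)
              params.append(popt)
              p0 = popt                          # output of this data set seeds the next one
          return np.array(params)

      forward = sequential_fit(datasets, p0=[1.0, 0.5])
      backward = sequential_fit(datasets[::-1], p0=forward[-1])[::-1]
      print("max forward/backward disagreement in the fitted rate:",
            np.abs(forward[:, 1] - backward[:, 1]).max())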

  16. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on the one hand a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces. PMID:20552013
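
    A drastically simplified recall step in the spirit of a kernel memory is sketched below (this is not the paper's construction; the Gaussian kernel, the iteration rule, and the parameters are assumptions): a noisy cue is repeatedly replaced by a kernel-weighted blend of the stored patterns until it settles on an attractor.

      import numpy as np

      def gaussian_kernel(x, y, gamma=4.0):
          return np.exp(-gamma * np.sum((x - y) ** 2))

      class KernelMemory:
          """Toy kernel associative memory: recall = iterated kernel-weighted blend of stored patterns."""

          def __init__(self, gamma=4.0):
              self.gamma = gamma
              self.patterns = []

          def store(self, pattern):              # adding an attractor never touches the others
              self.patterns.append(np.asarray(pattern, dtype=float))

          def recall(self, cue, n_iter=20):
              x = np.asarray(cue, dtype=float)
              for _ in range(n_iter):
                  w = np.array([gaussian_kernel(x, p, self.gamma) for p in self.patterns])
                  w /= w.sum()
                  x = sum(wi * pi for wi, pi in zip(w, self.patterns))
              return x

      mem = KernelMemory()
      mem.store([1.0, 0.0, 1.0, 0.0])                       # binary pattern
      mem.store([0.2, 0.9, 0.1, 0.8])                       # continuous-valued pattern
      print(np.round(mem.recall([0.9, 0.1, 0.8, 0.1]), 3))  # settles near the first pattern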

  17. On the selection of significant variables in a model for the deteriorating process of facades

    NASA Astrophysics Data System (ADS)

    Serrat, C.; Gibert, V.; Casas, J. R.; Rapinski, J.

    2017-10-01

    In previous works the authors of this paper have introduced a predictive system that uses survival analysis techniques for the study of time-to-failure in the facades of a building stock. The approach is population based, in order to obtain information on the evolution of the stock across time and to help the manager in the decision making process on global maintenance strategies. For the decision making it is crucial to determine those covariates (like materials, morphology and characteristics of the facade, orientation or environmental conditions) that play a significant role in the progression of different failures. The proposed platform also incorporates an open source GIS plugin that includes survival and test moduli that allow the investigator to model the time until a lesion appears, taking into account the variables collected during the inspection process. The aim of this paper is twofold: a) to briefly introduce the predictive system, as well as the inspection and the analysis methodologies, and b) to introduce and illustrate the modeling strategy for the deteriorating process of an urban front. The illustration focuses on the city of L’Hospitalet de Llobregat (Barcelona, Spain), in which more than 14,000 facades have been inspected and analyzed.

  18. Analysis of aircraft longitudinal handling qualities

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  19. An analytical approach for predicting pilot induced oscillations

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  20. Relieving the tension between weak lensing and cosmic microwave background with interacting dark matter and dark energy models

    NASA Astrophysics Data System (ADS)

    An, Rui; Feng, Chang; Wang, Bin

    2018-02-01

    We constrain interacting dark matter and dark energy (IDMDE) models using 450 square degrees of cosmic shear data from the Kilo Degree Survey (KiDS) and the angular power spectra from Planck's latest cosmic microwave background measurements. We revisit the discordance problem in the standard Lambda cold dark matter (ΛCDM) model between weak lensing and Planck datasets and extend the discussion by introducing interacting dark sectors. The IDMDE models are found to be able to alleviate the discordance between KiDS and Planck previously inferred from the ΛCDM model, and to be moderately favored by a combination of the two datasets.

  1. Formalization, implementation, and modeling of institutional controllers for distributed robotic systems.

    PubMed

    Pereira, José N; Silva, Porfírio; Lima, Pedro U; Martinoli, Alcherio

    2014-01-01

    The work described is part of a long-term program of introducing institutional robotics, a novel framework for the coordination of robot teams that stems from institutional economics concepts. Under the framework, institutions are cumulative sets of persistent artificial modifications made to the environment or to the internal mechanisms of a subset of agents, thought to be functional for the collective order. In this article we introduce a formal model of institutional controllers based on Petri nets. We define executable Petri nets, an extension of Petri nets that takes into account robot actions and sensing, to design, program, and execute institutional controllers. We use a generalized stochastic Petri net view of the robot team controlled by the institutional controllers to model and analyze the stochastic performance of the resulting distributed robotic system. The ability of our formalism to replicate results obtained using other approaches is assessed through realistic simulations of up to 40 e-puck robots. In particular, we model a robot swarm and its institutional controller with the goal of maintaining wireless connectivity, and successfully compare our model predictions and simulation results with previously reported results, obtained by using finite state automaton models and controllers.

  2. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    NASA Astrophysics Data System (ADS)

    Snodin, Benedict E. K.; Randisi, Ferdinando; Mosayebi, Majid; Šulc, Petr; Schreck, John S.; Romano, Flavio; Ouldridge, Thomas E.; Tsukanov, Roman; Nir, Eyal; Louis, Ard A.; Doye, Jonathan P. K.

    2015-06-01

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.

  3. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snodin, Benedict E. K., E-mail: benedict.snodin@chem.ox.ac.uk; Mosayebi, Majid; Schreck, John S.

    2015-06-21

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.

  4. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not previously observed for the original sandpile. For this critical range of values of the probability, model statistics show remarkable agreement with long-period empirical data from earthquakes from different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while making only minimal changes to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
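
    The single-parameter modification, targeting the most susceptible site with probability p, can be sketched on top of a standard Bak-Tang-Wiesenfeld sandpile (grid size, toppling threshold, number of grains, and p are illustrative values only).

      import numpy as np

      rng = np.random.default_rng(7)
      L, threshold, p_target = 30, 4, 0.005        # illustrative values
      grid = rng.integers(0, threshold, size=(L, L))

      def relax(grid):
          """Topple all over-threshold sites; return the total number of topplings (event size)."""
          size = 0
          while True:
              over = np.argwhere(grid >= threshold)
              if len(over) == 0:
                  return size
              for i, j in over:
                  grid[i, j] -= threshold
                  size += 1
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < L and 0 <= nj < L:    # open boundaries dissipate grains
                          grid[ni, nj] += 1

      sizes = []
      for _ in range(10000):
          if rng.random() < p_target:                     # target the most susceptible site...
              i, j = np.unravel_index(np.argmax(grid), grid.shape)
          else:                                           # ...otherwise drop on a random site (BTW rule)
              i, j = rng.integers(0, L, size=2)
          grid[i, j] += 1
          s = relax(grid)
          if s > 0:
              sizes.append(s)

      print("events:", len(sizes), " mean event size:", round(float(np.mean(sizes)), 2))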

  5. Magnetic field effects on exciplex-forming systems: the effect on the locally excited fluorophore and its dependence on free energy.

    PubMed

    Kattnig, Daniel R; Rosspeintner, Arnulf; Grampp, Günter

    2011-02-28

    This study addresses magnetic field effects in exciplex forming donor-acceptor systems. For moderately exergonic systems, the exciplex and the locally excited fluorophore emission are found to be magneto-sensitive. A previously introduced model attributing this finding to excited state reversibility is confirmed. Systems characterised by a free energy of charge separation up to approximately -0.35 eV are found to exhibit a magnetic field effect on the fluorophore. A simple three-state model of the exciplex is introduced, which uses the reaction distance and the asymmetric electron transfer reaction coordinate as pertinent variables. Comparing the experimental emission band shapes with those predicted by the model, a semi-quantitative picture of the formation of the magnetic field effect is developed based on energy hypersurfaces. The model can also be applied to estimate the indirect contribution of the exchange interaction, even if the perturbative approach fails. The energetic parameters that are essential for the formation of large magnetic field effects on the exciplex are discussed.

  6. Universality, twisted fans, and the Ising model. [Renormalization, two-loop calculations, scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dash, J.W.; Harrington, S.J.

    1975-06-24

    Critical exponents are evaluated for the Ising model using universality in the form of "twisted fans" previously introduced in Reggeon field theory. The universality is with respect to scales induced through renormalization. Exact twists are obtained at β = 0 in one loop for D = 2,3 with ν = 0.75 and 0.60 respectively. In two loops one obtains ν approximately 1.32 and 0.68. No twists are obtained for η, however. The results for the standard two loop calculations are also presented as functions of a scale.

  7. X-ray satellite

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mock-up for the development of the Engineering Model (EM) and Flight Model (FM) is introduced, which shortens the delay with respect to the previously planned launch date of September 30 from 7 weeks to about 3 weeks while maintaining the 4-week reserve. Compared with the new assembly integration test (EM-AIT) schedule of March 11, 1985, the EM data handling system is on the critical path. For the attitude measurement and control subsystem, sufficient flexibility is achieved through a combination of dummies and EM hardware to catch up with the existing delays.

  8. The Military Applications of Cloud Computing Technologies

    DTIC Science & Technology

    2013-05-23

    tactical networks will potentially cause some unique issues when implementing the JIE. Tactical networks are temporary in nature, and are utilized...connected ABCS clients will receive software updates and security patches as they are published over the network, rather than catching up after an extended...approach from the previous JNN network model, in that it introduces a limited, wireless capability to a unit’s LAN that will enable limited, on-the

  9. On the definition and K-theory realization of a modular functor

    NASA Astrophysics Data System (ADS)

    Kriz, Igor; Lai, Luhang

    We present a definition of a (super)-modular functor which includes certain interesting cases that previous definitions do not allow. We also introduce a notion of topological twisting of a modular functor, and construct formally a realization by a 2-dimensional topological field theory valued in twisted K-modules. We discuss, among other things, the N = 1-supersymmetric minimal models from the point of view of this formalism.

  10. Evolving random fractal Cantor superlattices for the infrared using a genetic algorithm

    PubMed Central

    Bossard, Jeremy A.; Lin, Lan; Werner, Douglas H.

    2016-01-01

    Ordered and chaotic superlattices have been identified in Nature that give rise to a variety of colours reflected by the skin of various organisms. In particular, organisms such as silvery fish possess superlattices that reflect a broad range of light from the visible to the UV. Such superlattices have previously been identified as ‘chaotic’, but we propose that apparent ‘chaotic’ natural structures, which have been previously modelled as completely random structures, should have an underlying fractal geometry. Fractal geometry, often described as the geometry of Nature, can be used to mimic structures found in Nature, but deterministic fractals produce structures that are too ‘perfect’ to appear natural. Introducing variability into fractals produces structures that appear more natural. We suggest that the ‘chaotic’ (purely random) superlattices identified in Nature are more accurately modelled by multi-generator fractals. Furthermore, we introduce fractal random Cantor bars as a candidate for generating both ordered and ‘chaotic’ superlattices, such as the ones found in silvery fish. A genetic algorithm is used to evolve optimal fractal random Cantor bars with multiple generators targeting several desired optical functions in the mid-infrared and the near-infrared. We present optimized superlattices demonstrating broadband reflection as well as single and multiple pass bands in the near-infrared regime. PMID:26763335
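
    The random Cantor-bar construction itself can be sketched in a few lines (the two generators, the choice of one generator per generation, and the probabilities are illustrative; the genetic-algorithm optimization and the optical evaluation are not reproduced).

      import numpy as np

      rng = np.random.default_rng(3)

      # Two illustrative generators: each maps a bar (a, b) to the sub-bars it keeps.
      def generator_a(a, b):                     # classic middle-third removal
          third = (b - a) / 3.0
          return [(a, a + third), (b - third, b)]

      def generator_b(a, b):                     # keep two asymmetric quarters
          quarter = (b - a) / 4.0
          return [(a, a + quarter), (a + 2 * quarter, a + 3 * quarter)]

      def random_cantor_bars(n_generations=5, p_generator_a=0.5):
          bars = [(0.0, 1.0)]
          for _ in range(n_generations):         # one randomly chosen generator per generation
              gen = generator_a if rng.random() < p_generator_a else generator_b
              bars = [segment for (a, b) in bars for segment in gen(a, b)]
          return bars

      bars = random_cantor_bars()
      filled = sum(b - a for a, b in bars)
      print(f"{len(bars)} bars, filled fraction = {filled:.4f}")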

  11. Lorentz-symmetry test at Planck-scale suppression with nucleons in a spin-polarized 133Cs cold atom clock

    NASA Astrophysics Data System (ADS)

    Pihan-Le Bars, H.; Guerlin, C.; Lasseri, R.-D.; Ebran, J.-P.; Bailey, Q. G.; Bize, S.; Khan, E.; Wolf, P.

    2017-04-01

    We introduce an improved model that links the frequency shift of the 133Cs hyperfine Zeeman transitions |F = 3, mF⟩ ↔ |F = 4, mF⟩ to the Lorentz-violating Standard Model extension (SME) coefficients of the proton and neutron. The new model uses Lorentz transformations developed to second order in boost and additionally takes the nuclear structure into account, beyond the simple Schmidt model used previously in Standard Model extension analyses, thereby providing access to both proton and neutron SME coefficients including the isotropic coefficient c̃_TT. Using this new model in a second analysis of the data delivered by the FO2 dual Cs/Rb fountain at Paris Observatory and previously analyzed in [1], we improve by up to 13 orders of magnitude the present maximum sensitivities for laboratory tests [2] on the c̃_Q, c̃_TJ, and c̃_TT coefficients for the neutron and on the c̃_Q coefficient for the proton, reaching respectively 10⁻²⁰, 10⁻¹⁷, 10⁻¹³, and 10⁻¹⁵ GeV.

  12. Mixed model of dietary fat effect on postprandial glucose-insulin metabolism from carbohydrates in type 1 diabetes.

    PubMed

    Yamamoto Noguchi, Claudia Cecilia; Kunikane, Noriaki; Hashimoto, Shogo; Furutani, Eiko

    2015-08-01

    In this study we introduce an extension of a previously developed model of glucose-insulin metabolism in type 1 diabetes (T1D) from carbohydrates that includes the effect of dietary fat on postprandial glycemia. We include two compartments that represent plasma triglyceride and nonesterified fatty acid (NEFA) concentration, in addition to a mathematical representation of delayed gastric emptying and insulin resistance, which are the most well-known effects of dietary fat metabolism. Simulation results show that postprandial glucose as well as lipid levels in our model approximates clinical data from T1D patients.

  13. Model-based ultrasound temperature visualization during and following HIFU exposure.

    PubMed

    Ye, Guoliang; Smith, Penny Probert; Noble, J Alison

    2010-02-01

    This paper describes the application of signal processing techniques to improve the robustness of ultrasound feedback for displaying changes in temperature distribution in treatment using high-intensity focused ultrasound (HIFU), especially at the low signal-to-noise ratios that might be expected in in vivo abdominal treatment. Temperature estimation is based on the local displacements in ultrasound images taken during HIFU treatment, and a method to improve robustness to outliers is introduced. The main contribution of the paper is in the application of a Kalman filter, a statistical signal processing technique, which uses a simple analytical temperature model of heat dispersion to improve the temperature estimation from the ultrasound measurements during and after HIFU exposure. To reduce the sensitivity of the method to previous assumptions on the material homogeneity and signal-to-noise ratio, an adaptive form is introduced. The method is illustrated using data from HIFU exposure of ex vivo bovine liver. A particular advantage of the stability it introduces is that the temperature can be visualized not only in the intervals between HIFU exposure but also, for some configurations, during the exposure itself.
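
    A minimal scalar Kalman filter of the kind described can be sketched as follows; the heat-dispersion model is reduced to an assumed first-order cooling law, and the gains, noise variances, and time step are illustrative rather than taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Scalar Kalman filter sketch: the state is the local temperature rise above baseline.
      # Process model: exponential relaxation (a crude stand-in for heat dispersion).
      # Measurements: noisy displacement-derived temperature estimates.
      dt, decay, q_var, r_var = 0.1, 0.05, 0.05, 4.0    # illustrative values
      a = 1.0 - decay * dt                              # discrete cooling factor

      true_T, x_est, p_est = 20.0, 0.0, 10.0            # true rise, filter estimate, estimate variance
      for _ in range(100):
          true_T = a * true_T + rng.normal(0.0, np.sqrt(q_var))
          z = true_T + rng.normal(0.0, np.sqrt(r_var))  # noisy measurement

          x_pred = a * x_est                            # predict
          p_pred = a * a * p_est + q_var
          gain = p_pred / (p_pred + r_var)              # update
          x_est = x_pred + gain * (z - x_pred)
          p_est = (1.0 - gain) * p_pred

      print(f"true rise {true_T:5.2f} C, filtered estimate {x_est:5.2f} C")

    Letting the noise variances adapt to the running innovation statistics would give the adaptive form mentioned above; that refinement is omitted from this sketch.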

  14. A generalized preferential attachment model for business firms growth rates. I. Empirical evidence

    NASA Astrophysics Data System (ADS)

    Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.

    2007-05-01

    We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and depicts an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have been focusing exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.

  15. From the Cover: The growth of business firms: Theoretical framework and empirical evidence

    NASA Astrophysics Data System (ADS)

    Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S. V.; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H. Eugene

    2005-12-01

    We introduce a model of proportional growth to explain the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part and depicts an asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have been focusing exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.

  16. A constructive model potential method for atomic interactions

    NASA Technical Reports Server (NTRS)

    Bottcher, C.; Dalgarno, A.

    1974-01-01

    A model potential method is presented that can be applied to many electron single centre and two centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.

  17. Surface curvature singularities of polytropic spheres in Palatini f (R ,T ) gravity

    NASA Astrophysics Data System (ADS)

    Barrientos O., José; Rubilar, Guillermo F.

    2016-01-01

    We consider Palatini f (R ,T ) gravity models, similar to those introduced by Harko et al. (2012), where the gravitational Lagrangian is given by an arbitrary function of the curvature scalar R and of the trace of the energy-momentum tensor T . Interior spherical static solutions are studied considering the model of matter given by a perfect fluid configuration and a polytropic equation of state. We analyze the curvature singularities found previously for Palatini f (R ) gravity and discuss the possibility to remove them in some particular f (R ,T ) models. We show that it is possible to construct a restricted family of models for which these singularities are not present.

  18. Molecular dynamics modelling of EGCG clusters on ceramide bilayers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeo, Jingjie; Cheng, Yuan; Li, Weifeng

    A novel method of atomistic modelling and characterization of both pure ceramide and mixed lipid bilayers is being developed, using only the General Amber ForceField. Lipid bilayers modelled as pure ceramides adopt hexagonal packing after equilibration, and the area per lipid and bilayer thickness are consistent with previously reported theoretical results. Mixed lipid bilayers are modelled as a combination of ceramides, cholesterol, and free fatty acids. This model is shown to be stable after equilibration. Green tea extract, also known as epigallocatechin-3-gallate, is introduced as a spherical cluster on the surface of the mixed lipid bilayer. It is demonstrated that the cluster is able to bind to the bilayers as a cluster without diffusing into the surrounding water.

  19. Surface hopping with a manifold of electronic states. III. Transients, broadening, and the Marcus picture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Wenjie; Subotnik, Joseph E.; Nitzan, Abraham

    2015-06-21

    In a previous paper [Dou et al., J. Chem. Phys. 142, 084110 (2015)], we have introduced a surface hopping (SH) approach to deal with the Anderson-Holstein model. Here, we address some interesting aspects that have not been discussed previously, including transient phenomena and extensions to arbitrary impurity-bath couplings. In particular, in this paper we show that the SH approach captures phonon coherence beyond the secular approximation, and that SH rates agree with Marcus theory at steady state. Finally, we show that, in cases where the electronic tunneling rate depends on nuclear position, a straightforward use of Marcus theory rates yields a useful starting point for capturing level broadening. For a simple such model, we find I-V curves that exhibit negative differential resistance.

  20. Ultimate patterning limits for EUV at 5nm node and beyond

    NASA Astrophysics Data System (ADS)

    Ali, Rehab Kotb; Hamed Fatehy, Ahmed; Lafferty, Neal; Word, James

    2018-03-01

    The 5nm technology node introduces more aggressive geometries than previous nodes. In this paper, we present a comprehensive study to examine the patterning limits of EUV at 0.33NA. The study is divided into two main approaches: (A) exploring the patterning limits of a single-exposure EUV cut/block mask in a Self-Aligned Multi-Patterning (SAMP) process, and (B) exploring the patterning limits of single-exposure EUV printing of metal layers. The printability of the resulting OPC masks is checked through a model-based manufacturing flow for the two patterning approaches. The final manufactured patterns are quantified by Edge Placement Error (EPE), Process Variation Band (PVBand), soft/hard bridging and pinching, Image Log Slope (ILS), and Common Depth of Focus (CDOF).

  1. Calorie Changes in Large Chain Restaurants

    PubMed Central

    Bleich, Sara N.; Wolfson, Julia A.; Jarlenski, Marian P.

    2015-01-01

    Introduction Large chain restaurants reduced the number of calories in newly introduced menu items in 2013 by about 60 calories (or 12%) relative to 2012. This paper describes trends in calories available in large U.S. chain restaurants to understand whether previously documented patterns persist. Methods Data (a census of items for included restaurants) were obtained from the MenuStat project. This analysis included 66 of the 100 largest U.S. restaurants that are available in all three years of the data (2012–2014; N=23,066 items). Generalized linear models were used to examine: (1) per-item calorie changes from 2012 to 2014 among items on the menu in all years; and (2) mean calories in new items in 2013 and 2014 compared with items on the menu in 2012 only. Data were analyzed in 2014. Results Overall, calories in newly introduced menu items declined by 71 (or 15%) from 2012 to 2013 (p=0.001) and by 69 (or 14%) from 2012 to 2014 (p=0.03). These declines were concentrated mainly in new main course items (85 fewer calories in 2013 and 55 fewer calories in 2014; p=0.01). Although average calories in newly introduced menu items are declining, they are higher than items common to the menu in all 3 years. No differences in mean calories among items on menus in 2012, 2013, or 2014 were found. Conclusions The previously observed declines in newly introduced menu items among large restaurant chains have been maintained, which suggests the beginning of a trend toward reducing calories. PMID:26163168

  2. A Comprehensive Model of the Near-Earth Magnetic Field. Phase 3

    NASA Technical Reports Server (NTRS)

    Sabaka, Terence J.; Olsen, Nils; Langel, Robert A.

    2000-01-01

    The near-Earth magnetic field is due to sources in Earth's core, ionosphere, magnetosphere, lithosphere, and from coupling currents between ionosphere and magnetosphere and between hemispheres. Traditionally, the main field (low degree internal field) and magnetospheric field have been modeled simultaneously, and fields from other sources modeled separately. Such a scheme, however, can introduce spurious features. A new model, designated CMP3 (Comprehensive Model: Phase 3), has been derived from quiet-time Magsat and POGO satellite measurements and observatory hourly and annual means measurements as part of an effort to coestimate fields from all of these sources. This model represents a significant advancement in the treatment of the aforementioned field sources over previous attempts, and includes an accounting for main field influences on the magnetosphere, main field and solar activity influences on the ionosphere, seasonal influences on the coupling currents, a priori characterization of ionospheric and magnetospheric influence on Earth-induced fields, and an explicit parameterization and estimation of the lithospheric field. The result of this effort is a model whose fits to the data are generally superior to previous models and whose parameter states for the various constituent sources are very reasonable.

  3. Neuronvisio: A Graphical User Interface with 3D Capabilities for NEURON.

    PubMed

    Mattioni, Michele; Cohen, Uri; Le Novère, Nicolas

    2012-01-01

    The NEURON simulation environment is a commonly used tool to perform electrical simulation of neurons and neuronal networks. The NEURON User Interface, based on the now discontinued InterViews library, provides some limited facilities to explore models and to plot their simulation results. Other limitations include the inability to generate a three-dimensional visualization, the lack of a standard means to save the results of simulations, and the inability to store the model geometry within the results. Neuronvisio (http://neuronvisio.org) aims to address these deficiencies through a set of well designed python APIs and provides an improved UI, allowing users to explore and interact with the model. Neuronvisio also facilitates access to previously published models, allowing users to browse, download, and locally run NEURON models stored in ModelDB. Neuronvisio uses the matplotlib library to plot simulation results and uses the HDF standard format to store simulation results. Neuronvisio can be viewed as an extension of NEURON, facilitating typical user workflows such as model browsing, selection, download, compilation, and simulation. The 3D viewer simplifies the exploration of complex model structure, while matplotlib permits the plotting of high-quality graphs. The newly introduced ability to save numerical results allows users to perform additional analyses on their previous simulations.

  4. A Generalized Information Theoretical Model for Quantum Secret Sharing

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming

    2016-11-01

    An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80 2005), which was analyzed by quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. By the analysis we propose a generalized model definition for the quantum secret sharing schemes. In our model, there are more quantum access structures which can be realized by our generalized quantum secret sharing schemes than those of the previous one. In addition, we also analyse two kinds of important quantum access structures to illustrate the existence and rationality for the generalized quantum secret sharing schemes and consider the security of the scheme by simple examples.

  5. Molecular dynamics modelling of EGCG clusters on ceramide bilayers

    NASA Astrophysics Data System (ADS)

    Yeo, Jingjie; Cheng, Yuan; Li, Weifeng; Zhang, Yong-Wei

    2015-12-01

    A novel method of atomistic modelling and characterization of both pure ceramide and mixed lipid bilayers is being developed, using only the General Amber ForceField. Lipid bilayers modelled as pure ceramides adopt hexagonal packing after equilibration, and the area per lipid and bilayer thickness are consistent with previously reported theoretical results. Mixed lipid bilayers are modelled as a combination of ceramides, cholesterol, and free fatty acids. This model is shown to be stable after equilibration. Green tea extract, also known as epigallocatechin-3-gallate, is introduced as a spherical cluster on the surface of the mixed lipid bilayer. It is demonstrated that the cluster is able to bind to the bilayers as a cluster without diffusing into the surrounding water.

  6. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    PubMed Central

    Weiss, Brandi A.; Dardick, William

    2015-01-01

    This article introduces an entropy-based measure of data–model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data–model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data–model fit to assess how well logistic regression models classify cases into observed categories. PMID:29795897
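
    The measure can be computed directly from the fitted class-membership probabilities; the sketch below uses the common mixture-modeling normalization (1 = perfectly separated, 0 = maximally fuzzy), which may differ in detail from the authors' exact formula.

      import numpy as np

      def classification_entropy(p):
          """Entropy-based separation measure for binary predicted probabilities p (values in [0, 1])."""
          p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0 - 1e-12)
          per_case = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))   # entropy of each case
          return 1.0 - per_case.mean() / np.log(2.0)                  # normalize by the maximum entropy

      # Confident predictions score near 1, ambiguous ones near 0.
      print(round(classification_entropy([0.95, 0.02, 0.99, 0.01]), 2))   # ~0.85
      print(round(classification_entropy([0.55, 0.45, 0.60, 0.50]), 2))   # ~0.01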

  7. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression.

    PubMed

    Weiss, Brandi A; Dardick, William

    2016-12-01

    This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data-model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data-model fit to assess how well logistic regression models classify cases into observed categories.

  8. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    NASA Astrophysics Data System (ADS)

    Kang, J.; Lee, I.

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and these spaces are often connected directly to public transportation such as subway and train stations. These phenomena shift many outdoor activities into indoor spaces. Constant development of technology has a significant impact on people's expectations of services such as location-awareness services in indoor spaces. It is therefore necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system that has two cameras and generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated a 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the generated indoor model from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and to generate 3D models from the images acquired by the system. Through these experiments, we show that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system can be applied to indoor services based on indoor spatial information.

  9. Inhalation exposure to cleaning products: application of a two-zone model.

    PubMed

    Earnest, C Matt; Corsi, Richard L

    2013-01-01

    In this study, modifications were made to previously applied two-zone models to address important factors that can affect exposures during cleaning tasks. Specifically, we expand on previous applications of the two-zone model by (1) introducing the source in discrete elements (source-cells) as opposed to a complete instantaneous release, (2) placing source cells in both the inner (near person) and outer zones concurrently, (3) treating each source cell as an independent mixture of multiple constituents, and (4) tracking the time-varying liquid concentration and emission rate of each constituent in each source cell. Three experiments were performed in an environmentally controlled chamber with a thermal mannequin and a simplified pure chemical source to simulate emissions from a cleaning product. Gas phase concentration measurements were taken in the bulk air and in the breathing zone of the mannequin to evaluate the model. The mean ratio of the integrated concentration in the mannequin's breathing zone to the concentration in the outer zone was 4.3 (standard deviation, σ = 1.6). The mean ratio of measured concentration in the breathing zone to predicted concentrations in the inner zone was 0.81 (σ = 0.16). Intake fractions ranged from 1.9 × 10⁻³ to 2.7 × 10⁻³. Model results reasonably predict those of previous exposure monitoring studies and indicate the inadequacy of well-mixed single-zone model applications for some but not all cleaning events.
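
    The underlying two-zone mass balance can be written as a pair of coupled ODEs and stepped in time; the sketch below uses a single constant-emission source cell in the inner zone, and every volume, flow, and emission value is an illustrative assumption rather than a parameter of the study.

      # Two-zone (near-field / far-field) mass balance, explicit Euler integration.
      V_inner, V_outer = 1.0, 30.0        # zone volumes, m^3 (illustrative)
      beta = 0.2                          # inter-zone airflow, m^3/min
      Q = 0.5                             # room ventilation, m^3/min
      G = 10.0                            # emission rate of one source cell, mg/min
      dt, t_end = 0.01, 300.0             # time step and duration, min

      c_in, c_out = 0.0, 0.0              # concentrations, mg/m^3
      for _ in range(int(t_end / dt)):
          dc_in = (G + beta * (c_out - c_in)) / V_inner
          dc_out = (beta * (c_in - c_out) - Q * c_out) / V_outer
          c_in += dc_in * dt
          c_out += dc_out * dt

      print(f"inner {c_in:.1f} mg/m^3, outer {c_out:.1f} mg/m^3, ratio {c_in / c_out:.2f}")

    At steady state the inner/outer ratio here is 1 + Q/beta, about 3.5 with the assumed flows, which illustrates qualitatively why a well-mixed single-zone model underestimates breathing-zone concentrations.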

  10. Moduli of quantum Riemannian geometries on ⩽4 points

    NASA Astrophysics Data System (ADS)

    Majid, S.; Raineri, E.

    2004-12-01

    We classify parallelizable noncommutative manifold structures on finite sets of small size in the general formalism of framed quantum manifolds and vielbeins introduced previously [S. Majid, Commun. Math. Phys. 225, 131 (2002)]. The full moduli space is found for ⩽3 points, and a restricted moduli space for 4 points. Generalized Levi-Cività connections and their curvatures are found for a variety of models including models of a discrete torus. The topological part of the moduli space is found for ⩽9 points based on the known atlas of regular graphs. We also remark on aspects of quantum gravity in this approach.

  11. Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.

    PubMed

    van Elburg, Ronald A J; van Ooyen, Arjen

    2009-07-01

    An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
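
    The core of any event-based scheme is that the subthreshold state can be advanced exactly between events; the closed-form update below is for a leaky integrate-and-fire neuron with a single exponentially decaying synaptic current (a simplified stand-in rather than the Carnevale-Hines scheme itself, and it assumes the two time constants differ).

      import numpy as np

      def advance(v0, i0, dt, tau_m=20.0, tau_s=5.0, capacitance=1.0):
          """
          Exact subthreshold update over an event-free interval dt (ms) for
              dv/dt = -v/tau_m + i/C,    di/dt = -i/tau_s     (tau_s != tau_m assumed).
          """
          a = (i0 / capacitance) * tau_m * tau_s / (tau_s - tau_m)   # particular-solution amplitude
          v = (v0 - a) * np.exp(-dt / tau_m) + a * np.exp(-dt / tau_s)
          i = i0 * np.exp(-dt / tau_s)
          return v, i

      # Event-driven loop: the state is only touched at incoming spike times.
      # (A full scheme would also locate threshold crossings between events.)
      spike_times = [5.0, 12.0, 13.0, 30.0]       # ms, presynaptic events (illustrative)
      weight, threshold = 0.8, 1.0
      t, v, i = 0.0, 0.0, 0.0
      for ts in spike_times:
          v, i = advance(v, i, ts - t)            # jump straight to the next event
          i += weight                             # synaptic current increments at the event
          t = ts
          if v >= threshold:
              print(f"spike detected at t = {t} ms")
              v = 0.0                             # reset after a spike
      print(f"state at t = {t} ms: v = {v:.3f}, i = {i:.3f}")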

  12. Microblog sentiment analysis using social and topic context.

    PubMed

    Zou, Xiaomei; Yang, Jing; Zhang, Jianpei

    2018-01-01

    Analyzing massive user-generated microblogs is crucial in many fields and has attracted many researchers. However, it is very challenging to process such noisy and short microblogs. Most prior works only use texts to identify sentiment polarity and assume that microblogs are independent and identically distributed, ignoring the fact that microblogs are networked data. Therefore, their performance is not usually satisfactory. Inspired by two sociological theories (sentimental consistency and emotional contagion), in this paper we propose a new method combining social context and topic context to analyze microblog sentiment. In particular, different from previous work using direct user relations, we introduce structure similarity context into social contexts and propose a method to measure structure similarity. In addition, we also introduce topic context to model the semantic relations between microblogs. Social context and topic context are combined through the Laplacian matrix of the graph built from these contexts, and the corresponding Laplacian regularization is added to the microblog sentiment analysis model. Experimental results on two real Twitter datasets demonstrate that our proposed model can outperform baseline methods consistently and significantly.
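
    The way the two contexts enter the model can be sketched generically: adjacency matrices from the social context and the topic context are merged into one graph Laplacian L, and a smoothness term lambda * f'Lf is added to a standard logistic loss so that connected microblogs prefer similar sentiment scores. The combination weights, the toy features, and the tiny graph below are assumptions, not the authors' data or exact objective.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy Laplacian-regularized logistic regression for microblog sentiment.
      n, d = 6, 4
      y = np.array([1, 1, 1, 0, 0, 0])                      # sentiment labels
      X = rng.standard_normal((n, d)) + 0.5 * y[:, None]    # text features with a little signal
      A_social = np.array([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0],
                           [0, 0, 0, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]], dtype=float)
      A_topic = A_social.copy()                             # stand-in topic-similarity graph
      A = 0.5 * A_social + 0.5 * A_topic                    # combined context graph
      L = np.diag(A.sum(axis=1)) - A                        # graph Laplacian

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      w, lam, lr = np.zeros(d), 0.5, 0.1
      for _ in range(500):
          f = X @ w                                         # sentiment scores
          p = sigmoid(f)
          grad_logistic = X.T @ (p - y) / n                 # gradient of the mean log-loss
          grad_smooth = 2.0 * lam * X.T @ (L @ f) / n       # gradient of (lam/n) * f'Lf
          w -= lr * (grad_logistic + grad_smooth)

      print("predicted sentiment probabilities:", np.round(sigmoid(X @ w), 2))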

  13. The HIrisPlex-S system for eye, hair and skin colour prediction from DNA: Introduction and forensic developmental validation.

    PubMed

    Chaitanya, Lakshmi; Breslin, Krystal; Zuñiga, Sofia; Wirken, Laura; Pośpiech, Ewelina; Kukla-Bartoszek, Magdalena; Sijen, Titia; Knijff, Peter de; Liu, Fan; Branicki, Wojciech; Kayser, Manfred; Walsh, Susan

    2018-07-01

    Forensic DNA Phenotyping (FDP), i.e. the prediction of human externally visible traits from DNA, has become a fast growing subfield within forensic genetics due to the intelligence information it can provide from DNA traces. FDP outcomes can help focus police investigations in search of unknown perpetrators, who are generally unidentifiable with standard DNA profiling. Therefore, we previously developed and forensically validated the IrisPlex DNA test system for eye colour prediction and the HIrisPlex system for combined eye and hair colour prediction from DNA traces. Here we introduce and forensically validate the HIrisPlex-S DNA test system (S for skin) for the simultaneous prediction of eye, hair, and skin colour from trace DNA. This FDP system consists of two SNaPshot-based multiplex assays targeting a total of 41 SNPs via a novel multiplex assay for 17 skin colour predictive SNPs and the previous HIrisPlex assay for 24 eye and hair colour predictive SNPs, 19 of which also contribute to skin colour prediction. The HIrisPlex-S system further comprises three statistical prediction models: the previously developed IrisPlex model for eye colour prediction based on 6 SNPs, the previous HIrisPlex model for hair colour prediction based on 22 SNPs, and the recently introduced HIrisPlex-S model for skin colour prediction based on 36 SNPs. In the forensic developmental validation testing, the novel 17-plex assay performed in full agreement with the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines, as previously shown for the 24-plex assay. Sensitivity testing of the 17-plex assay revealed complete SNP profiles from as little as 63 pg of input DNA, equalling the previously demonstrated sensitivity threshold of the 24-plex HIrisPlex assay. Testing of simulated forensic casework samples such as blood, semen and saliva stains, of inhibited DNA samples, of low-quantity touch (trace) DNA samples, and of artificially degraded DNA samples, as well as concordance testing, demonstrated the robustness, efficiency, and forensic suitability of the new 17-plex assay, as previously shown for the 24-plex assay. Finally, we provide an update to the publicly available HIrisPlex website https://hirisplex.erasmusmc.nl/, now allowing the estimation of individual probabilities for 3 eye, 4 hair, and 5 skin colour categories from HIrisPlex-S input genotypes. The HIrisPlex-S DNA test represents the first forensically validated tool for skin colour prediction, and the first forensically validated tool for simultaneous eye, hair and skin colour prediction from DNA. Copyright © 2018 Elsevier B.V. All rights reserved.
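
    The underlying IrisPlex/HIrisPlex prediction models are, to our understanding, multinomial logistic regressions on SNP genotypes. The sketch below shows only the general form of such a prediction with made-up SNPs and coefficients; the published HIrisPlex-S parameters and SNP set are not reproduced here.

    ```python
    import numpy as np

    # Made-up example: 3 SNPs coded as minor-allele counts (0, 1, 2) and
    # 3 eye-colour categories (blue, intermediate, brown). Coefficients are
    # illustrative only, not the published HIrisPlex-S parameters.
    genotype = np.array([1, 2, 0])

    intercepts = np.array([0.5, -0.2])          # one row per non-reference category
    betas = np.array([[ 1.2, -0.4,  0.3],        # blue vs brown
                      [ 0.2,  0.1, -0.5]])       # intermediate vs brown

    logits = intercepts + betas @ genotype       # brown is the reference category
    expl = np.exp(np.append(logits, 0.0))
    probs = expl / expl.sum()
    for name, p in zip(["blue", "intermediate", "brown"], probs):
        print(f"P({name}) = {p:.3f}")
    ```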

  14. A Statistical Atrioventricular Node Model Accounting for Pathway Switching During Atrial Fibrillation.

    PubMed

    Henriksson, Mikael; Corino, Valentina D A; Sornmo, Leif; Sandberg, Frida

    2016-09-01

    The atrioventricular (AV) node plays a central role in atrial fibrillation (AF), as it influences the conduction of impulses from the atria into the ventricles. In this paper, the statistical dual pathway AV node model, previously introduced by us, is modified so that it accounts for atrial impulse pathway switching even if the preceding impulse did not cause a ventricular activation. The proposed change in model structure implies that the number of model parameters subjected to maximum likelihood estimation is reduced from five to four. The model is evaluated using the data acquired in the RATe control in atrial fibrillation (RATAF) study, involving 24-h ECG recordings from 60 patients with permanent AF. When fitting the models to the RATAF database, similar results were obtained for both the present and the previous model, with a median fit of 86%. The results show that the parameter estimates characterizing refractory period prolongation exhibit considerably lower variation when using the present model, a finding that may be ascribed to fewer model parameters. The new model maintains the capability to model RR intervals, while providing more reliable parameter estimates. The model parameters are expected to convey novel clinical information, and may be useful for predicting the effect of rate control drugs.

  15. Stochastic models for regulatory networks of the genetic toggle switch.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2006-05-30

    Bistability arises within a wide range of biological systems from the lambda phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
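
    The core technique, replacing deterministic increments with Poisson random variates, can be illustrated on a generic two-gene toggle switch. The rate forms and parameter values below are placeholders chosen for illustration, not the specific SOS or quorum-sensing models studied in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder toggle-switch rates: mutual repression with Hill kinetics.
    a1, a2, n, dt = 50.0, 50.0, 2.0, 0.01
    x, y = 5.0, 5.0

    for step in range(50000):
        prod_x = a1 / (1.0 + y**n)      # production of x, repressed by y
        prod_y = a2 / (1.0 + x**n)      # production of y, repressed by x
        deg_x, deg_y = x, y             # first-order degradation

        # Deterministic model would do: x += (prod_x - deg_x) * dt
        # Stochastic version: each reaction channel fires a Poisson number of times.
        x += rng.poisson(prod_x * dt) - rng.poisson(deg_x * dt)
        y += rng.poisson(prod_y * dt) - rng.poisson(deg_y * dt)
        x, y = max(x, 0.0), max(y, 0.0)

    print(f"final state: x = {x:.0f}, y = {y:.0f}")  # typically one of the two bistable branches
    ```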

  16. Stochastic models for regulatory networks of the genetic toggle switch

    PubMed Central

    Tian, Tianhai; Burrage, Kevin

    2006-01-01

    Bistability arises within a wide range of biological systems from the λ phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks. PMID:16714385

  17. Design and development of second order MEMS sound pressure gradient sensor

    NASA Astrophysics Data System (ADS)

    Albahri, Shehab

    The design and development of a second order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing ability of the parasitoid fly, Ormia ochracea, a novel first order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While first order directional microphones can be very beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second order systems. The second order directional microphone can provide a theoretical improvement in signal-to-noise ratio (SNR) of 9.5 dB, compared to the first order system with its maximum SNR of 6 dB. Although the second order microphone is more sensitive to the sound angle of incidence, the nature of the design and fabrication process imposes different factors that could lead to deterioration of its performance. The first Ormia ochracea second order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results of the tested parts indicate that this microphone performs mostly as an omnidirectional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing the finite element package ANSYS is used to examine the response of the previous design. This new tool is used to study factors that were ignored in the previous design, mainly response mismatch and fabrication uncertainty. A continuous model based on Hamilton's principle is introduced to verify the results obtained with the new method. Both models agree well and suggest a new way of optimizing the second order directional microphone through geometrical manipulation. In this work we also introduce a new fabrication process flow to increase the fabrication yield. The newly suggested method uses the layered shell analysis method in ANSYS; the developed models simulate the fabricated chips at different stages, with the stress at each layer introduced using thermal loading. The results indicate a fabrication process flow that increases the rigidity of the composite layers and counters the deformation caused by the high stress in the thermal oxide layer.

  18. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.

  19. Using travel socialization and underlying motivations to better understand motorcycle usage in Taiwan.

    PubMed

    Chang, Hsin-Li; Lai, Chi-Yen

    2015-06-01

    This study introduces self-determination theory (SDT) to refine previous models of vehicle usage motivation. We add travel socialization theory regarding parental influence on vehicle usage to enhance previous structural models describing motorcycle usage behavior. Our newly developed model was empirically verified in a sample of 721 motorcycle users in Taiwan. In addition to instrumental, symbolic, and affective motivations, perceived parental attitudes (PPAs) towards motorcycle riding were found to have a significant effect on individuals' motorcycle use habits. Additionally, participants who perceived their parents to have more positive attitudes toward motorcycles were found to have more experience being chauffeured on motorcycles by their parents. Based on these results, we suggest means to confront the challenges brought on by the rapid growth of motorcycle usage, especially serious motorcycle traffic accidents. These results improve our understanding of motorcycle usage in Taiwan and can be used by transportation professionals who are seeking solutions to the rapid growth of motorcycle usage. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. All-optical switch using optically controlled two mode interference coupler.

    PubMed

    Sahu, Partha Pratim

    2012-05-10

    In this paper, we introduce an optically controlled two-mode interference (OTMI) coupler with a silicon core and GaAsInP cladding as an all-optical switch. By taking advantage of refractive-index modulation induced by launching an optical pulse into the cladding region of the TMI waveguide, we demonstrate optically controlled switching operation. We study the optical pulse-controlled coupling characteristics of the proposed device using a simple mathematical model based on sinusoidal modes. The device length is less than that of previous work. It is also seen that the cross talk of the OTMI switch does not increase significantly with fabrication tolerances (±δw), in comparison with previous work.
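
    In a two-mode interference section, the sinusoidal-mode description boils down to power oscillating between the two output ports with a beat length set by the effective-index difference of the two lowest modes; an index change induced in the cladding shifts that beat length and switches the output port. The wavelength, index values, and induced index change below are placeholders, not the device's actual parameters.

    ```python
    import numpy as np

    lam = 1.55e-6          # operating wavelength (m), placeholder
    n0, n1 = 3.480, 3.475  # effective indices of the two interfering modes (placeholders)

    Lc = lam / (2.0 * (n0 - n1))        # coupling (beat) length of the TMI section

    def cross_power(L, dn_control=0.0):
        """Fraction of power in the cross port for a TMI section of length L.
        dn_control mimics the optically induced index change in the cladding,
        which shifts the beat length and hence switches the output port."""
        Lc_eff = lam / (2.0 * (n0 - n1 + dn_control))
        return np.sin(np.pi * L / (2.0 * Lc_eff)) ** 2

    L = 2 * Lc                           # choose a length that starts in the bar state
    print(f"cross power (pump off): {cross_power(L):.2f}")
    print(f"cross power (pump on):  {cross_power(L, dn_control=2.5e-3):.2f}")
    ```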

  1. Predicting healthcare trajectories from medical records: A deep learning approach.

    PubMed

    Pham, Trang; Tran, Truyen; Phung, Dinh; Venkatesh, Svetha

    2017-05-01

    Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
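
    One plausible way to handle irregularly timed events in an LSTM, in the spirit of DeepCare's moderated forgetting, is to attenuate the carried cell memory by the time elapsed between care episodes before applying the standard gate update. The decay form, time scale, and toy dimensions below are assumptions for illustration, not the published DeepCare equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_h = 8, 16                                   # toy feature and hidden sizes

    # Randomly initialised LSTM parameters (illustration only).
    W = rng.normal(scale=0.1, size=(4 * d_h, d_in + d_h))
    b = np.zeros(4 * d_h)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def step(x, h, c, delta_t, tau=180.0):
        """One LSTM step where the cell memory decays with the time (in days)
        elapsed since the previous care episode."""
        c = c * np.exp(-delta_t / tau)                  # time-based forgetting
        z = W @ np.concatenate([x, h]) + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

    h, c = np.zeros(d_h), np.zeros(d_h)
    episodes = [(rng.normal(size=d_in), dt) for dt in (0.0, 12.0, 400.0)]  # (features, gap in days)
    for x, dt in episodes:
        h, c = step(x, h, c, dt)
    print(h[:4])
    ```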

  2. Aerosol modelling and validation during ESCOMPTE 2001

    NASA Astrophysics Data System (ADS)

    Cousin, F.; Liousse, C.; Cachier, H.; Bessagnet, B.; Guillaume, B.; Rosset, R.

    The ESCOMPTE 2001 programme (Atmospheric Research. 69(3-4) (2004) 241) has resulted in an exhaustive set of dynamical, radiative, gas and aerosol observations (surface and aircraft measurements). A previous paper (Atmospheric Research. (2004) in press) dealt with dynamics and gas-phase chemistry. The present paper is an extension to aerosol formation, transport and evolution. To account for important loadings of primary and secondary aerosols and their transformation processes in the ESCOMPTE domain, the ORISAM aerosol module (Atmospheric Environment. 35 (2001) 4751) was implemented on-line in the air-quality Meso-NH-C model. Additional developments were introduced in the ORganic and Inorganic Spectral Aerosol Module (ORISAM) to improve the comparison between simulations and experimental surface and aircraft field data. This paper discusses this comparison for a simulation performed for one selected day, 24 June 2001, during the Intensive Observation Period IOP2b. Our work relies on BC and OCp emission inventories specifically developed for ESCOMPTE. This study confirms the need for a fine-resolution aerosol inventory with spectral chemical speciation. BC levels are satisfactorily reproduced, thus validating our emission inventory and its processing through Meso-NH-C. However, comparisons for reactive species generally show an underestimation of concentrations. Organic aerosol levels are rather well simulated, though with a trend to underestimation in the afternoon. Inorganic aerosol species are underestimated for several reasons, some of which have been identified. For sulphates, primary emissions were introduced. Improvements were also obtained for modelled nitrate and ammonium levels after introducing heterogeneous chemistry; however, the absence of terrigenous particles in the model is probably a major cause of the nitrate and ammonium underestimations. Particle numbers and size distributions are well reproduced, but only in the submicrometer range. Our work points to the need to introduce coarse dust particles to further improve the simulation of PM-10 concentrations and to model gas-particle interactions more accurately.

  3. Skin-electrode circuit model for use in optimizing energy transfer in volume conduction systems.

    PubMed

    Hackworth, Steven A; Sun, Mingui; Sclabassi, Robert J

    2009-01-01

    The X-Delta model for through-skin volume conduction systems is introduced and analyzed. This new model has advantages over our previous X model in that it explicitly represents current pathways in the skin. A vector network analyzer is used to take measurements on pig skin to obtain data for finding the model's impedance parameters. An optimization method for obtaining this more complex model's parameters is described. Results show the model to accurately represent the impedance behavior of the skin system, with errors generally below one percent. Uses for the model include optimizing energy transfer across the skin in a volume conduction system with appropriate current exposure constraints, and exploring non-linear behavior of the electrode-skin system at moderate voltages (below ten volts) and frequencies (kilohertz to megahertz).

  4. Spectra, current flow, and wave-function morphology in a model PT -symmetric quantum dot with external interactions

    NASA Astrophysics Data System (ADS)

    Tellander, Felix; Berggren, Karl-Fredrik

    2017-04-01

    In this paper we use numerical simulations to study a two-dimensional (2D) quantum dot (cavity) with two leads for passing currents (electrons, photons, etc.) through the system. By introducing an imaginary potential in each lead the system is made symmetric under parity-time inversion (PT symmetric). This system is experimentally realizable in the form of, e.g., quantum dots in low-dimensional semiconductors, optical and electromagnetic cavities, and other classical wave analogs. The computational model introduced here for studying spectra, exceptional points (EPs), wave-function symmetries and morphology, and current flow includes thousands of interacting states. This supplements previous analytic studies of few interacting states by providing more detail and higher resolution. The Hamiltonian describing the system is non-Hermitian; thus, the eigenvalues are, in general, complex. The structure of the wave functions and probability current densities are studied in detail at and in between EPs. The statistics for EPs is evaluated, and reasons for a gradual dynamical crossover are identified.
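
    The essence of the exceptional-point behaviour can already be seen in a two-site toy Hamiltonian with balanced gain and loss (+iγ in one lead, -iγ in the other). The full computational model in the paper resolves thousands of interacting states, but the coalescence of eigenvalues when the imaginary potential reaches the coupling strength is the same mechanism; the numbers below are arbitrary.

    ```python
    import numpy as np

    E0, g = 0.0, 1.0                 # site energy and inter-site coupling (arbitrary units)

    for gamma in (0.5, 1.0, 1.5):    # imaginary potential strength in the leads
        H = np.array([[E0 + 1j * gamma, g],
                      [g, E0 - 1j * gamma]])   # PT-symmetric, non-Hermitian
        ev = np.linalg.eigvals(H)
        print(f"gamma = {gamma:3.1f}  eigenvalues = {np.round(ev, 3)}")

    # For gamma < g the spectrum is real (PT-unbroken phase); the eigenvalues
    # coalesce at the exceptional point gamma = g and become a complex-conjugate
    # pair for gamma > g (PT-broken phase).
    ```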

  5. Towards a Compositional SPIN

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2006-01-01

    This paper discusses our initial experience with introducing automated assume-guarantee verification based on learning in the SPIN tool. We believe that compositional verification techniques such as assume-guarantee reasoning could complement the state-reduction techniques that SPIN already supports, thus increasing the size of systems that SPIN can handle. We present a "light-weight" approach to evaluating the benefits of learning-based assume-guarantee reasoning in the context of SPIN: we turn our previous implementation of learning for the LTSA tool into a main program that externally invokes SPIN to provide the model checking-related answers. Despite its performance overheads (which mandate a future implementation within SPIN itself), this approach provides accurate information about the savings in memory. We have experimented with several versions of learning-based assume-guarantee reasoning, including a novel heuristic introduced here for generating component assumptions when their environment is unavailable. We illustrate the benefits of learning-based assume-guarantee reasoning in SPIN through the example of a resource arbiter for a spacecraft. Keywords: assume-guarantee reasoning, model checking, learning.

  6. Transcription elongation. Heterogeneous tracking of RNA polymerase and its biological implications.

    PubMed

    Imashimizu, Masahiko; Shimamoto, Nobuo; Oshima, Taku; Kashlev, Mikhail

    2014-01-01

    Regulation of transcription elongation via pausing of RNA polymerase has multiple physiological roles. The pausing mechanism depends on the sequence heterogeneity of the DNA being transcribed, as well as on certain interactions of polymerase with specific DNA sequences. In order to describe the mechanism of regulation, we introduce the concept of heterogeneity into the previously proposed alternative models of elongation, power stroke and Brownian ratchet. We also discuss molecular origins and physiological significances of the heterogeneity.

  7. Micro thermal energy harvester design optimization

    NASA Astrophysics Data System (ADS)

    Trioux, E.; Monfray, S.; Basrour, S.

    2017-11-01

    This paper reports recent progress on a new technology to scavenge thermal energy, involving a double-step transduction through the thermal buckling of a bilayer aluminum nitride/aluminum bridge and piezoelectric transduction. A completely new scavenger design is presented, with improved performance. The butterfly shape reduces the overall mechanical rigidity of the device, which lowers the buckling temperatures compared to the previously studied rectangular plates. First, an analytical model explains the basic principle of the presented device. A numerical model then completes the explanation by introducing the butterfly-shaped structure. Finally, the fabrication process is briefly described and both the rectangular and butterfly harvesters are characterized. We first compare their performances with equal thicknesses of Al and AlN; then, with an Al layer thicker than the AlN layer, we characterize only the butterfly structure in terms of output power and buckling temperatures, and compare it to the previous stack.

  8. An age-specific biokinetic model for iodine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leggett, Richard Wayne

    This study reviews age-specific biokinetic data for iodine in humans and extends to pre-adult ages the baseline parameter values of the author’s previously published model for systemic iodine in adult humans. Compared with the ICRP’s current age-specific model for iodine introduced in Publication 56 (1989), the present model provides a more detailed description of the behavior of iodine in the human body; predicts greater cumulative (integrated) activity in the thyroid for short-lived isotopes of iodine; predicts similar cumulative activity in the thyroid for isotopes with half-time greater than a few hours; and, for most iodine isotopes, predicts much greater cumulative activity in salivary glands, stomach wall, liver, and kidneys.

  9. An age-specific biokinetic model for iodine

    DOE PAGES

    Leggett, Richard Wayne

    2017-10-26

    This study reviews age-specific biokinetic data for iodine in humans and extends to pre-adult ages the baseline parameter values of the author’s previously published model for systemic iodine in adult humans. Compared with the ICRP’s current age-specific model for iodine introduced in Publication 56 (1989), the present model provides a more detailed description of the behavior of iodine in the human body; predicts greater cumulative (integrated) activity in the thyroid for short-lived isotopes of iodine; predicts similar cumulative activity in the thyroid for isotopes with half-time greater than a few hours; and, for most iodine isotopes, predicts much greater cumulative activity in salivary glands, stomach wall, liver, and kidneys.

  10. Expanding (3+1)-dimensional universe from a lorentzian matrix model for superstring theory in (9+1) dimensions.

    PubMed

    Kim, Sang-Woo; Nishimura, Jun; Tsuchiya, Asato

    2012-01-06

    We reconsider the matrix model formulation of type IIB superstring theory in (9+1)-dimensional space-time. Unlike the previous works in which the Wick rotation was used to make the model well defined, we regularize the Lorentzian model by introducing infrared cutoffs in both the spatial and temporal directions. Monte Carlo studies reveal that the two cutoffs can be removed in the large-N limit and that the theory thus obtained has no parameters other than one scale parameter. Moreover, we find that three out of nine spatial directions start to expand at some "critical time," after which the space has SO(3) symmetry instead of SO(9).

  11. An alpha particle model for Carbon-12

    NASA Astrophysics Data System (ADS)

    Rawlinson, J. I.

    2018-07-01

    We introduce a new model for the Carbon-12 nucleus and compute its lowest energy levels. Our model is inspired by previous work on the rigid body approximation in the B = 12 sector of the Skyrme model. We go beyond this approximation and treat the nucleus as a deformable body, finding several new states. A restricted set of deformations is considered, leading to a configuration space C which has a graph-like structure. We use ideas from quantum graph theory in order to make sense of quantum mechanics on C even though it is not a manifold. This is a new approach to Skyrmion quantisation and the method presented in this paper could be applied to a variety of other problems.

  12. Anisotropic charged generalized polytropic models

    NASA Astrophysics Data System (ADS)

    Nasim, A.; Azam, M.

    2018-06-01

    In this paper, we find new anisotropic charged models admitting a generalized polytropic equation of state with spherical symmetry. An analytic solution of the Einstein-Maxwell field equations is obtained through the transformation introduced by Durgapal and Banerji (Phys. Rev. D 27:328, 1983). The physical viability of solutions corresponding to polytropic index η = 1/2, 2/3, 1, 2 is analyzed graphically. For this, we plot physical quantities such as the radial and tangential pressure, anisotropy, and speed of sound, which demonstrate that these models satisfy all the physical conditions required for a relativistic star. Further, previous results for anisotropic charged matter with linear, quadratic and polytropic equations of state can be retrieved.
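
    For reference, a generalized polytropic equation of state typically combines a linear term and a polytropic term. The abstract does not give the exact form or coefficient names used by the authors, so the expression below is a common convention rather than a quotation of the paper:

    ```latex
    P_r = \beta\,\rho + \kappa\,\rho^{\,1 + \frac{1}{\eta}},
    \qquad \eta = \tfrac{1}{2},\ \tfrac{2}{3},\ 1,\ 2 .
    ```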

  13. Heterogeneity in hedonic modelling of house prices: looking at buyers' household profiles

    NASA Astrophysics Data System (ADS)

    Kestens, Yan; Thériault, Marius; Des Rosiers, François

    2006-03-01

    This paper introduces household-level data into hedonic models in order to measure the heterogeneity of implicit prices with respect to household type, age, educational attainment, income, and the previous tenure status of the buyers. Two methods are used for this purpose: a first series of models uses expansion terms, whereas a second series applies Geographically Weighted Regressions. Both methods yield conclusive results, showing that the marginal value given to certain property specifics and location attributes does vary with the characteristics of the buyer’s household. In particular, major findings concern the significant effect of income on the location rent, as well as the premium paid by highly educated households in order to achieve social homogeneity.

  14. Maximum-entropy description of animal movement.

    PubMed

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
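
    Of the processes in this maximum-entropy family, the Ornstein-Uhlenbeck position process has a simple exact update between sampling times, shown below with arbitrary illustrative parameters (home-range centre, relaxation time, stationary variance); it is meant only as a concrete instance of one member of the class discussed in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    mu = np.array([0.0, 0.0])   # home-range centre
    tau = 10.0                  # position autocorrelation time
    sig2 = 4.0                  # stationary variance of each coordinate
    dt = 1.0                    # sampling interval (same units as tau)

    rho = np.exp(-dt / tau)
    x = mu.copy()
    track = [x.copy()]
    for _ in range(100):
        # Exact transition of the OU process between samples.
        x = mu + rho * (x - mu) + np.sqrt(sig2 * (1.0 - rho**2)) * rng.normal(size=2)
        track.append(x.copy())

    track = np.array(track)
    print("mean position:", np.round(track.mean(axis=0), 2))
    ```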

  15. Effects of the local structure dependence of evaporation fields on field evaporation behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Lan; Marquis, Emmanuelle A., E-mail: emarq@umich.edu; Withrow, Travis

    2015-12-14

    Accurate three dimensional reconstructions of atomic positions and full quantification of the information contained in atom probe microscopy data rely on understanding the physical processes taking place during field evaporation of atoms from needle-shaped specimens. However, the modeling framework for atom probe microscopy has only limited quantitative justification. Building on the continuum field models previously developed, we introduce a more physical approach with the selection of evaporation events based on density functional theory calculations. This model reproduces key features observed experimentally in terms of sequence of evaporation, evaporation maps, and depth resolution, and provides insights into the physical limit for spatial resolution.

  16. Economic model for QoS guarantee on the Internet

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Wei, Jiaolong

    2001-09-01

    This paper describes a QoS guarantee architecture suited for best-effort environments, based on ideas from microeconomics and non-cooperative game theory. First, an analytic model is developed for the study of resource allocation in the Internet. We then show that with a simple pricing mechanism (from the network implementation and users' points of view), we are able to provide QoS guarantees at the per-flow level without resource allocation, complicated scheduling mechanisms, or per-flow state in the core network. Unlike previous work in this area, we extend the basic model to support inelastic applications, which require minimum bandwidth guarantees for a given time period, by introducing a derivative market.

  17. Allee effects in tritrophic food chains: some insights in pest biological control.

    PubMed

    Costa, Michel Iskin da S; Dos Anjos, Lucas

    2016-12-01

    Release of natural enemies to control pest populations is a common strategy in biological control. However, its effectiveness is thought to be impaired, among other factors, by Allee effects in the biological control agent and by the fact that introduced pest natural enemies interact with native species of the ecosystem. In this work, we devise a tritrophic food chain model in which these assumptions are shown to hold when a hyperpredator attacks the introduced natural enemy through a type 2 or type 3 functional response. Moreover, success of pest control is shown to be related to the release of large numbers (i.e., inundative releases) of natural enemies. © The authors 2015. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
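
    A minimal sketch of the kind of tritrophic structure discussed: pest → introduced natural enemy (with an Allee effect) → native hyperpredator attacking the enemy through a Holling type II response. All functional forms and parameter values are placeholders chosen for illustration, not the authors' calibrated model.

    ```python
    import numpy as np

    # Placeholder parameters.
    r, K = 1.0, 10.0            # pest growth rate and carrying capacity
    a1, h1 = 0.8, 0.2           # enemy attack rate and handling time on the pest
    e1, m1, A = 0.5, 0.2, 0.5   # enemy conversion, mortality, Allee threshold
    a2, h2 = 0.6, 0.3           # hyperpredator type-II attack on the enemy
    e2, m2 = 0.4, 0.1

    def rhs(state):
        x, y, z = state                           # pest, natural enemy, hyperpredator
        f1 = a1 * x / (1.0 + a1 * h1 * x)         # Holling type II on the pest
        f2 = a2 * y / (1.0 + a2 * h2 * y)         # Holling type II on the enemy
        allee = y / (A + y)                       # weak Allee effect in the enemy
        dx = r * x * (1.0 - x / K) - f1 * y
        dy = e1 * f1 * y * allee - m1 * y - f2 * z
        dz = e2 * f2 * z - m2 * z
        return np.array([dx, dy, dz])

    state = np.array([5.0, 2.0, 0.5])             # includes an initial enemy release
    dt = 0.01
    for _ in range(int(200 / dt)):                # simple explicit Euler integration
        state = np.maximum(state + dt * rhs(state), 0.0)
    print("final densities (pest, enemy, hyperpredator):", np.round(state, 3))
    ```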

  18. Universal scaling for second-class particles in a one-dimensional misanthrope process

    NASA Astrophysics Data System (ADS)

    Rákos, Attila

    2010-06-01

    We consider the one-dimensional Katz-Lebowitz-Spohn (KLS) model, which is a generalization of the totally asymmetric simple exclusion process (TASEP) with nearest neighbour interaction. Using a powerful mapping, the KLS model can be translated into a misanthrope process. In this model, for the repulsive case, it is possible to introduce second-class particles, the number of which is conserved. We study the distance distribution of second-class particles in this model numerically and find that for large distances it decreases as x^(-3/2). This agrees with a previous analytical result of Derrida et al (1993) for the TASEP, where the same asymptotic behaviour was found. We also study the dynamical scaling function of the distance distribution and find that it is universal within this family of models.

  19. Gyrokinetic-water-bag modeling of low-frequency instabilities in a laboratory magnetized plasma column

    NASA Astrophysics Data System (ADS)

    Gravier, E.; Klein, R.; Morel, P.; Besse, N.; Bertrand, P.

    2008-12-01

    A new model is presented, named collisional-gyro-water-bag (CGWB), which describes collisional drift waves and ion-temperature-gradient (ITG) instabilities in a plasma column. This model is based on the kinetic gyro-water-bag approach recently developed [P. Morel et al., Phys. Plasmas 14, 112109 (2007)] to investigate ion-temperature-gradient modes. In CGWB, electron-neutral collisions are now taken into account. The model has been validated by comparing CGWB linear analysis with other previously proposed models as well as with experimental results. Kinetic effects on collisional drift waves are investigated, resulting in a reduced growth rate, and the transition from collisional drift waves to the ITG instability, depending on the ion temperature gradient, is studied.

  20. Eliminating time dispersion from seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.

  1. CORRECTING FOR MEASUREMENT ERROR IN LATENT VARIABLES USED AS PREDICTORS*

    PubMed Central

    Schofield, Lynne Steuerle

    2015-01-01

    This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model, which combines structural equation modeling and item response theory, is introduced to address measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I find that prior mathematics proficiency and personality have previously been underestimated in the STEM retention literature. Pre-college mathematics proficiency and personality explain large portions of the racial and gender gaps. The findings have implications for those who design interventions aimed at increasing the rates of STEM persistence among women and under-represented minorities. PMID:26977218

  2. Gravitationally influenced particle creation models and late-time cosmic acceleration

    NASA Astrophysics Data System (ADS)

    Pan, Supriya; Kumar Pal, Barun; Pramanik, Souvik

    In this work, we focus on the gravitationally influenced adiabatic particle creation process, a mechanism that does not need any dark energy or modified gravity models to explain the current accelerating phase of the universe. Introducing particle creation models that generalize several previous models in the literature, we constrain the cosmological scenarios using only the latest compilation of Type Ia Supernovae data, the first indicator of the accelerating universe. Aside from the observational constraints on the models, we examine the models using two model-independent diagnostics, namely cosmography and Om. Further, we establish the general conditions to test the thermodynamic viability of any particle creation model. Our analysis shows that at late times the models closely resemble the ΛCDM cosmology, and the models always satisfy the generalized second law of thermodynamics under certain conditions.

  3. A Bayesian model averaging method for improving SMT phrase table

    NASA Astrophysics Data System (ADS)

    Duan, Nan

    2013-03-01

    Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features rather than exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. Our approach can be developed independently and integrated into the current SMT pipeline directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.
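
    To make the generalization step concrete, here is a sketch of averaging a phrase-translation probability feature over the main model and a set of auxiliary models, and deriving the variance feature mentioned in the abstract. Phrase tables are reduced to toy dictionaries and the interpolation weights are assumptions; the actual TMG procedure in the paper is more involved.

    ```python
    import numpy as np

    # Toy phrase tables: P(target phrase | source phrase) from the main model
    # and two auxiliary models built in different ways (placeholder numbers).
    tables = [
        {("ni hao", "hello"): 0.60, ("ni hao", "hi"): 0.40},   # main model
        {("ni hao", "hello"): 0.75, ("ni hao", "hi"): 0.25},   # auxiliary model 1
        {("ni hao", "hello"): 0.50, ("ni hao", "hi"): 0.50},   # auxiliary model 2
    ]
    weights = np.array([0.5, 0.25, 0.25])   # assumed interpolation weights

    generalized = {}
    variance_feature = {}
    for pair in tables[0]:
        probs = np.array([t.get(pair, 0.0) for t in tables])
        generalized[pair] = float(weights @ probs)      # smoothed probability feature
        variance_feature[pair] = float(probs.var())     # extra feature for the log-linear model

    for pair in generalized:
        print(pair, f"p = {generalized[pair]:.3f}", f"var = {variance_feature[pair]:.4f}")
    ```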

  4. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    DOE PAGES

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  5. Jordan recurrent neural network versus IHACRES in modelling daily streamflows

    NASA Astrophysics Data System (ADS)

    Carcano, Elena Carla; Bartolini, Paolo; Muselli, Marco; Piroddi, Luigi

    2008-12-01

    A study of possible scenarios for modelling streamflow data from daily time series, using artificial neural networks (ANNs), is presented. Particular emphasis is devoted to the reconstruction of drought periods, where water resource management and control are most critical. This paper considers two connectionist models: a feedforward multilayer perceptron (MLP) and a Jordan recurrent neural network (JNN), comparing network performance on real-world data from two small catchments (192 and 69 km² in size) with irregular and torrential regimes. Several network configurations are tested to ensure a good combination of input features (rainfall and previous streamflow data) that capture the variability of the physical processes at work. Tapped delay line (TDL) and memory effect techniques are introduced to recognize and reproduce temporal dependence. Results show a poor agreement when using TDL only, but a remarkable improvement can be obtained with JNN and its memory effect procedures, which are able to reproduce the system memory over a catchment in a more effective way. Furthermore, the IHACRES conceptual model, which relies on both rainfall and temperature input data, is introduced for comparative study. The results suggest that when good input data are unavailable, metric models perform better than conceptual ones and, in general, it is difficult to justify substantial conceptualization of complex processes.

  6. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-09

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.
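
    The EDS idea of a minimal bias that matches an average can be sketched on a toy one-dimensional system: add a linear coupling α·x to the potential and adapt α until the sampled mean of x hits the target. The sampler, the update rule, and all parameters below are simplifications for illustration, not the published EDS algorithm or the EDM extension to full free energy surfaces.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_mean_x(alpha, n=20000, beta=1.0):
        """Metropolis sampling of U(x) = x^4 - 2 x^2 + alpha * x; returns <x>."""
        x, samples = 0.0, []
        for _ in range(n):
            xp = x + rng.normal(scale=0.5)
            dU = (xp**4 - 2 * xp**2 + alpha * xp) - (x**4 - 2 * x**2 + alpha * x)
            if dU <= 0 or rng.random() < np.exp(-beta * dU):
                x = xp
            samples.append(x)
        return np.mean(samples[n // 4:])           # discard a burn-in portion

    target = 0.5                                    # "experimental" average of x
    alpha, lr = 0.0, 0.5
    for it in range(10):                            # crude stochastic root finding on <x>
        mean_x = sample_mean_x(alpha)
        alpha += lr * (mean_x - target)             # larger alpha pushes <x> down
        print(f"iter {it}: alpha = {alpha:+.3f}, <x> = {mean_x:+.3f}")
    ```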

  7. Calorie Changes in Large Chain Restaurants: Declines in New Menu Items but Room for Improvement.

    PubMed

    Bleich, Sara N; Wolfson, Julia A; Jarlenski, Marian P

    2016-01-01

    Large chain restaurants reduced the number of calories in newly introduced menu items in 2013 by about 60 calories (or 12%) relative to 2012. This paper describes trends in calories available in large U.S. chain restaurants to understand whether previously documented patterns persist. Data (a census of items for included restaurants) were obtained from the MenuStat project. This analysis included 66 of the 100 largest U.S. restaurants that are available in all three of the data years (2012-2014; N=23,066 items). Generalized linear models were used to examine: (1) per-item calorie changes from 2012 to 2014 among items on the menu in all years; and (2) mean calories in new items in 2013 and 2014 compared with items on the menu in 2012 only. Data were analyzed in 2014. Overall, calories in newly introduced menu items declined by 71 (or 15%) from 2012 to 2013 (p=0.001) and by 69 (or 14%) from 2012 to 2014 (p=0.03). These declines were concentrated mainly in new main course items (85 fewer calories in 2013 and 55 fewer calories in 2014; p=0.01). Although average calories in newly introduced menu items are declining, they are higher than items common to the menu in all 3 years. No differences in mean calories among items on menus in 2012, 2013, or 2014 were found. The previously observed declines in newly introduced menu items among large restaurant chains have been maintained, which suggests the beginning of a trend toward reducing calories. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  8. Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior

    NASA Astrophysics Data System (ADS)

    Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui

    2003-08-01

    In a previous work we have introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which describes jointly the packet arrival and the packet size processes. The packet arrival process is modeled through a L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
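
    A stochastic L-system for arrival rates can be illustrated as follows: the alphabet is a small set of packet arrival rates, and each symbol expands into a string of rates according to probabilistic production rules, so that repeated rewriting produces burstiness across time scales. The alphabet, the rules, and the rate values below are illustrative assumptions, not the fitted model of the paper (which also attaches packet-size distributions to each rate).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Alphabet of per-interval packet arrival rates (packets/s), illustrative values.
    alphabet = {"L": 50, "M": 200, "H": 800}

    # Stochastic production rules: each symbol expands to one of several strings.
    rules = {
        "L": [("LL", 0.7), ("LM", 0.3)],
        "M": [("LH", 0.5), ("MM", 0.5)],
        "H": [("HM", 0.6), ("HH", 0.4)],
    }

    def expand(symbol):
        strings, probs = zip(*rules[symbol])
        return rng.choice(strings, p=probs)

    axiom, iterations = "M", 8
    s = axiom
    for _ in range(iterations):
        s = "".join(expand(c) for c in s)

    rates = np.array([alphabet[c] for c in s])     # arrival rate in each time slot
    print(len(rates), "slots; mean rate =", rates.mean(), "packets/s")
    ```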

  9. State-to-State Internal Energy Relaxation Following the Quantum-Kinetic Model in DSMC

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    A new model for chemical reactions, the Quantum-Kinetic (Q-K) model of Bird, has recently been introduced that does not depend on macroscopic rate equations or values of local flow field data. Subsequently, the Q-K model has been extended to include reactions involving charged species and electronic energy level transitions. Although this is a phenomenological model, it has been shown to accurately reproduce both equilibrium and non-equilibrium reaction rates. The usefulness of this model becomes clear as local flow conditions either exceed the conditions used to build previous models or when they depart from an equilibrium distribution. Presently, the applicability of the relaxation technique is investigated for the vibrational internal energy mode. The Forced Harmonic Oscillator (FHO) theory for vibrational energy level transitions is combined with the Q-K energy level transition model to accurately reproduce energy level transitions at a reduced computational cost compared to the older FHO models.

  10. A compact quantum correction model for symmetric double gate metal-oxide-semiconductor field-effect transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu, E-mail: iyun@yonsei.ac.kr

    2014-11-07

    A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is proposed from the concepts of the threshold voltage shift (ΔV_TH^QM) and the gate capacitance (C_g) degradation. First of all, ΔV_TH^QM induced by quantum mechanical (QM) effects is modeled. The C_g degradation is then modeled by introducing the inversion layer centroid. With ΔV_TH^QM and the C_g degradation, the QM effects are implemented in the previously reported classical model and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model can be applicable to the compact model of the DG MOSFET.

  11. Directly comparing gravitational wave data to numerical relativity simulations: systematics

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Zlochower, Yosef; Shoemaker, Deirdre; Lovelace, Geoffrey; Pankow, Christopher; Brady, Patrick; Scheel, Mark; Pfeiffer, Harald; Ossokine, Serguei

    2017-01-01

    We compare synthetic data directly to complete numerical relativity simulations of binary black holes. In doing so, we circumvent ad-hoc approximations introduced in semi-analytical models previously used in gravitational wave parameter estimation and compare the data against the most accurate waveforms including higher modes. In this talk, we focus on the synthetic studies that test potential sources of systematic errors. We also run ``end-to-end'' studies of intrinsically different synthetic sources to show we can recover parameters for different systems.

  12. Surface tension profiles in vertical soap films

    NASA Astrophysics Data System (ADS)

    Adami, N.; Caps, H.

    2015-01-01

    Surface tension profiles in vertical soap films are experimentally investigated. Measurements are performed by introducing deformable elastic objects in the films. The shape adopted by those objects once set in the film is related to the surface tension value at a given vertical position by numerically solving the adapted elasticity equations. We show that the observed dependence of the surface tension on the vertical position is predicted by a simple model that takes into account the mechanical equilibrium of the films, coupled to previous thickness measurements.

  13. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    NASA Astrophysics Data System (ADS)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc., are a common problem in environmental models, including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency dependent nudging, where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy-resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging, which makes the method more robust for non-linear biological models. We then assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we nudge only in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
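
    One way to picture band-limited nudging is to estimate the model-observation misfit only in the two target bands (the mean and the annual cycle, here via exponentially smoothed complex demodulation) and relax the model toward the observations only through those components. The toy scalar dynamics, the demodulation scheme, and all time scales below are assumptions for illustration; they do not include the stability term added in the paper.

    ```python
    import numpy as np

    dt = 1.0                         # days
    omega_a = 2 * np.pi / 365.25     # annual angular frequency
    tau_nudge = 5.0                  # nudging relaxation time (days)
    alpha = dt / 365.0               # smoothing weight for the band estimates (~1 yr memory)

    def obs(t):                      # synthetic observations: mean + annual cycle + noise
        return 10.0 + 3.0 * np.sin(omega_a * t) + np.random.normal(scale=0.5)

    x = 6.0                          # biased model state
    mis_mean, mis_annual = 0.0, 0.0 + 0.0j
    for step in range(int(5 * 365 / dt)):
        t = step * dt
        misfit = obs(t) - x
        # Exponentially smoothed estimates of the misfit in the two bands.
        mis_mean += alpha * (misfit - mis_mean)
        mis_annual += alpha * (misfit * np.exp(-1j * omega_a * t) - mis_annual)
        band_misfit = mis_mean + 2.0 * np.real(mis_annual * np.exp(1j * omega_a * t))
        # Toy model dynamics (relaxation toward a biased value) plus band-limited nudging.
        dxdt = -(x - 6.0) / 20.0 + band_misfit / tau_nudge
        x += dt * dxdt
    # The biased model (equilibrium near 6) is pulled most of the way toward the
    # observed level of about 10, but only through the mean and annual bands.
    print(f"state after 5 years: {x:.2f}")
    ```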

  14. Human mobility in a continuum approach.

    PubMed

    Simini, Filippo; Maritan, Amos; Néda, Zoltán

    2013-01-01

    Human mobility is investigated using a continuum approach that allows one to calculate the probability of observing a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework, in which previously proposed mobility models like the gravity model, the intervening opportunities model, and the recently introduced radiation model naturally result as special cases. A new form of the radiation model is derived and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape.

  15. Human Mobility in a Continuum Approach

    PubMed Central

    Simini, Filippo; Maritan, Amos; Néda, Zoltán

    2013-01-01

    Human mobility is investigated using a continuum approach that allows one to calculate the probability of observing a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework, in which previously proposed mobility models like the gravity model, the intervening opportunities model, and the recently introduced radiation model naturally result as special cases. A new form of the radiation model is derived and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape. PMID:23555885
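
    For concreteness, the standard (original, 2012) radiation-model flux between two locations can be computed from populations alone, as in the sketch below. The coordinates and population numbers are toy values, and the sketch implements the original discrete formula rather than the new continuum form derived in the paper.

    ```python
    import numpy as np

    # Locations: coordinates and populations (toy numbers).
    coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
    pop = np.array([10000.0, 5000.0, 2000.0, 8000.0])
    commuters = 0.1 * pop                     # T_i: total trips leaving each location

    def radiation_flux(i, j):
        """Average flux from i to j under the original radiation model."""
        r_ij = np.linalg.norm(coords[i] - coords[j])
        dists = np.linalg.norm(coords - coords[i], axis=1)
        # s_ij: population within the circle of radius r_ij around i,
        # excluding the source and destination populations.
        inside = dists <= r_ij
        s_ij = pop[inside].sum() - pop[i] - (pop[j] if inside[j] else 0.0)
        m, n = pop[i], pop[j]
        return commuters[i] * m * n / ((m + s_ij) * (m + n + s_ij))

    for j in range(1, 4):
        print(f"T(0 -> {j}) = {radiation_flux(0, j):.1f}")
    ```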

  16. ATLS Hypovolemic Shock Classification by Prediction of Blood Loss in Rats Using Regression Models.

    PubMed

    Choi, Soo Beom; Choi, Joon Yul; Park, Jee Soo; Kim, Deok Won

    2016-07-01

    In our previous study, our input data set consisted of 78 rats, the blood loss in percent as a dependent variable, and 11 independent variables (heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, pulse pressure, respiration rate, temperature, perfusion index, lactate concentration, shock index, and a new index (lactate concentration/perfusion)). In that study, machine learning methods for multicategory classification were applied to a rat model of acute hemorrhage to predict the four Advanced Trauma Life Support (ATLS) hypovolemic shock classes for triage. However, multicategory classification is much more difficult and complicated than binary classification. Here we introduce a simple approach for classifying ATLS hypovolemic shock class by predicting blood loss in percent using support vector regression and multivariate linear regression (MLR). We also compared the performance of the classification models using absolute and relative vital signs. The accuracies of the support vector regression and MLR models with relative values in predicting blood loss in percent were 88.5% and 84.6%, respectively. These were better than the best accuracy of 80.8% obtained by the direct multicategory classification using the support vector machine one-versus-one model in our previous study for the same validation data set. Moreover, the simple MLR models with both absolute and relative values could provide the basis for a future clinical decision support system for ATLS classification. The perfusion index and new index were more appropriate with relative changes than absolute values.
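
    A minimal sketch of the regression-then-classification idea: fit a linear regression predicting blood loss in percent from vital signs, then map the prediction to ATLS classes with the standard blood-loss cut-offs (Class I <15%, II 15-30%, III 30-40%, IV >40%). The synthetic data, the reduced three-feature set, and the availability of scikit-learn are assumptions; this is not the rat data or the 11-predictor models of the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)

    # Synthetic training data: [heart rate, shock index, lactate] -> blood loss (%).
    n = 200
    X = np.column_stack([rng.normal(350, 40, n),      # rat heart rate (bpm)
                         rng.normal(1.2, 0.3, n),     # shock index
                         rng.normal(3.0, 1.0, n)])    # lactate (mmol/L)
    true_w = np.array([0.05, 15.0, 4.0])
    y = np.clip(X @ true_w - 25 + rng.normal(0, 3, n), 0, 60)   # blood loss in %

    model = LinearRegression().fit(X, y)

    def atls_class(loss_percent):
        """Standard ATLS blood-loss cut-offs."""
        if loss_percent < 15: return "I"
        if loss_percent < 30: return "II"
        if loss_percent < 40: return "III"
        return "IV"

    new_animal = np.array([[420, 1.6, 5.5]])
    pred = model.predict(new_animal)[0]
    print(f"predicted blood loss: {pred:.1f}%  ->  ATLS class {atls_class(pred)}")
    ```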

  17. Computer modeling of batteries from nonlinear circuit elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waaben, S.; Dyer, C.K.; Federico, J.

    1985-06-01

    Circuit analogs for a single battery cell have previously been composed of resistors, capacitors, and inductors. This work introduces a nonlinear circuit model for cell behavior. The circuit is configured around the PIN junction diode, whose charge-storage behavior has features similar to those of electrochemical cells. A user-friendly integrated circuit simulation computer program has reproduced a variety of complex cell responses, including electrical isolation effects causing capacity loss, as well as potentiodynamic peaks and discharge phenomena hitherto thought to be thermodynamic in origin. However, in this work they are shown to be simply due to the spatial distribution of stored charge within a practical electrode.

  18. The shaped pulses control and operation on the SG-III prototype facility

    NASA Astrophysics Data System (ADS)

    Ping, Li; Wei, Wang; Sai, Jin; Wanqing, Huang; Wenyi, Wang; Jingqin, Su; Runchang, Zhao

    2018-04-01

    Laser-driven inertial confinement fusion experiments require careful control of the temporal shape of the laser pulse. Two approaches are introduced to improve the accuracy and efficiency of the closed-loop feedback system for long-term operation in TIL: the first is a statistical model that analyzes the variation of the parameters obtained from previous shots, and the other is a matrix algorithm that relates the electrical signal to the impulse amplitudes. With the model and algorithm applied to pulse shaping in TIL, a variety of shaped pulses were produced with 10% precision within half an hour, over almost three years, under different circumstances.
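
    One plausible reading of the matrix algorithm is a linear least-squares inversion from a target pulse shape to the impulse amplitudes driving the waveform generator; the sketch below is an assumption about that relation (with an invented Gaussian impulse basis), not the facility's actual control code.

      # Hedged sketch: if the electrical signal s is approximately a linear
      # superposition of basis impulses, s = A @ x, the impulse amplitudes x can
      # be recovered by least squares and refined in a closed-loop correction.
      import numpy as np

      def solve_impulse_amplitudes(A, target_shape):
          """A: (n_samples, n_impulses) matrix of unit impulse responses.
          target_shape: desired temporal pulse shape sampled at n_samples points."""
          x, *_ = np.linalg.lstsq(A, target_shape, rcond=None)
          return np.clip(x, 0.0, None)   # amplitudes cannot be negative

      # Toy basis: shifted Gaussian impulse responses (assumed for illustration).
      t = np.linspace(0.0, 10.0, 200)
      centers = np.linspace(0.5, 9.5, 20)
      A = np.exp(-((t[:, None] - centers[None, :]) ** 2) / 0.5)
      target = np.where((t > 2) & (t < 8), 1.0, 0.0)   # flat-top target pulse
      amps = solve_impulse_amplitudes(A, target)
      print(amps.round(2))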

  19. Directional x-ray dark-field imaging of strongly ordered systems

    NASA Astrophysics Data System (ADS)

    Jensen, Torben Haugaard; Bech, Martin; Zanette, Irene; Weitkamp, Timm; David, Christian; Deyhle, Hans; Rutishauser, Simon; Reznikova, Elena; Mohr, Jürgen; Feidenhans'L, Robert; Pfeiffer, Franz

    2010-12-01

    Recently a novel grating based x-ray imaging approach called directional x-ray dark-field imaging was introduced. Directional x-ray dark-field imaging yields information about the local texture of structures smaller than the pixel size of the imaging system. In this work we extend the theoretical description and data processing schemes for directional dark-field imaging to strongly scattering systems, which could not be described previously. We develop a simple scattering model to account for these recent observations and subsequently demonstrate the model using experimental data. The experimental data includes directional dark-field images of polypropylene fibers and a human tooth slice.

  20. New leads in speculative behavior

    NASA Astrophysics Data System (ADS)

    Kindler, A.; Bourgeois-Gironde, S.; Lefebvre, G.; Solomon, S.

    2017-02-01

    The Kiyotaki and Wright (1989) (henceforth KW) model of money emergence as a medium of exchange has been studied from various perspectives in recent papers. In the present work we propose a minimalistic model for the behavior of agents in the KW framework, which may either reproduce the theoretical predictions of Kiyotaki and Wright (1989) on the emerging Nash equilibria, or (less closely) the empirical results of Brown (1996), Duffy and Ochs (1999) and our own, introduced in the first part of the present paper. The main contribution is the systematic computer scanning of speculative monetary equilibria under drastic bounded rationality of agents, based on behavior previously observed in the lab.

  1. Cascades on a class of clustered random networks

    NASA Astrophysics Data System (ADS)

    Hackett, Adam; Melnik, Sergey; Gleeson, James P.

    2011-05-01

    We present an analytical approach to determining the expected cascade size in a broad range of dynamical models on the class of random networks with arbitrary degree distribution and nonzero clustering introduced previously in [M. E. J. Newman, Phys. Rev. Lett. 103, 058701 (2009)]. A condition for the existence of global cascades is derived, as well as a general criterion that determines whether increasing the level of clustering will increase, or decrease, the expected cascade size. Applications, examples of which are provided, include site percolation, bond percolation, and Watts' threshold model; in all cases analytical results give excellent agreement with numerical simulations.
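
    As a toy illustration of one of the listed applications (Watts' threshold model), the sketch below runs a cascade on an arbitrary undirected graph stored as an adjacency dictionary; the graph, thresholds and seed set are assumptions for demonstration, and the paper's analytical clustered-network treatment is not reproduced.

      # Hedged sketch of a Watts threshold cascade: a node activates when the
      # fraction of its active neighbours reaches its threshold.
      def watts_cascade(adj, thresholds, seeds):
          active = set(seeds)
          changed = True
          while changed:
              changed = False
              for node, neigh in adj.items():
                  if node in active or not neigh:
                      continue
                  frac = sum(1 for v in neigh if v in active) / len(neigh)
                  if frac >= thresholds[node]:
                      active.add(node)
                      changed = True
          return active

      # Toy graph containing a triangle (clustering) attached to a chain.
      adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
      thresholds = {n: 0.5 for n in adj}
      print(sorted(watts_cascade(adj, thresholds, seeds={0})))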

  2. Simulation of the present-day climate with the climate model INMCM5

    NASA Astrophysics Data System (ADS)

    Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykossov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Iakovlev, N. G.

    2017-12-01

    In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has two times higher resolution in both horizontal directions. Analysis of the present-day climatology of INMCM5 (based on the data of the historical run for 1979-2005) shows moderate improvements in the reproduction of basic circulation characteristics with respect to the previous version. Biases in near-surface temperature and precipitation are slightly reduced compared with INMCM4, as are biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biennial oscillation and the statistics of sudden stratospheric warmings.

  3. Hypothesis testing on the fractal structure of behavioral sequences: the Bayesian assessment of scaling methodology.

    PubMed

    Moscoso del Prado Martín, Fermín

    2013-12-01

    I introduce the Bayesian assessment of scaling (BAS), a simple but powerful Bayesian hypothesis contrast methodology that can be used to test hypotheses on the scaling regime exhibited by a sequence of behavioral data. Rather than comparing parametric models, as typically done in previous approaches, the BAS offers a direct, nonparametric way to test whether a time series exhibits fractal scaling. The BAS provides a simpler and faster test than do previous methods, and the code for making the required computations is provided. The method also enables testing of finely specified hypotheses on the scaling indices, something that was not possible with the previously available methods. I then present 4 simulation studies showing that the BAS methodology outperforms the other methods used in the psychological literature. I conclude with a discussion of methodological issues on fractal analyses in experimental psychology. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  4. Wave dynamics in an extended macroscopic traffic flow model with periodic boundaries

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Qing; Chu, Xing-Jian; Zhou, Chao-Fan; Yan, Bo-Wen; Jia, Bin; Fang, Chen-Hao

    2018-06-01

    Motivated by a previous traffic flow model that considers the real-time traffic state, a modified macroscopic traffic flow model is established. The periodic boundary condition is applied to the car-following model. In addition, a traffic state factor R is defined in order to represent real traffic conditions more reasonably. A key step is the introduction of the relaxation time as a density-dependent function, together with the corresponding evolution of the traffic flow. Three typical initial densities, namely high, medium and low, are investigated in detail. It is found that a hysteresis loop exists in the proposed periodic-boundary system. Furthermore, linear and nonlinear stability analyses are performed in order to test the robustness of the system.

  5. Conformal mapping in optical biosensor applications.

    PubMed

    Zumbrum, Matthew E; Edwards, David A

    2015-09-01

    Optical biosensors are devices used to investigate surface-volume reaction kinetics. Current mathematical models for reaction dynamics rely on the assumption of unidirectional flow within these devices. However, new devices, such as the Flexchip, include a geometry that introduces two-dimensional flow, complicating the depletion of the volume reactant. To account for this, a previous mathematical model is extended to include two-dimensional flow, and the Schwarz-Christoffel mapping is used to relate the physical device geometry to that for a device with unidirectional flow. Mappings for several Flexchip dimensions are considered, and the ligand depletion effect is investigated for one of these mappings. Estimated rate constants are produced for simulated data to quantify the inclusion of two-dimensional flow in the mathematical model.

  6. Antineutrino Charged-Current Reactions on Hydrocarbon with Low Momentum Transfer

    NASA Astrophysics Data System (ADS)

    Gran, R.; Betancourt, M.; Elkins, M.; Rodrigues, P. A.; Akbar, F.; Aliaga, L.; Andrade, D. A.; Bashyal, A.; Bellantoni, L.; Bercellie, A.; Bodek, A.; Bravar, A.; Budd, H.; Vera, G. F. R. Caceres; Cai, T.; Carneiro, M. F.; Coplowe, D.; da Motta, H.; Dytman, S. A.; Díaz, G. A.; Felix, J.; Fields, L.; Fine, R.; Gallagher, H.; Ghosh, A.; Haider, H.; Han, J. Y.; Harris, D. A.; Henry, S.; Jena, D.; Kleykamp, J.; Kordosky, M.; Le, T.; Leistico, J. R.; Lovlein, A.; Lu, X.-G.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; McFarland, K. S.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Mousseau, J.; Naples, D.; Nelson, J. K.; Nguyen, C.; Norrick, A.; Nuruzzaman, Olivier, A.; Paolone, V.; Patrick, C. E.; Perdue, G. N.; Ramírez, M. A.; Ransome, R. D.; Ray, H.; Ren, L.; Rimal, D.; Ruterbories, D.; Schellman, H.; Salinas, C. J. Solano; Su, H.; Sultana, M.; Falero, S. Sánchez; Valencia, E.; Wolcott, J.; Wospakrik, M.; Yaeggy, B.; Minerva Collaboration

    2018-06-01

    We report on multinucleon effects in low momentum transfer (<0.8 GeV/c) antineutrino interactions on plastic (CH) scintillator. These data are from the 2010-2011 antineutrino phase of the MINERvA experiment at Fermilab. The hadronic energy spectrum of this inclusive sample is well described when a screening effect at a low energy transfer and a two-nucleon knockout process are added to a relativistic Fermi gas model of quasielastic, Δ resonance, and higher resonance processes. In this analysis, model elements introduced to describe previously published neutrino results have quantitatively similar benefits for this antineutrino sample. We present the results as a double-differential cross section to accelerate the investigation of alternate models for antineutrino scattering off nuclei.

  7. The impact of short-term stochastic variability in solar irradiance on optimal microgrid design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schittekatte, Tim; Stadler, Michael; Cardoso, Gonçalo

    2016-07-01

    This paper proposes a new methodology to capture the impact of fast moving clouds on utility power demand charges observed in microgrids with photovoltaic (PV) arrays, generators, and electrochemical energy storage. It consists of a statistical approach to introduce sub-hourly events in the hourly economic accounting process. The methodology is implemented in the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state of the art mixed integer linear model used to optimally size DER in decentralized energy systems. Results suggest that previous iterations of DER-CAM could undersize battery capacities. The improved model depicts more accurately the economic value of PV as well as the synergistic benefits of pairing PV with storage.

  8. Antineutrino Charged-Current Reactions on Hydrocarbon with Low Momentum Transfer.

    PubMed

    Gran, R; Betancourt, M; Elkins, M; Rodrigues, P A; Akbar, F; Aliaga, L; Andrade, D A; Bashyal, A; Bellantoni, L; Bercellie, A; Bodek, A; Bravar, A; Budd, H; Vera, G F R Caceres; Cai, T; Carneiro, M F; Coplowe, D; da Motta, H; Dytman, S A; Díaz, G A; Felix, J; Fields, L; Fine, R; Gallagher, H; Ghosh, A; Haider, H; Han, J Y; Harris, D A; Henry, S; Jena, D; Kleykamp, J; Kordosky, M; Le, T; Leistico, J R; Lovlein, A; Lu, X-G; Maher, E; Manly, S; Mann, W A; Marshall, C M; McFarland, K S; McGowan, A M; Messerly, B; Miller, J; Mislivec, A; Morfín, J G; Mousseau, J; Naples, D; Nelson, J K; Nguyen, C; Norrick, A; Nuruzzaman; Olivier, A; Paolone, V; Patrick, C E; Perdue, G N; Ramírez, M A; Ransome, R D; Ray, H; Ren, L; Rimal, D; Ruterbories, D; Schellman, H; Salinas, C J Solano; Su, H; Sultana, M; Falero, S Sánchez; Valencia, E; Wolcott, J; Wospakrik, M; Yaeggy, B

    2018-06-01

    We report on multinucleon effects in low momentum transfer (<0.8  GeV/c) antineutrino interactions on plastic (CH) scintillator. These data are from the 2010-2011 antineutrino phase of the MINERvA experiment at Fermilab. The hadronic energy spectrum of this inclusive sample is well described when a screening effect at a low energy transfer and a two-nucleon knockout process are added to a relativistic Fermi gas model of quasielastic, Δ resonance, and higher resonance processes. In this analysis, model elements introduced to describe previously published neutrino results have quantitatively similar benefits for this antineutrino sample. We present the results as a double-differential cross section to accelerate the investigation of alternate models for antineutrino scattering off nuclei.

  9. An Undergraduate Research Experience on Studying Variable Stars

    NASA Astrophysics Data System (ADS)

    Amaral, A.; Percy, J. R.

    2016-06-01

    We describe and evaluate a summer undergraduate research project and experience by one of us (AA), under the supervision of the other (JP). The aim of the project was to sample current approaches to analyzing variable star data, and topics related to the study of Mira variable stars and their astrophysical importance. This project was done through the Summer Undergraduate Research Program (SURP) in astronomy at the University of Toronto. SURP allowed undergraduate students to explore and learn about many topics within astronomy and astrophysics, from instrumentation to cosmology. SURP introduced students to key skills which are essential for students hoping to pursue graduate studies in any scientific field. Variable stars proved to be an excellent topic for a research project. For beginners to independent research, it introduces key concepts in research such as critical thinking and problem solving, while illuminating previously learned topics in stellar physics. The focus of this summer project was to compare observations with structural and evolutionary models, including modelling the random walk behavior exhibited in the (O-C) diagrams of most Mira stars. We found that the random walk could be modelled by using random fluctuations of the period. This explanation agreed well with observations.
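
    The random-walk idea mentioned above can be sketched numerically: let each cycle's period fluctuate randomly around a mean, accumulate the observed times of maximum, and compare them with the times predicted from the mean period; the period, fluctuation amplitude and cycle count below are assumptions for illustration only.

      # Hedged sketch: (O-C) diagram produced by random cycle-to-cycle period
      # fluctuations, the behaviour discussed for Mira variables.
      import numpy as np

      rng = np.random.default_rng(1)
      mean_period = 330.0          # days, hypothetical Mira-like period
      sigma = 2.0                  # std. dev. of random period fluctuations (days)
      n_cycles = 200

      periods = mean_period + sigma * rng.normal(size=n_cycles)
      observed = np.cumsum(periods)                  # observed times of maximum
      calculated = mean_period * np.arange(1, n_cycles + 1)
      o_minus_c = observed - calculated              # random-walk-like wander
      print(o_minus_c[:5].round(1), "...", o_minus_c[-1].round(1))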

  10. Evaluation of an imputed pitch velocity model of the auditory tau effect.

    PubMed

    Henry, Molly J; McAuley, J Devin; Zaleha, Marta

    2009-08-01

    This article extends an imputed pitch velocity model of the auditory kappa effect proposed by Henry and McAuley (2009a) to the auditory tau effect. Two experiments were conducted using an AXB design in which listeners judged the relative pitch of a middle target tone (X) in ascending and descending three-tone sequences. In Experiment 1, sequences were isochronous, establishing constant fast, medium, and slow velocity conditions. No systematic distortions in perceived target pitch were observed, and thresholds were similar across velocity conditions. Experiment 2 introduced to-be-ignored variations in target timing. Variations in target timing that deviated from constant velocity conditions introduced systematic distortions in perceived target pitch, indicative of a robust auditory tau effect. Consistent with an auditory motion hypothesis, the magnitude of the tau effect was larger at faster velocities. In addition, the tau effect was generally stronger for descending sequences than for ascending sequences. Combined with previous work on the auditory kappa effect, the imputed velocity model and associated auditory motion hypothesis provide a unified quantitative account of both auditory tau and kappa effects. In broader terms, these findings add support to the view that pitch and time relations in auditory patterns are fundamentally interdependent.

  11. Analytic Couple Modeling Introducing Device Design Factor, Fin Factor, Thermal Diffusivity Factor, and Inductance Factor

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    A set of convenient thermoelectric device solutions has been derived in order to capture a number of factors which were previously only resolvable with numerical techniques. The concise conversion efficiency equations derived from the governing equations provide intuitive and straightforward design guidelines. These guidelines allow for better device design without requiring detailed numerical modeling. The analytical modeling accounts for factors such as i) variable temperature boundary conditions, ii) lateral heat transfer, iii) temperature-variable material properties, and iv) transient operation. New dimensionless parameters, similar to the figure of merit, are introduced, including the device design factor, fin factor, thermal diffusivity factor, and inductance factor. These new device factors allow for a straightforward description of phenomena generally captured only with numerical work otherwise. As an example, a device design factor of 0.38, which accounts for the thermal resistance of the hot and cold shoes, can be used to calculate a conversion efficiency of 2.28, while the ideal conversion efficiency based on the figure of merit alone would be 6.15. Likewise, an ideal couple with an efficiency of 6.15 is reduced to 5.33 when lateral heat transfer is accounted for with a fin factor of 1.0.

  12. Modeling the defrost process in complex geometries - Part 1: Development of a one-dimensional defrost model

    NASA Astrophysics Data System (ADS)

    van Buren, Simon; Hertle, Ellen; Figueiredo, Patric; Kneer, Reinhold; Rohlfs, Wilko

    2017-11-01

    Frost formation is a common, often undesired phenomenon in heat exchangers such as air coolers. Thus, air coolers have to be defrosted periodically, causing significant energy consumption. For design and optimization, prediction of defrosting by a CFD tool is desired. This paper presents a one-dimensional transient model approach suitable for use as a zero-dimensional wall function in CFD for modeling the defrost process at the fin and tube interfaces. In accordance with previous work, a multistage defrost model is introduced (e.g., [1, 2]). In the first instance, the multistage model is implemented and validated using MATLAB. The defrost process of a one-dimensional frost segment is investigated. Fixed boundary conditions are provided at the frost interfaces. The simulation results verify the plausibility of the designed model. The evaluation of the simulated defrost process shows the expected convergent behavior of the three-stage sequence.

  13. Stochastic Car-Following Model for Explaining Nonlinear Traffic Phenomena

    NASA Astrophysics Data System (ADS)

    Meng, Jianping; Song, Tao; Dong, Liyun; Dai, Shiqiang

    There is a common time parameter for representing the sensitivity or the lag (response) time of drivers in many car-following models. From the viewpoint of traffic psychology, this parameter can be considered the perception-response time (PRT). Generally, this parameter is set to a constant in previous models. However, the PRT is actually not a constant but a random variable described by a lognormal distribution. Thus, probability can be naturally introduced into car-following models by recovering the probabilistic nature of the PRT. To demonstrate this idea, a specific stochastic model is constructed based on the optimal velocity model. By conducting simulations under periodic boundary conditions, it is found that some important traffic phenomena, such as hysteresis and phantom traffic jams, can be reproduced more realistically. In particular, an interesting experimental feature of traffic jams, i.e., two moving jams propagating in parallel with constant speed stably and sustainably, is successfully captured by the present model.
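
    A minimal sketch of this idea: an optimal velocity car-following update in which each driver's perception-response time is drawn from a lognormal distribution rather than fixed. The functional form of the optimal velocity and all parameter values below are assumptions for illustration, not the authors' calibration.

      # Hedged sketch: optimal velocity model with a lognormally distributed
      # perception-response time (PRT), on a ring road (periodic boundary).
      import numpy as np

      def optimal_velocity(gap):
          # A common tanh-shaped OV function (assumed form and coefficients).
          return 16.8 * (np.tanh(0.086 * (gap - 25.0)) + 0.913)

      def step(x, v, road_length, dt=0.1, rng=np.random.default_rng(2)):
          gaps = (np.roll(x, -1) - x) % road_length
          prt = rng.lognormal(mean=np.log(0.7), sigma=0.3, size=len(x))  # random PRT (s)
          # Sensitivity ~ 1/PRT: slower-reacting drivers adjust more sluggishly.
          v_new = np.clip(v + (optimal_velocity(gaps) - v) / prt * dt, 0.0, None)
          x_new = (x + v_new * dt) % road_length
          return x_new, v_new

      x = np.linspace(0, 1000, 40, endpoint=False)  # 40 cars on a 1000 m ring
      v = np.full(40, 10.0)
      for _ in range(1000):
          x, v = step(x, v, road_length=1000.0)
      print(v.round(2))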

  14. On a viscous critical-stress model of martensitic phase transitions

    NASA Astrophysics Data System (ADS)

    Weatherwax, John; Vaynblat, Dimitri; Bruno, Oscar; Rosales, Ruben

    2007-09-01

    The solid-to-solid phase transitions that result from shock loading of certain materials, such as the graphite-to-diamond transition and the α-ɛ transition in iron, have long been subjects of a substantial theoretical and experimental literature. Recently a model for such transitions was introduced which, based on a critical-stress (CS) condition and without use of fitting parameters, accounts quantitatively for existing observations in a number of systems [Bruno and Vaynblat, Proc. R. Soc. London, Ser. A 457, 2871 (2001)]. While the results of the CS model match the main features of the available experimental data, disagreements in some details between the predictions of this model and experiment, attributable to the ideal character of the CS model, do exist. In this article we present a version of the CS model, the viscous CS model (vCS), as well as a numerical method for its solution. This model and the corresponding solver result in a much improved overall CS modeling capability. The innovations we introduce include (1) enhancement of the model by inclusion of viscous phase-transition effects, as well as a numerical solver that allows a fully rigorous treatment of both (2) the rarefaction fans (which had previously been approximated by "rarefaction discontinuities") and (3) the viscous phase-transition effects that are part of the vCS model. In particular we show that the vCS model accounts accurately for the well known "gradual" rises in the α-ɛ transition which, in the original CS model, were somewhat crudely approximated as jump discontinuities.

  15. Adding Temporal Characteristics to Geographical Schemata and Instances: A General Framework

    NASA Astrophysics Data System (ADS)

    Ota, Morishige

    2018-05-01

    This paper proposes the temporal general feature model (TGFM) as a meta-model for application schemata representing changes of real-world phenomena. It is not very easy to determine history directly from the current application schemata, even if the revision notes are attached to the specification. To solve this problem, the rules for description of the succession between previous and posterior components are added to the general feature model, thus resulting in TGFM. After discussing the concepts associated with the new model, simple examples of application schemata are presented as instances of TGFM. Descriptors for changing properties, the succession of changing properties in moving features, and the succession of features and associations are introduced. The modeling methods proposed in this paper will contribute to the acquisition of consistent and reliable temporal geospatial data.

  16. A transmission-line model of back-cavity dynamics for in-plane pressure-differential microphones.

    PubMed

    Kim, Donghwan; Kuntzman, Michael L; Hall, Neal A

    2014-11-01

    Pressure-differential microphones inspired by the hearing mechanism of a special parasitoid fly have been described previously. The designs employ a beam structure that rotates about two pivots over an enclosed back volume. The back volume is only partially enclosed due to open slits around the perimeter of the beam. The open slits enable incoming sound waves to affect the pressure profile in the microphone's back volume. The goal of this work is to study the net moment applied to pressure-differential microphones by an incoming sound wave, which in turn requires modeling the acoustic pressure distribution within the back volume. A lumped-element distributed transmission-line model of the back volume is introduced for this purpose. It is discovered that the net applied moment follows a low-pass filter behavior such that, at frequencies below a corner frequency depending on geometrical parameters of the design, the applied moment is unaffected by the open slits. This is in contrast to the high-pass filter behavior introduced by barometric pressure vents in conventional omnidirectional microphones. The model accurately predicts observed curvature in the frequency response of a prototype pressure-differential microphone 2 mm × 1 mm × 0.5 mm in size and employing piezoelectric readout.

  17. MARTI: man-machine animation real-time interface

    NASA Astrophysics Data System (ADS)

    Jones, Christian M.; Dlay, Satnam S.

    1997-05-01

    The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.

  18. Assessing the Impact of Electrostatic Drag on Processive Molecular Motor Transport.

    PubMed

    Smith, J Darby; McKinley, Scott A

    2018-06-04

    The bidirectional movement of intracellular cargo is usually described as a tug-of-war among opposite-directed families of molecular motors. While tug-of-war models have enjoyed some success, recent evidence suggests underlying motor interactions are more complex than previously understood. For example, these tug-of-war models fail to predict the counterintuitive phenomenon that inhibiting one family of motors can decrease the functionality of opposite-directed transport. In this paper, we use a stochastic differential equations modeling framework to explore one proposed physical mechanism, called microtubule tethering, that could play a role in this "co-dependence" among antagonistic motors. This hypothesis includes the possibility of a trade-off: weakly bound trailing molecular motors can serve as tethers for cargoes and processing motors, thereby enhancing motor-cargo run lengths along microtubules; however, this introduces a cost of processing at a lower mean velocity. By computing the small- and large-time mean-squared displacement of our theoretical model and comparing our results to experimental observations of dynein and its "helper protein" dynactin, we find some supporting evidence for microtubule tethering interactions. We extrapolate these findings to predict how dynein-dynactin might interact with the opposite-directed kinesin motors and introduce a criterion for when the trade-off is beneficial in simple systems.

  19. Design and damping force characterization of a new magnetorheological damper activated by permanent magnet flux dispersion

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hoon; Han, Chulhee; Choi, Seung-Bok

    2018-01-01

    This work proposes a novel type of tunable magnetorheological (MR) damper operated solely by the location of a permanent magnet incorporated into the piston. To create a larger damping force variation than the previous model, a different design configuration of the permanent-magnet-based MR (PMMR) damper is introduced to provide magnetic flux dispersion in two magnetic circuits by utilizing two materials with different magnetic reluctance. After discussing the design configuration and some advantages of the newly designed mechanism, the magnetic dispersion principle is analyzed through both a formulated analytical model of the magnetic circuit and computer simulation based on the magnetic finite element method. Subsequently, the principal design parameters of the damper are determined and the damper is fabricated. Then, experiments are conducted to evaluate the variation in damping force depending on the location of the magnet. It is demonstrated that the new design and magnetic dispersion concept are valid, showing a higher damping force than the previous model. In addition, a curved structure of the two materials is further fabricated and tested to realize linearity of the damping force variation.

  20. Diurnal forcing of planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Houben, Howard C.

    1991-01-01

    A free convection parameterization has been introduced into the Mars Planetary Boundary Layer Model (MPBL). Previously, the model would fail to generate turbulence under conditions of zero wind shear, even when statically unstable. This in turn produced erroneous results at the equator, for example, when the lack of Coriolis forcing allowed zero wind conditions. The underlying cause of these failures was the level 2 second-order turbulence closure scheme, which derived diffusivities as algebraic functions of the Richardson number (the ratio of static stability to wind shear). In the previous formulation, the diffusivities were scaled by the wind shear--a convenient parameter since it is non-negative. This had the drawback that all diffusivities are zero under conditions of zero shear (viz., the free convection case). The new scheme tests for the condition of zero shear in conjunction with static instability and recalculates the diffusivities using a static stability scaling. The results for a simulation of the equatorial boundary layer at autumnal equinox are presented. (Note that after some wind shear is generated, the model reverts to the traditional diffusivity calculation.)

  1. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, taking object motion into consideration. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  2. Effects of Mass Media and Cultural Drift in a Model for Social Influence

    NASA Astrophysics Data System (ADS)

    Mazzitello, Karina I.; Candia, Julián; Dossetti, Víctor

    In the context of an extension of Axelrod's model for social influence, we study the interplay and competition between the cultural drift, represented as random perturbations, and mass media, introduced by means of an external homogeneous field. Unlike previous studies [J. C. González-Avella et al., Phys. Rev. E 72, 065102(R) (2005)], the mass media coupling proposed here is capable of affecting the cultural traits of any individual in the society, including those who do not share any features with the external message. A noise-driven transition is found: for large noise rates, both the ordered (culturally polarized) phase and the disordered (culturally fragmented) phase are observed, while, for lower noise rates, the ordered phase prevails. In the former case, the external field is found to induce cultural ordering, a behavior opposite to that reported in previous studies using a different prescription for the mass media interaction. We compare the predictions of this model to statistical data measuring the impact of a mass media vasectomy promotion campaign in Brazil.
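
    A toy sketch of an Axelrod-type update step with an external (mass media) field that can act on any agent regardless of shared features, plus cultural drift as random single-trait perturbations; the lattice size, numbers of features and traits, and the field and noise rates are assumptions for illustration only.

      # Hedged sketch: one Monte Carlo sweep of an Axelrod model with a global
      # mass-media field and cultural drift (random single-trait perturbations).
      import numpy as np

      L, F, Q = 10, 3, 5           # lattice size, features, traits per feature
      B, R = 0.1, 0.001            # media field strength, drift (noise) rate
      rng = np.random.default_rng(3)
      culture = rng.integers(0, Q, size=(L, L, F))
      media = rng.integers(0, Q, size=F)            # external message

      def sweep(culture):
          for _ in range(L * L):
              i, j = rng.integers(0, L, size=2)
              if rng.random() < B:
                  # Mass-media interaction: may act even on agents sharing no
                  # features with the message (a simplifying assumption here).
                  partner, adopt = media, True
              else:
                  di, dj = [(0, 1), (0, -1), (1, 0), (-1, 0)][rng.integers(4)]
                  partner = culture[(i + di) % L, (j + dj) % L]
                  adopt = rng.random() < np.mean(culture[i, j] == partner)
              if adopt:
                  diff = np.flatnonzero(culture[i, j] != partner)
                  if diff.size:
                      f = rng.choice(diff)
                      culture[i, j, f] = partner[f]
              if rng.random() < R:                  # cultural drift
                  culture[i, j, rng.integers(F)] = rng.integers(Q)
          return culture

      culture = sweep(culture)
      print(np.mean(culture[..., 0] == media[0]).round(2))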

  3. Modelling eye movements in a categorical search task

    PubMed Central

    Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris

    2013-01-01

    We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720

  4. Descriptive Modeling of the Dynamical Systems and Determination of Feedback Homeostasis at Different Levels of Life Organization.

    PubMed

    Zholtkevych, G N; Nosov, K V; Bespalov, Yu G; Rak, L I; Abhishek, M; Vysotskaya, E V

    2018-05-24

    State-of-the-art research in the field of life's organization confronts the need to investigate a number of interacting components, their properties and the conditions for sustainable behaviour within a natural system. In biology, ecology and the life sciences, the performance of such a stable system is usually related to homeostasis, a property of the system to actively regulate its state within certain allowable limits. In our previous work, we proposed a deterministic model for systems' homeostasis. The model was based on dynamical systems theory and the pairwise relationships of competition, amensalism and antagonism taken from theoretical biology and ecology. The present paper adds a different dimension to our previous results based on the same model. Here, we introduce the influence of inter-component relationships in a system, wherein the impact is characterized by its direction (neutral, positive, or negative) as well as by its (absolute) value, or strength. This makes the model stochastic, which, in our opinion, is more consistent with real-world elements affected by various random factors. The case study includes two examples from the areas of hydrobiology and medicine. The models obtained for these cases enabled us to propose a convincing explanation for the corresponding phenomena identified in different types of natural systems.

  5. Modeling recent economic debates

    NASA Astrophysics Data System (ADS)

    Skiadas, Christos H.

    The previous years' disasters in the stock markets all over the world and the resulting economic crisis led to serious criticism of the various models used. It was evident that large fluctuations and sudden losses may occur even in a well organized and supervised context such as the European Union. In order to explain such economic systems, we explore models of interacting and conflicting populations. The populations conflict within the same environment (a stock market or a group of countries such as the EU). Three models were introduced: 1) the Lotka-Volterra model, 2) the Lanchester or Richardson model, and 3) a new model for two conflicting populations. These models assume immediate interaction between the two conflicting populations. This is usually not the case in a stock market or between countries, as delays arise in the information process. The main rules present include mutual interaction between adopters and potential adopters, word-of-mouth communication, and of course the innovation diffusion process. In a previous paper (Skiadas, 2010 [9]) we proposed and analyzed a model including mutual interaction with delays due to the innovation diffusion process. The model characteristics were expressed by third order terms providing four characteristic symmetric stationary points. In this paper we summarize the previous results and analyze the non-symmetric case, in which the leading part receives information immediately while the second part receives the information following a delay mechanism due to the innovation diffusion process (the spread of information), which can be expressed by a third order term. In the latter case the non-symmetric process leads to gains for the leading part while the second part oscillates between gains and losses over time.
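
    A brief sketch of the first of the three model classes mentioned (a Lotka-Volterra system of two interacting populations), integrated numerically; the coefficients below are assumptions for illustration and carry no economic calibration.

      # Hedged sketch: classical Lotka-Volterra dynamics for two interacting
      # populations, used here only to illustrate the class of models discussed.
      import numpy as np
      from scipy.integrate import odeint

      def lotka_volterra(state, t, a=1.0, b=0.1, c=1.5, d=0.075):
          x, y = state
          dxdt = a * x - b * x * y      # "prey"-like population
          dydt = -c * y + d * x * y     # "predator"-like population
          return [dxdt, dydt]

      t = np.linspace(0, 30, 600)
      trajectory = odeint(lotka_volterra, y0=[10.0, 5.0], t=t)
      print(trajectory[-1].round(2))    # final population sizes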

  6. Integration of RAM-SCB into the Space Weather Modeling Framework

    DOE PAGES

    Welling, Daniel; Toth, Gabor; Jordanova, Vania Koleva; ...

    2018-02-07

    Numerical simulations of the ring current are a challenging endeavor. They require a large set of inputs, including electric and magnetic fields and plasma sheet fluxes. Because the ring current broadly affects the magnetosphere-ionosphere system, the input set is dependent on the ring current region itself. This makes obtaining a set of inputs that are self-consistent with the ring current difficult. To overcome this challenge, researchers have begun coupling ring current models to global models of the magnetosphere-ionosphere system. This paper describes the coupling of the Ring current Atmosphere interaction Model with Self-Consistent Magnetic field (RAM-SCB) to the models within the Space Weather Modeling Framework. Full details on both previously introduced and new coupling mechanisms are defined. Finally, the impact of self-consistently including the ring current on the magnetosphere-ionosphere system is illustrated via a set of example simulations.

  7. Dementia Grief: A Theoretical Model of a Unique Grief Experience

    PubMed Central

    Blandin, Kesstan; Pepin, Renee

    2016-01-01

    Previous literature reveals a high prevalence of grief in dementia caregivers before the physical death of the person with dementia, a grief that is associated with stress, burden, and depression. To date, theoretical models and therapeutic interventions for grief in caregivers have not adequately considered the grief process, but instead have focused on grief as a symptom that manifests within the process of caregiving. The Dementia Grief Model explicates the unique process of pre-death grief in dementia caregivers. In this paper we introduce the Dementia Grief Model, describe the unique characteristics of dementia grief, and present the psychological states associated with the process of dementia grief. The model explicates an iterative grief process involving three states – separation, liminality, and re-emergence – each with a dynamic mechanism that facilitates or hinders movement through the dementia grief process. Finally, we offer potential applied research questions informed by the model. PMID:25883036

  8. Integration of RAM-SCB into the Space Weather Modeling Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welling, Daniel; Toth, Gabor; Jordanova, Vania Koleva

    Numerical simulations of the ring current are a challenging endeavor. They require a large set of inputs, including electric and magnetic fields and plasma sheet fluxes. Because the ring current broadly affects the magnetosphere-ionosphere system, the input set is dependent on the ring current region itself. This makes obtaining a set of inputs that are self-consistent with the ring current difficult. To overcome this challenge, researchers have begun coupling ring current models to global models of the magnetosphere-ionosphere system. This paper describes the coupling of the Ring current Atmosphere interaction Model with Self-Consistent Magnetic field (RAM-SCB) to the models within the Space Weather Modeling Framework. Full details on both previously introduced and new coupling mechanisms are defined. Finally, the impact of self-consistently including the ring current on the magnetosphere-ionosphere system is illustrated via a set of example simulations.

  9. A discrete model to study reaction-diffusion-mechanics systems.

    PubMed

    Weise, Louis D; Nash, Martyn P; Panfilov, Alexander V

    2011-01-01

    This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework employs a FitzHugh-Nagumo type RD model coupled to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are given by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference scheme for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity that had previously been found with a continuous RD mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependence on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects.
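
    The reaction-diffusion part of such a framework can be sketched with a one-dimensional FitzHugh-Nagumo model advanced by explicit finite differences (the mechanical coupling and Verlet mass-lattice step of the dRDM model are omitted); all parameter values are assumptions for illustration.

      # Hedged sketch: explicit finite-difference step of a 1D FitzHugh-Nagumo
      # reaction-diffusion model (the mechanical coupling of dRDM is not included).
      import numpy as np

      def fhn_rd(n=200, steps=2000, dt=0.05, dx=1.0, D=1.0,
                 a=0.1, eps=0.01, k=8.0):
          u = np.zeros(n)       # excitation variable
          v = np.zeros(n)       # recovery variable
          u[:10] = 1.0          # initial stimulus at the left end
          for _ in range(steps):
              lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic BC
              du = D * lap + k * u * (u - a) * (1.0 - u) - v
              dv = eps * (k * u - v)
              u, v = u + dt * du, v + dt * dv
          return u, v

      u, v = fhn_rd()
      print(u.max().round(3), v.max().round(3))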

  10. A Discrete Model to Study Reaction-Diffusion-Mechanics Systems

    PubMed Central

    Weise, Louis D.; Nash, Martyn P.; Panfilov, Alexander V.

    2011-01-01

    This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework employs a FitzHugh-Nagumo type RD model coupled to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are given by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference scheme for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity that had previously been found with a continuous RD mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependence on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects. PMID:21804911

  11. In silico modeling of the yeast protein and protein family interaction network

    NASA Astrophysics Data System (ADS)

    Goh, K.-I.; Kahng, B.; Kim, D.

    2004-03-01

    Understanding how the protein interaction networks of living organisms have evolved or are organized can be the first stepping stone in unveiling how life works on a fundamental ground. Here we introduce an in silico "coevolutionary" model for the protein interaction network and the protein family network. The essential ingredients of the model include the protein family identity and its robustness under evolution, as well as the three mechanisms previously proposed: gene duplication, divergence, and mutation. This model produces a prototypical feature of complex networks in a wide range of parameter space, following the generalized Pareto distribution in connectivity. Moreover, we investigate other structural properties of our model in detail with specific parameter values relevant to the yeast Saccharomyces cerevisiae, showing excellent agreement with the empirical data. Our model indicates that the physical constraints encoded via the domain structure of proteins play a crucial role in protein interactions.

  12. Assessment of mid-latitude atmospheric variability in CMIP5 models using a process oriented-metric

    NASA Astrophysics Data System (ADS)

    Di Biagio, Valeria; Calmanti, Sandro; Dell'Aquila, Alessandro; Ruti, Paolo

    2013-04-01

    We compare, for the period 1962-2000, an estimate of the northern hemisphere mid-latitude winter atmospheric variability according to several global climate models included in the fifth phase of the Climate Model Intercomparison Project (CMIP5) with the results of the models belonging to the previous CMIP3 and with the NCEP-NCAR reanalysis. We use the space-time Hayashi spectra of the 500 hPa geopotential height fields to characterize the variability of atmospheric circulation regimes, and we introduce an ad hoc integral measure of the variability observed in the Northern Hemisphere on different spectral sub-domains. The overall performance of each model is evaluated by considering the total wave variability as a global scalar measure of the statistical properties of different types of atmospheric disturbances. The variability associated with eastward propagating baroclinic waves and with planetary waves is instead used to describe the performance of each model in terms of specific physical processes. We find that the two model ensembles (CMIP3 and CMIP5) do not show substantial differences in the description of northern hemisphere winter mid-latitude atmospheric variability, although some CMIP5 models display performance superior to their previous versions implemented in CMIP3. Preliminary results for the 21st century RCP 4.5 scenario for the CMIP5 models will also be discussed.

  13. Bayesian hierarchical Poisson models with a hidden Markov structure for the detection of influenza epidemic outbreaks.

    PubMed

    Conesa, D; Martínez-Beneito, M A; Amorós, R; López-Quílez, A

    2015-04-01

    Considerable effort has been devoted to the development of statistical algorithms for the automated monitoring of influenza surveillance data. In this article, we introduce a framework of models for the early detection of the onset of an influenza epidemic which is applicable to different kinds of surveillance data. In particular, the process of the observed cases is modelled via a Bayesian Hierarchical Poisson model in which the intensity parameter is a function of the incidence rate. The key point is to consider this incidence rate as a normal distribution in which both parameters (mean and variance) are modelled differently, depending on whether the system is in an epidemic or non-epidemic phase. To do so, we propose a hidden Markov model in which the transition between both phases is modelled as a function of the epidemic state of the previous week. Different options for modelling the rates are described, including the option of modelling the mean at each phase as autoregressive processes of order 0, 1 or 2. Bayesian inference is carried out to provide the probability of being in an epidemic state at any given moment. The methodology is applied to various influenza data sets. The results indicate that our methods outperform previous approaches in terms of sensitivity, specificity and timeliness. © The Author(s) 2011 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
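
    A simplified sketch of the hidden-Markov idea: a two-state (non-epidemic/epidemic) model with Poisson-distributed weekly counts, filtered forward to give the probability of being in the epidemic state each week. This is a toy with fixed parameters rather than the paper's full Bayesian hierarchical model, and all rates, transition probabilities and counts are assumptions.

      # Hedged sketch: forward filtering of a 2-state HMM (0 = non-epidemic,
      # 1 = epidemic) with Poisson emissions, giving P(epidemic | counts so far).
      import numpy as np
      from scipy.stats import poisson

      rates = np.array([20.0, 80.0])                  # mean weekly counts per state
      trans = np.array([[0.95, 0.05],                 # P(state_t | state_{t-1})
                        [0.10, 0.90]])
      counts = [18, 22, 25, 40, 75, 90, 85, 60, 30]   # hypothetical weekly counts

      belief = np.array([0.99, 0.01])                 # prior at week 0
      for c in counts:
          predicted = belief @ trans                  # one-step-ahead prediction
          likelihood = poisson.pmf(c, rates)
          belief = predicted * likelihood
          belief /= belief.sum()
          print(f"count={c:3d}  P(epidemic)={belief[1]:.3f}")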

  14. Simplified ISCCP cloud regimes for evaluating cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    We take advantage of ISCCP simulator data available for many models that participated in CMIP5, in order to introduce a framework for comparing model cloud output with corresponding ISCCP observations based on the cloud regime (CR) concept. Simplified global CRs are employed, derived from the co-variations of three variables, namely cloud optical thickness, cloud top pressure and cloud fraction (τ, p_c, CF). Following evaluation criteria established in a companion paper of ours (Jin et al. 2016), we assess model cloud simulation performance based on how well the simplified CRs are simulated in terms of similarity of centroids, global values and map correlations of relative frequency of occurrence, and long-term total cloud amounts. Mirroring prior results, modeled clouds tend to be too optically thick and not as extensive as in observations. CRs with high-altitude clouds from storm activity are not as well simulated here compared to the previous study, but other regimes containing near-overcast low clouds show improvement. Models that performed well in the companion paper against CRs defined by joint τ-p_c histograms distinguish themselves again here, but improvements for previously underperforming models are also seen. Averaging across models does not yield a drastically better picture, except for cloud geographical locations. Cloud evaluation with simplified regimes thus seems more forgiving than that using histogram-based CRs, while still strict enough to reveal model weaknesses.

  15. A Bridge from Optical to Infrared Galaxies: Explaining Local Properties, Predicting Galaxy Counts and the Cosmic Background Radiation

    NASA Astrophysics Data System (ADS)

    Totani, T.; Takeuchi, T. T.

    2001-12-01

    A new model of infrared galaxy counts and the cosmic background radiation (CBR) is developed by extending a model for optical/near-infrared galaxies. Important new characteristics of this model are that mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies, and that the big grain dust temperature T_dust is calculated based on a physical consideration of energy balance, rather than using the empirical relation between T_dust and total infrared luminosity L_IR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the L_IR-T_dust correlation, and the infrared luminosity function, are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. We then make predictions for faint infrared counts (at 15, 60, 90, 170, 450, and 850 μm) and the CBR with this model. We find considerably different results from most previous works based on the empirical L_IR-T_dust relation; in particular, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K). This indicates that intense starbursts of forming elliptical galaxies should have occurred at z ~ 2-3, in contrast to the previous results that significant starbursts beyond z ~ 1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making the FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on the MIR CBR from TeV gamma-ray observations and the COBE detections of the FIR CBR. The authors thank the Japan Society for the Promotion of Science for financial support.

  16. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James

    2013-12-14

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
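
    As a minimal illustration of the kinetic Monte Carlo bookkeeping discussed above (not Zacros itself), the sketch below runs a rejection-free KMC loop for adsorption/desorption on a 1D lattice, where a first-nearest-neighbour pairwise interaction modifies the desorption rate; all rates and the interaction energy are assumptions, and this is a far simpler energetics model than a cluster expansion.

      # Hedged sketch: rejection-free KMC for adsorption/desorption on a 1D lattice,
      # with a first-nearest-neighbour pairwise interaction entering the
      # desorption rate.
      import math, random

      random.seed(0)
      N, k_ads, k_des0, eps_nn, kT = 50, 1.0, 0.5, 0.1, 0.025
      occ = [0] * N
      t = 0.0

      def desorption_rate(i):
          n_nn = occ[(i - 1) % N] + occ[(i + 1) % N]
          return k_des0 * math.exp(-eps_nn * n_nn / kT)   # neighbours stabilise the adsorbate

      for _ in range(10000):
          events = []
          for i in range(N):
              if occ[i] == 0:
                  events.append((k_ads, i, 1))               # adsorption event
              else:
                  events.append((desorption_rate(i), i, 0))  # desorption event
          total = sum(r for r, _, _ in events)
          t += -math.log(random.random() + 1e-300) / total   # exponential waiting time
          pick, acc = random.random() * total, 0.0
          for r, i, new in events:
              acc += r
              if acc >= pick:
                  occ[i] = new
                  break

      print("coverage =", sum(occ) / N, " time =", round(t, 2))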

  17. Multiresolution saliency map based object segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Wang, Xin; Dai, ZhenYou

    2015-11-01

    Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used to initialize parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models have introduced multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interactions to get more precise results without predefined pixel types. A concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map based object segmentation method is simple and efficient.

  18. Lattice model for self-assembly with application to the formation of cytoskeletal-like structures

    NASA Astrophysics Data System (ADS)

    Stewman, Shannon F.; Dinner, Aaron R.

    2007-07-01

    We introduce a stochastic approach for self-assembly in systems far from equilibrium. The building blocks are represented by a lattice of discrete variables (Potts-like spins), and physically meaningful mechanisms are obtained by restricting transitions through spatially local rules based on experimental data. We use the method to study nucleation of filopodia-like bundles in a system consisting of purified actin, fascin, actin-related protein 2/3, and beads coated with Wiskott-Aldrich syndrome protein. Consistent with previous speculation based on static experimental images, we find that bundles derive from Λ-precursor-like patterns of spins on the lattice. The ratcheting of the actin network relative to the surface that represents beads plays an important role in determining the number and orientation of bundles due to the fact that branching is the primary means for generating barbed ends pointed in directions that allow rapid filament growth. By enabling the de novo formation of coexisting morphologies without the computational cost of explicit representation of proteins, the approach introduced complements earlier models of cytoskeletal behavior in vitro and in vivo.

  19. The results of marine electromagnetic sounding with a high-power remote source in the Kola Bay in the Barents Sea

    NASA Astrophysics Data System (ADS)

    Grigoriev, V. F.; Korotaev, S. M.; Kruglyakov, M. S.; Orekhova, D. A.; Popova, I. V.; Tereshchenko, E. D.; Tereshchenko, P. E.; Schors, Yu. G.

    2013-05-01

    The first Russian six-component seafloor electromagnetic (EM) receivers were tested in an experiment carried out in Kola Bay in the Barents Sea. The signals transmitted by a remote high-power ELF source at several frequencies in the decahertz range were recorded by six receivers deployed on the seafloor along the profile crossing the Kola Bay. Although not all the stations successfully recorded all six components due to technical failures, the overall quality of the data is quite suitable for interpretation. The interpretation was carried out by three-dimensional (3D) modeling of the electromagnetic field with neural network inversion. The a priori geoelectrical model of Kola Bay, which was reconstructed by generalizing previous geological and geophysical data, including data from ground magnetotelluric sounding and magnetovariational profiling, predicted EM fields that are far from those measured in the experiment. However, by a step-by-step modification of the initial model, we achieved quite a satisfactory fit. The resulting model provides a basis for correcting previous notions concerning the geological and geophysical structure of the region, particularly the features associated with fault tectonics.

  20. Simulation of C. elegans thermotactic behavior in a linear thermal gradient using a simple phenomenological motility model.

    PubMed

    Matsuoka, Tomohiro; Gomi, Sohei; Shingai, Ryuzo

    2008-01-21

    The nematode Caenorhabditis elegans has been reported to exhibit thermotaxis, a sophisticated behavioral response to temperature. However, there appears to be some inconsistency among previous reports. The results of population-level thermotaxis investigations suggest that C. elegans can navigate to the region of its cultivation temperature from nearby regions of higher or lower temperature. However, individual C. elegans nematodes appear to show only cryophilic tendencies above their cultivation temperature. A Monte-Carlo-style simulation using a simple individual model of C. elegans provides insight into clarifying the apparent inconsistencies among previous findings. A simulation was conducted using a thermotaxis model that includes cryophilic tendencies, isothermal tracking, and thermal adaptation. Owing to the random-walk character of C. elegans locomotion, cryophilic tendencies above the cultivation temperature alone result in population-level thermophilic tendencies. Isothermal tracking, a period of active pursuit of an isotherm in regions of temperature near the prior cultivation temperature, can strengthen the tendency of these worms to gather around near-cultivation-temperature regions. A statistical index, the thermotaxis (TTX) L-skewness, was introduced and proved useful in analyzing the population-level thermotaxis of model worms.
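
    A toy Monte-Carlo sketch of the population-level argument above: each model worm performs a biased random walk in a linear gradient, turning more often when moving up-gradient above its cultivation temperature (a crude stand-in for the cryophilic bias). All rates and parameters are assumptions, not the authors' calibrated model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_worms, n_steps = 2000, 5000
    grad = 0.05            # deg C per mm, linear thermal gradient (assumed)
    T_cult = 20.0          # cultivation temperature (assumed)
    x = rng.uniform(-100.0, 100.0, n_worms)      # positions in mm, T(x) = T_cult + grad * x
    v = rng.choice([-1.0, 1.0], n_worms) * 0.1   # run velocity, mm per step

    for _ in range(n_steps):
        above = grad * x > 0.0                   # worm is warmer than T_cult
        moving_up = v > 0.0
        # Baseline turning probability, raised when moving up-gradient above T_cult
        # (cryophilic bias); no bias below T_cult in this toy version.
        p_turn = np.where(above & moving_up, 0.10, 0.02)
        turn = rng.random(n_worms) < p_turn
        v = np.where(turn, -v, v)
        x = np.clip(x + v, -100.0, 100.0)

    temps = T_cult + grad * x
    print(f"mean T = {temps.mean():.2f}, fraction within 1 deg of T_cult = "
          f"{(np.abs(temps - T_cult) < 1.0).mean():.2f}")
    ```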

  1. Development of a coupled hydrological - hydrodynamic model for probabilistic catchment flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Quinn, Niall; Freer, Jim; Coxon, Gemma; Dunne, Toby; Neal, Jeff; Bates, Paul; Sampson, Chris; Smith, Andy; Parkin, Geoff

    2017-04-01

    Computationally efficient flood inundation modelling systems capable of representing important hydrological and hydrodynamic flood-generating processes over relatively large regions are vital for those interested in flood preparation, response, and real-time forecasting. However, such systems are currently not readily available. This is particularly important where flood predictions from intense rainfall are considered, as the processes leading to flooding often involve localised, non-linear, spatially connected hillslope-catchment responses. Therefore, this research introduces a novel hydrological-hydraulic modelling framework for the provision of probabilistic flood inundation predictions across catchment to regional scales that explicitly accounts for spatial variability in rainfall-runoff and routing processes. Approaches have been developed to automate the provision of required input datasets and estimate essential catchment characteristics from freely available, national datasets. This is an essential component of the framework: when making predictions over multiple catchments or at relatively large scales, where data are often scarce, obtaining local information and manually incorporating it into the model quickly becomes infeasible. An extreme flooding event in the town of Morpeth, NE England, in 2008 was used as a first case study evaluation of the modelling framework introduced. The results demonstrated a high degree of prediction accuracy when comparing modelled and reconstructed event characteristics, while the efficiency of the modelling approach enabled the generation of relatively large ensembles of realisations from which uncertainty within the prediction may be represented. This research supports previous literature highlighting the importance of probabilistic forecasting, particularly during extreme events, which can often be poorly characterised or even missed by deterministic predictions due to the inherent uncertainty in any model application. Future research will aim to further evaluate the robustness of the approaches introduced by applying the modelling framework to a variety of historical flood events across UK catchments. Furthermore, the flexibility and efficiency of the framework are ideally suited to the examination of the propagation of errors through the model, which will help gain a better understanding of the dominant sources of uncertainty currently impacting flood inundation predictions.

  2. Translating ubuntu to Spanish: Convivencia as a framework for re-centring education as a moral enterprise

    NASA Astrophysics Data System (ADS)

    Luschei, Thomas F.

    2016-02-01

    In this essay, the author introduces the concept of "convivencia" (peaceful coexistence) as a framework for re-centring education as a moral enterprise. He discusses convivencia within the context of education and society in Colombia, paying special attention to the Colombian rural school model Escuela Nueva (New School). This discussion draws on both previous evidence and the author's own research on the implementation of the Escuela Nueva model in urban areas of Colombia. He discusses several facets of convivencia and parallels with the ideas and ideals of ubuntu. Using convivencia as an organising principle, he presents insights for educational practitioners and researchers related to re-centring education as a moral enterprise.

  3. Effects of Pump-turbine S-shaped Characteristics on Transient Behaviours: Experimental Investigation

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Yang, Jiandong; Hu, Jinhong; Tang, Renbo

    2017-05-01

    A pumped-storage station model was set up and introduced in a previous paper. In the model station, the S-shaped characteristic curves were measured under the load-rejection condition with the guide vanes stalled. Load-rejection tests in which the guide vanes closed linearly were performed to validate the effect of the S-shaped characteristics on hydraulic transients. Load-rejection experiments with different guide vane closing schemes were also performed to determine a suitable scheme that accounts for the S-shaped characteristics. The condition in which one pump turbine rejects its load after another, defined as one-after-another (OAA) load rejection, was also tested to investigate the possibility of S-induced extreme draft-tube pressure.

  4. Requiem for the max rule?

    PubMed Central

    Ma, Wei Ji; Shen, Shan; Dziugaite, Gintare; van den Berg, Ronald

    2015-01-01

    In tasks such as visual search and change detection, a key question is how observers integrate noisy measurements from multiple locations to make a decision. Decision rules proposed to model this process have fallen into two categories: Bayes-optimal (ideal observer) rules and ad hoc rules. Among the latter, the maximum-of-outputs (max) rule has been most prominent. Reviewing recent work and performing new model comparisons across a range of paradigms, we find that in all cases except for one, the optimal rule describes human data as well as or better than every max rule either previously proposed or newly introduced here. This casts doubt on the utility of the max rule for understanding perceptual decision-making. PMID:25584425
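
    A brief simulation of the two decision-rule families contrasted above, for a yes/no detection task with N independent noisy measurements and a known target strength d. The parameter values and the max-rule criterion are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, d, n_trials = 4, 1.5, 20000     # set size, target strength, trials (assumed values)

    def simulate(present):
        """Gaussian measurements at N locations; one location gets +d if a target is present."""
        x = rng.normal(0.0, 1.0, (n_trials, N))
        if present:
            x[np.arange(n_trials), rng.integers(0, N, n_trials)] += d
        return x

    x_absent, x_present = simulate(False), simulate(True)

    # Bayes-optimal rule: likelihood ratio of "one target among N" vs "noise only";
    # respond "present" when the ratio exceeds 1 (equal priors).
    def lr(x):
        return np.mean(np.exp(d * x - d**2 / 2.0), axis=1)

    # Max rule: respond "present" when the largest measurement exceeds a criterion.
    crit = 1.8     # an arbitrary criterion for the max rule

    print(f"optimal rule: hit {(lr(x_present) > 1.0).mean():.3f}, "
          f"false alarm {(lr(x_absent) > 1.0).mean():.3f}")
    print(f"max rule:     hit {(x_present.max(axis=1) > crit).mean():.3f}, "
          f"false alarm {(x_absent.max(axis=1) > crit).mean():.3f}")
    ```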

  5. Super-elite plasma rings and the orbits of planets and satellites isomorphic to the orbits of electrons in the Bohr's model of the hydrogen atom

    NASA Astrophysics Data System (ADS)

    Rabinovich, B. I.

    2007-10-01

    This paper continues the series of papers [1-5] and generalizes the previous results to a proto-ring of magnetized plasma whose density decreases in the radial direction. The problem of quantization of the sector and orbital velocities, and of the radii and periods of revolution of elite plasma rings is considered. A new concept of super-elite rings is introduced. Their isomorphism with the orbits of the planets and planetary satellites in the Solar System is proved. This isomorphism also extends to the orbits of electrons in the Bohr’s model of the hydrogen atom.

  6. Linear stability analysis of the Vlasov-Poisson equations in high density plasmas in the presence of crossed fields and density gradients

    NASA Technical Reports Server (NTRS)

    Kaup, D. J.; Hansen, P. J.; Choudhury, S. Roy; Thomas, Gary E.

    1986-01-01

    The equations for the single-particle orbits in a nonneutral high density plasma in the presence of inhomogeneous crossed fields are obtained. Using these orbits, the linearized Vlasov equation is solved as an expansion in the orbital radii in the presence of inhomogeneities and density gradients. A model distribution function is introduced whose cold-fluid limit is exactly the same as that used in many previous studies of the cold-fluid equations. This model function is used to reduce the linearized Vlasov-Poisson equations to a second-order ordinary differential equation for the linearized electrostatic potential whose eigenvalue is the perturbation frequency.

  7. Moments of action provide insight into critical times for advection-diffusion-reaction processes.

    PubMed

    Ellery, Adam J; Simpson, Matthew J; McCue, Scott W; Baker, Ruth E

    2012-09-01

    Berezhkovskii and co-workers introduced the concept of local accumulation time as a finite measure of the time required for the transient solution of a reaction-diffusion equation to effectively reach steady state [Biophys J. 99, L59 (2010); Phys. Rev. E 83, 051906 (2011)]. Berezhkovskii's approach is a particular application of the concept of mean action time (MAT) that was introduced previously by McNabb [IMA J. Appl. Math. 47, 193 (1991)]. Here, we generalize these previous results by presenting a framework to calculate the MAT, as well as the higher moments, which we call the moments of action. The second moment is the variance of action time, the third moment is related to the skew of action time, and so on. We consider a general transition from some initial condition to an associated steady state for a one-dimensional linear advection-diffusion-reaction partial differential equation (PDE). Our results indicate that it is possible to solve for the moments of action exactly without requiring the transient solution of the PDE. We present specific examples that highlight potential weaknesses of previous studies that have considered the MAT alone without considering higher moments. Finally, we also provide a meaningful interpretation of the moments of action by presenting simulation results from a discrete random-walk model together with some analysis of the particle lifetime distribution. This work shows that the moments of action are identical to the moments of the particle lifetime distribution for certain transitions.
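
    One common way to set up these quantities (a hedged outline in assumed notation; the precise definitions are in the cited paper) is through the fraction of the transition completed at position x and time t, with C(x,t) the transient solution, C_0(x) the initial condition, and C_s(x) the steady state:

    ```latex
    % Hedged outline of the moments-of-action construction (notation assumed, not quoted).
    F(x,t) = 1 - \frac{C(x,t) - C_s(x)}{C_0(x) - C_s(x)}, \qquad
    f(x,t) = \frac{\partial F(x,t)}{\partial t}, \qquad
    M_k(x) = \int_0^{\infty} t^{\,k} f(x,t)\,\mathrm{d}t .
    % The mean action time is M_1(x); the variance of action time is M_2(x) - M_1(x)^2,
    % and higher moments characterise skew and beyond.
    ```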

  8. Including irrigation in niche modelling of the invasive wasp Vespula germanica (Fabricius) improves model fit to predict potential for further spread

    PubMed Central

    Kriticos, Darren J.; Veldtman, Ruan

    2017-01-01

    The European wasp, Vespula germanica (Fabricius) (Hymenoptera: Vespidae), is of Palaearctic origin, being native to Europe, northern Africa and Asia, and introduced into North America, Chile, Argentina, Iceland, Ascension Island, South Africa, Australia and New Zealand. Due to its polyphagous nature and scavenging behaviour, V. germanica threatens agriculture and silviculture, and negatively affects biodiversity, while its aggressive nature and venomous sting pose a health risk to humans. In areas with warmer winters and longer summers, queens and workers can survive the winter months, leading to the build-up of large nests during the following season; thereby increasing the risk posed by this species. To prevent or prepare for such unwanted impacts it is important to know where the wasp may be able to establish, either through natural spread or through introduction as a result of human transport. Distribution data from Argentina and Australia, and seasonal phenology data from Argentina were used to determine the potential distribution of V. germanica using CLIMEX modelling. In contrast to previous models, the influence of irrigation on its distribution was also investigated. Under a natural rainfall scenario, the model showed similarities to previous models. When irrigation is applied, dry stress is alleviated, leading to larger areas modelled climatically suitable compared with previous models, which provided a better fit with the actual distribution of the species. The main areas at risk of invasion by V. germanica include western USA, Mexico, small areas in Central America and in the north-western region of South America, eastern Brazil, western Russia, north-western China, Japan, the Mediterranean coastal regions of North Africa, and parts of southern and eastern Africa. PMID:28715452

  9. Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.

    2014-12-01

    High-resolution ocean models, which are eddy permitting and resolving, require large computing resources to produce centuries' worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model, FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower-resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on the mean climate state and variability in an AOGCM.
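
    A minimal sketch of the perturbation written above, adding either white or red (AR(1)) noise to a tendency field each time step. The grid size, noise amplitude, and decorrelation time are assumptions, not the values diagnosed from HiGEM.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    ny, nx = 36, 72                  # coarse model grid (assumed)
    sigma = 0.02                     # noise standard deviation, K per time step (assumed)
    tau = 10.0                       # decorrelation time in steps for red noise (assumed)
    alpha = np.exp(-1.0 / tau)       # AR(1) coefficient

    T = np.full((ny, nx), 283.0)     # temperature field (K)
    xi = np.zeros((ny, nx))          # red-noise state

    def deterministic_increment(T):
        """Placeholder for the model's deterministic tendency over one step."""
        return np.zeros_like(T)

    for step in range(1000):
        white = rng.normal(0.0, sigma, (ny, nx))
        # Red noise: AR(1) update; setting alpha = 0 recovers white noise.
        xi = alpha * xi + np.sqrt(1.0 - alpha**2) * white
        # T(t) = T(t-1) + (dT(t) + xi(t)), as in the abstract's perturbation form.
        T = T + deterministic_increment(T) + xi
    ```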

  10. A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.

    PubMed

    Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N

    2014-01-01

    Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.
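
    A minimal sketch of the hybrid idea: a fixed relational core for stable entities plus an EAV table for attributes that change as requirements evolve. The table and column names here are invented for illustration, not the CURe schema.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Relational core: stable, strongly typed entities keep a conventional schema.
    cur.execute("""CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        date_of_birth TEXT,
        sex TEXT)""")

    # EAV extension: new study-specific attributes are added as rows,
    # so the back-end schema never has to change.
    cur.execute("""CREATE TABLE patient_attribute (
        patient_id INTEGER REFERENCES patient(patient_id),
        attribute TEXT NOT NULL,
        value TEXT,
        PRIMARY KEY (patient_id, attribute))""")

    cur.execute("INSERT INTO patient VALUES (1, '1980-05-17', 'F')")
    cur.executemany("INSERT INTO patient_attribute VALUES (?, ?, ?)",
                    [(1, "baseline_hba1c", "6.4"), (1, "smoking_status", "never")])

    # Pivoting EAV rows back into columns happens at query time.
    rows = cur.execute("""SELECT p.patient_id, a.attribute, a.value
                          FROM patient p
                          JOIN patient_attribute a USING (patient_id)""").fetchall()
    print(rows)
    ```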

  11. Modeling of transmission line exposure to direct lightning strokes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizk, F.A.M.

    1990-10-01

    The paper introduces a new model for assessing the exposure of free-standing structures and horizontal conductors above flat ground to direct lightning strokes. The starting point of this work is a recently developed criterion for positive leader inception, modified to account for positive leaders initiated under the influence of a negative descending lightning stroke. Subsequent propagation of the positive leader is analyzed to define the point of encounter of the two leaders which determines the attractive radius of a structure or the attractive lateral distance of a conductor. These parameters are investigated for a wide range of heights and return-stroke currents. A method for analyzing shielding failure and determining the critical shielding angle is also described. The predictions of the model are compared with field observations and previously developed models.

  12. Photometric Lunar Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.

  13. The spread of 'Post Abortion Syndrome' as social diagnosis.

    PubMed

    Kelly, Kimberly

    2014-02-01

    This paper examines the content of Post Abortion Syndrome (PAS) claims, the social actors involved and how this social diagnosis bypassed professional dissent and diffused into public policy in the United States. Previous works on the spread of PAS focus almost exclusively on anti-abortion think tanks and policymakers. Missing from these analyses, however, is an emphasis on the grassroots-level actions undertaken by evangelical crisis pregnancy center (CPC) activists in introducing and circulating PAS claims. The CPC movement introduced PAS claims and provided the fodder for anti-abortion think tanks to construct evidence for pro-life claims. Despite dissent from health professionals and academic researchers, CPC PAS claims successfully diffused into federal and state abortion policy. I draw upon Brown et al.'s social diagnosis framework and Armstrong's five-stage model of diagnosis development to frame this account. Copyright © 2013. Published by Elsevier Ltd.

  14. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  15. Dibaryons with Strangeness in Quark Models

    NASA Astrophysics Data System (ADS)

    Chen, Mei; Gong, Fang; Huang, Hongxia; Ping, Jialun

    The extended quark delocalization color screening model, which incorporates Goldstone-boson exchange with a soft cutoff, and the chiral quark model are employed to perform a systematic dynamical calculation of six-quark systems with strangeness. The two models give similar results, although they have different attraction mechanisms. Compared with the previous calculation of the extended quark delocalization color screening model, in which the Goldstone bosons are introduced with a hard cutoff, the present calculation obtains slightly larger binding energies for most of the states. However, the conclusions are the same. The calculations show that the NΩ state with IJ = 1/2, 2 is a good dibaryon candidate with narrow width, and the ΩΩ state with IJ = 00 is a stable dibaryon against the strong interaction. The calculations also reveal several other possible dibaryon candidates with high angular momentum, ΔΣ*(1/2, 3), ΔΞ*(1, 3), etc. These states may have widths too large to be observed experimentally.

  16. A simple generative model of collective online behavior.

    PubMed

    Gleeson, James P; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A; Reed-Tsochas, Felix

    2014-07-22

    Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates--even when using purely observational data without experimental design--that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior.
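
    A toy version of a two-mechanism choice model in the spirit of the one described above: each new adoption picks an application with probability proportional to a mixture of its recent adoptions and its cumulative popularity. The mixing weight and the window length are assumptions, not the paper's fitted values.

    ```python
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(4)
    n_apps, n_adoptions = 50, 200000
    window = 500           # how many of the most recent adoptions count as "recent" (assumed)
    w_recent = 0.9         # weight on the recent-popularity mechanism (assumed)

    cumulative = np.ones(n_apps)            # cumulative installs, seeded at 1 to avoid zeros
    recent = deque(maxlen=window)           # sliding window of recent choices
    recent_counts = np.ones(n_apps)         # seeded at 1 per app as a smoothing prior

    for _ in range(n_adoptions):
        p = (w_recent * recent_counts / recent_counts.sum()
             + (1.0 - w_recent) * cumulative / cumulative.sum())
        app = rng.choice(n_apps, p=p / p.sum())
        cumulative[app] += 1.0
        if len(recent) == window:           # the oldest choice drops out of the window
            recent_counts[recent[0]] -= 1.0
        recent.append(app)
        recent_counts[app] += 1.0

    print("top cumulative shares:", np.sort(cumulative / cumulative.sum())[-5:])
    ```

    Sweeping w_recent between 0 and 1 in this sketch is the analogue of the comparison the abstract describes between cumulative-popularity-driven and recency-driven dynamics.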

  17. A simple generative model of collective online behavior

    PubMed Central

    Gleeson, James P.; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A.; Reed-Tsochas, Felix

    2014-01-01

    Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates—even when using purely observational data without experimental design—that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior. PMID:25002470

  18. Enantioresolution in electrokinetic chromatography-complete filling technique using sulfated gamma-cyclodextrin. Software-free topological anticipation.

    PubMed

    Escuder-Gilabert, Laura; Martín-Biosca, Yolanda; Medina-Hernández, María José; Sagrado, Salvador

    2016-10-07

    Few papers have tried to predict the resolution ability of chiral selectors in capillary electrophoresis for the separation of the enantiomers of chiral compounds. In a previous work, we used molecular information available on-line to establish enantioresolution levels of basic compounds using highly sulfated β-CD (HS-β-CD) as chiral selector in electrokinetic chromatography-complete filling technique (EKC-CFT). The present study is a continuation of this previous work, introducing some novelties. In this work, the ability of sulfated γ-cyclodextrin (S-γ-CD) as chiral selector in EKC-CFT is modelled for the first time. Thirty-three structurally unrelated cationic and neutral compounds (drugs and pesticides) are studied. Categorical enantioresolution levels (RsC, 0 or 1) are assigned from experimental enantioresolution values obtained at different S-γ-CD concentrations. Novel topological parameters connected to the chiral carbon (C*-parameters) are introduced. Four C*-parameters and a topological parameter of the whole molecule (aromatic atom count) are the most important variables according to a discriminant partial least squares-variable selection process. This suggests the preponderance of the topology adjacent to the chiral carbon in anticipating the RsC levels. A software-free anticipation protocol for new molecules is proposed. Over the current set of molecules evaluated, 100% correct anticipations (resolved and non-resolved compounds) are obtained, while the anticipation of some compounds remains undetermined. A criterion is introduced to alert on compounds which should not be anticipated. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Extremism without extremists: Deffuant model with emotions

    NASA Astrophysics Data System (ADS)

    Sobkowicz, Pawel

    2015-03-01

    The frequent occurrence of extremist views in many social contexts, often growing from small minorities to an almost total majority, poses a significant challenge for democratic societies. The phenomenon can be described within the sociophysical paradigm. We present a modified version of the continuous bounded confidence opinion model, including a simple description of the influence of emotions on tolerances, and eventually on the evolution of opinions. Allowing for a psychologically based correlation between extreme opinions, high emotions and low tolerance for other people's views leads to quick dominance of the extreme views within the studied model, without introducing a special class of agents, as has been done in previous works. This dominance occurs even if the initial number of people with extreme opinions is very small. Possible suggestions related to mitigation of the process are briefly discussed.
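
    A compact sketch of a bounded-confidence (Deffuant-type) update in which each agent's tolerance shrinks as its emotional arousal grows. The specific coupling between opinion extremity, emotion, and tolerance below is a guess at the qualitative mechanism, not the author's equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_agents, n_pairs = 500, 200000
    mu = 0.3                         # convergence speed of the classical Deffuant update
    opinion = rng.uniform(-1.0, 1.0, n_agents)

    def tolerance(op):
        """Assumed coupling: extreme opinions imply high emotion and hence low tolerance."""
        emotion = np.abs(op)                     # arousal grows with opinion extremity
        return 0.5 * (1.0 - 0.9 * emotion)       # tolerance shrinks from 0.5 toward 0.05

    for _ in range(n_pairs):
        i, j = rng.integers(0, n_agents, 2)
        if i == j:
            continue
        gap = opinion[j] - opinion[i]
        # Bounded confidence: agents only interact if they are mutually within tolerance.
        if abs(gap) < tolerance(opinion[i]) and abs(gap) < tolerance(opinion[j]):
            opinion[i] += mu * gap
            opinion[j] -= mu * gap

    print(f"fraction of near-extreme opinions: {(np.abs(opinion) > 0.8).mean():.2f}")
    ```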

  20. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  1. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  2. Antineutrino Charged-Current Reactions on Hydrocarbon with Low Momentum Transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gran, R.; Betancourt, M.; Elkins, M.

    We report on multi-nucleon effects in low momentum transfer (< 0.8 GeV/c) anti-neutrino interactions on scintillator. These data are from the 2010-11 anti-neutrino phase of the MINERvA experiment at Fermilab. The hadronic energy spectrum of this inclusive sample is well-described when a screening effect at low energy transfer and a two-nucleon knockout process are added to a relativistic Fermi gas model of quasi-elastic, Δ resonance, and higher resonance processes. In this analysis, model elements introduced to describe previously published neutrino results have quantitatively similar benefits for this anti-neutrino sample. We present the results as a double-differential cross section to accelerate investigation of alternate models for anti-neutrino scattering off nuclei.

  3. Anti-Neutrino Charged-Current Reactions on Scintillator with Low Momentum Transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gran, R.; et al.

    2018-03-25

    We report on multi-nucleon effects in low momentum transfer (< 0.8 GeV/c) anti-neutrino interactions on scintillator. These data are from the 2010-11 anti-neutrino phase of the MINERvA experiment at Fermilab. The hadronic energy spectrum of this inclusive sample is well-described when a screening effect at low energy transfer and a two-nucleon knockout process are added to a relativistic Fermi gas model of quasi-elastic, Δ resonance, and higher resonance processes. In this analysis, model elements introduced to describe previously published neutrino results have quantitatively similar benefits for this anti-neutrino sample. We present the results as a double-differential cross section to accelerate investigation of alternate models for anti-neutrino scattering off nuclei.

  4. The pH Dependence of Saccharides' Influence on Thermal Denaturation of Two Model Proteins Supports an Excluded Volume Model for Stabilization Generalized to Allow for Intramolecular Electrostatic Interactions*

    PubMed Central

    Beg, Ilyas; Islam, Asimul; Hassan, Md. Imtaiyaz; Ahmad, Faizan

    2017-01-01

    The reversible thermal denaturation of apo α-lactalbumin (α-LA) and lysozyme was measured in the absence and presence of multiple concentrations of each of seven saccharides (glucose, galactose, fructose, sucrose, trehalose, raffinose, and stachyose) at multiple pH values. It was observed that with increasing pH, the absolute stability of α-LA decreased, whereas the stabilizing effect per mole of all saccharides increased, and that the absolute stability of lysozyme increased, whereas the stabilizing effect per mole of all saccharides decreased. All of the data may be accounted for quantitatively by straightforward electrostatic generalization of a previously introduced coarse-grained model for stabilization of proteins by sugars. PMID:27909048

  5. Anti-Neutrino Charged-Current Reactions on Scintillator with Low Momentum Transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gran, R.; et al.

    2018-06-01

    We report on multi-nucleon effects in low momentum transfer (< 0.8 GeV/c) anti-neutrino interactions on scintillator. These data are from the 2010-11 anti-neutrino phase of the MINERvA experiment at Fermilab. The hadronic energy spectrum of this inclusive sample is well-described when a screening effect at low energy transfer and a two-nucleon knockout process are added to a relativistic Fermi gas model of quasi-elastic, Δ resonance, and higher resonance processes. In this analysis, model elements introduced to describe previously published neutrino results have quantitatively similar benefits for this anti-neutrino sample. We present the results as a double-differential cross section to accelerate investigation of alternate models for anti-neutrino scattering off nuclei.

  6. Antineutrino Charged-Current Reactions on Hydrocarbon with Low Momentum Transfer

    DOE PAGES

    Gran, R.; Betancourt, M.; Elkins, M.; ...

    2018-06-01

    We report on multi-nucleon effects in low momentum transfer (< 0.8 GeV/c) anti-neutrino interactions on scintillator. These data are from the 2010-11 anti-neutrino phase of the MINERvA experiment at Fermilab. The hadronic energy spectrum of this inclusive sample is well-described when a screening effect at low energy transfer and a two-nucleon knockout process are added to a relativistic Fermi gas model of quasi-elastic, Δ resonance, and higher resonance processes. In this analysis, model elements introduced to describe previously published neutrino results have quantitatively similar benefits for this anti-neutrino sample. We present the results as a double-differential cross section to accelerate investigation of alternate models for anti-neutrino scattering off nuclei.

  7. A method to calculate fission-fragment yields Y(Z,N) versus proton and neutron number in the Brownian shape-motion model

    DOE PAGES

    Moller, Peter; Ichikawa, Takatoshi

    2015-12-23

    In this study, we propose a method to calculate the two-dimensional (2D) fission-fragment yield Y(Z,N) versus both proton and neutron number, with inclusion of odd-even staggering effects in both variables. The approach is to use Brownian shape-motion on a macroscopic-microscopic potential-energy surface which, for a particular compound system, is calculated versus four shape variables: elongation (quadrupole moment Q2), neck d, left nascent fragment spheroidal deformation εf1, and right nascent fragment deformation εf2, plus two asymmetry variables, namely the proton and neutron numbers in each of the two fragments. The extension of previous models 1) introduces a method to calculate this generalized potential-energy function and 2) allows the correlated transfer of nucleon pairs in one step, in addition to sequential transfer. In the previous version the potential energy was calculated as a function of Z and N of the compound system and its shape, including the asymmetry of the shape. We outline here how to generalize the model from the “compound-system” model to a model where the emerging fragment proton and neutron numbers also enter, over and above the compound system composition.

  8. Discrete-Time Mapping for an Impulsive Goodwin Oscillator with Three Delays

    NASA Astrophysics Data System (ADS)

    Churilov, Alexander N.; Medvedev, Alexander; Zhusubaliyev, Zhanybai T.

    A popular biomathematics model of the Goodwin oscillator has been previously generalized to a more biologically plausible construct by introducing three time delays to portray the transport phenomena arising due to the spatial distribution of the model states. The present paper addresses a similar conversion of an impulsive version of the Goodwin oscillator that has found application in mathematical modeling, e.g. in endocrine systems with pulsatile hormone secretion. While the cascade structure of the linear continuous part pertinent to the Goodwin oscillator is preserved in the impulsive Goodwin oscillator, the static nonlinear feedback of the former is substituted with a pulse modulation mechanism thus resulting in hybrid dynamics of the closed-loop system. To facilitate the analysis of the mathematical model under investigation, a discrete mapping propagating the continuous state variables through the firing times of the impulsive feedback is derived. Due to the presence of multiple time delays in the considered model, previously developed mapping derivation approaches are not applicable here and a novel technique is proposed and applied. The mapping captures the dynamics of the original hybrid system and is instrumental in studying complex nonlinear phenomena arising in the impulsive Goodwin oscillator. A simulation example is presented to demonstrate the utility of the proposed approach in bifurcation analysis.

  9. A generalized multi-dimensional mathematical model for charging and discharging processes in a supercapacitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A

    A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double-layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume-averaging process, is a widely used concept for the multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing any upscaled parameters from lower length-scale simulations and experiments. Owing to the irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrated how this new formulation can be applied to non-planar supercapacitors.

  10. Coupling fast fluid dynamics and multizone airflow models in Modelica Buildings library to simulate the dynamics of HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Wei; Sevilla, Thomas Alonso; Zuo, Wangda

    Historically, multizone models are widely used in building airflow and energy performance simulations due to their fast computing speed. However, multizone models assume that the air in a room is well mixed, consequently limiting their application. In specific rooms where this assumption fails, the use of computational fluid dynamics (CFD) models may be an alternative option. Previous research has mainly focused on coupling CFD models and multizone models to study airflow in large spaces. While significant, most of these analyses did not consider the coupled simulation of the building airflow with the building's Heating, Ventilation, and Air-Conditioning (HVAC) systems. This paper tries to fill the gap by integrating the models for HVAC systems with coupled multizone and CFD simulations for airflows, using the Modelica simulation platform. To improve the computational efficiency, we incorporated a simplified CFD model named fast fluid dynamics (FFD). We first introduce the data synchronization strategy and implementation in Modelica. Then, we verify the implementation using two case studies involving an isothermal and a non-isothermal flow by comparing model simulations to experiment data. Afterward, we study another three cases that are deemed more realistic. This is done by attaching a variable air volume (VAV) terminal box and a VAV system to previous flows to assess the capability of the models in studying the dynamic control of HVAC systems. Finally, we discuss further research needs on the coupled simulation using the models.

  11. A mixed SIR-SIS model to contain a virus spreading through networks with two degrees

    NASA Astrophysics Data System (ADS)

    Essouifi, Mohamed; Achahbar, Abdelfattah

    Because the “nodes” and “links” of real networks are heterogeneous, we borrow the recently introduced idea of a reduced scale-free network to model the prevalence of computer viruses throughout the Internet. The purpose of this paper is to extend the previous deterministic two-subchain Susceptible-Infected-Susceptible (SIS) model into a mixed Susceptible-Infected-Recovered and Susceptible-Infected-Susceptible (SIR-SIS) model to contain computer viruses spreading over networks with two degrees. Moreover, we develop its stochastic counterpart. Due to the high protection and security applied to the hub class, we suggest treating it with an SIR epidemic model rather than an SIS one. The analytical study reveals that the proposed model admits a stable viral equilibrium. It is shown numerically that the mean dynamic behavior of the stochastic model is in agreement with the deterministic one. Unlike the infection densities i2 and i, which both tend to a viral equilibrium for both approaches as in the previous study, i1 tends to the virus-free equilibrium. Furthermore, since a proportion of infectives recover, the global infection density i is reduced; the permanent presence of viruses in the network is therefore due to the lower-degree node class. Many suggestions are put forward for containing virus propagation and minimizing its damage.
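
    A rough sketch of the two-subchain structure described above: an SIR subchain for the highly protected hub class and an SIS subchain for the lower-degree class, coupled through a shared force of infection. The coupling form and all rate constants are assumptions made only for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Class 1: hubs, treated as SIR (recovered hubs stay protected).
    # Class 2: lower-degree nodes, treated as SIS (no lasting immunity).
    beta1, beta2 = 0.3, 0.4      # infection rates (assumed)
    gamma1, gamma2 = 0.2, 0.1    # recovery rates (assumed)
    c12 = 0.5                    # cross-class contact weight (assumed)

    def rhs(t, y):
        s1, i1, r1, s2, i2 = y
        force1 = beta1 * (i1 + c12 * i2)     # force of infection felt by hubs
        force2 = beta2 * (i2 + c12 * i1)     # force of infection felt by low-degree nodes
        return [-force1 * s1,
                force1 * s1 - gamma1 * i1,
                gamma1 * i1,
                -force2 * s2 + gamma2 * i2,
                force2 * s2 - gamma2 * i2]

    y0 = [0.99, 0.01, 0.0, 0.99, 0.01]       # densities within each class
    sol = solve_ivp(rhs, (0.0, 200.0), y0)
    print(f"endemic levels: hubs i1 = {sol.y[1, -1]:.4f}, "
          f"low-degree i2 = {sol.y[4, -1]:.4f}")
    ```

    In this toy version i1 decays toward zero while i2 settles at an endemic level, which mirrors the qualitative outcome described in the abstract.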

  12. Biology, host instar suitability and susceptibility, and interspecific competition of three introduced parasitoids of Paracoccus marginatus (Hemiptera: Pseudococcidae)

    USDA-ARS?s Scientific Manuscript database

    Biology, host stage suitability and susceptibility, and interspecific competition of three previously introduced parasitoids (Acerophagus papayae, Anagyrus loecki, and Pseudleptomastix mexicana) (Hymenoptera: Encyrtidae) of Paracoccus marginatus were studied in the laboratory. Compared to P. mexica...

  13. The Impact of Water Loading on Estimates of Postglacial Decay Times in Hudson Bay

    NASA Astrophysics Data System (ADS)

    Han, H. K.; Gomez, N. A.

    2016-12-01

    Ongoing glacial isostatic adjustment (GIA) due to surface loading (ice and water) variations since the Last Glacial Maximum (LGM) has been contributing to sea level changes globally throughout the Holocene, especially in regions like Canada that were heavily glaciated during the LGM. The spatial and temporal distribution of GIA and relative sea level change is attributed to the ice history and the rheological structure of the solid Earth, both of which are uncertain. It has been shown that relative sea level curves in previously glaciated regions follow an exponential-like form, and the post glacial decay times associated with that form have weak sensitivity to the details of the ice loading history (Andrews 1970, Walcott 1980, Mitrovica & Peltier 1995). Post glacial decay time estimates may therefore be used to constrain the Earth's structure and improve GIA predictions. However, estimates of decay times in Hudson Bay in the literature differ significantly due to a number of sources of uncertainty and bias (Mitrovica et al. 2000). Previous decay time analyses have not considered the potential bias that surface loading associated with Holocene sea level changes can introduce in decay time estimates derived from nearby relative sea level observations. We explore the spatial patterns of post glacial decay time predictions in previously glaciated regions, and their sensitivity to ice and water loading history. We compute post glacial sea level changes over the last deglaciation from 21 ka to the present associated with the ICE5G (Peltier, 2004) and ICE6G (Argus et al. 2014, Peltier et al. 2015) ice history models. We fit exponential curves to the modeled relative sea level changes, and compute maps of post glacial decay time predictions across North America and the Arctic. In addition, we decompose the modeled relative sea level changes into contributions from water and ice loading effects, and compute the impact of water loading redistribution since the LGM on present day decay times. We show that Holocene water loading in the Hudson Bay may introduce significant bias in decay time estimates and we highlight locations where biases are minimized.

  14. Modeling of thermal storage systems in MILP distributed energy resource models

    DOE PAGES

    Steen, David; Stadler, Michael; Cardoso, Gonçalo; ...

    2014-08-04

    Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with lower emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times. Loss calculations are based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables the use of heat pumps for low-temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without the possibility of investing in heat pumps and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.

  15. Theoretical extension and experimental demonstration of spectral compression in second-harmonic generation by Fresnel-inspired binary phase shaping

    NASA Astrophysics Data System (ADS)

    Li, Baihong; Dong, Ruifang; Zhou, Conghua; Xiang, Xiao; Li, Yongfang; Zhang, Shougang

    2018-05-01

    Selective two-photon microscopy and high-precision nonlinear spectroscopy rely on efficient spectral compression at the desired frequency. Previously, a Fresnel-inspired binary phase shaping (FIBPS) method was theoretically proposed for spectral compression of two-photon absorption and second-harmonic generation (SHG) with a square-chirped pulse. Here, we theoretically show that the FIBPS can introduce a negative quadratic frequency phase (negative chirp) by analogy with the spatial-domain phase function of a Fresnel zone plate. Thus, the previous theoretical model can be extended to the case where the pulse is transform-limited and of any symmetrical spectral shape. As an example, we experimentally demonstrate spectral compression in SHG by FIBPS for a Gaussian transform-limited pulse and show good agreement with the theory. Given the fundamental pulse bandwidth, a narrower SHG bandwidth with relatively high intensity can be obtained by simply increasing the number of binary phases. The experimental results also verify that our method is superior to that proposed in [Phys. Rev. A 46, 2749 (1992), 10.1103/PhysRevA.46.2749]. This method will significantly facilitate the applications of selective two-photon microscopy and spectroscopy. Moreover, as it can introduce negative dispersion, it can also be generalized to other applications in the field of dispersion compensation.

  16. Tests of the Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard; Attele, Rohan

    2011-01-01

    Satellite lightning imagers such as the NASA Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) and the future GOES-R Geostationary Lightning Mapper (GLM) are designed to detect total lightning (ground flashes + cloud flashes). However, there is a desire to discriminate ground flashes from cloud flashes from the vantage point of space since this would enhance the overall information content of the satellite lightning data and likely improve its operational and scientific applications (e.g., in severe weather warning, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters (one of which is the ground flash fraction), a scalar function was minimized by a numerical method. In order to improve this optimization, a Grobner basis solution was introduced to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. In this study, we test the efficacy of the Grobner basis initialization using actual lightning imager measurements and ground flash truth derived from the national lightning network.
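
    A small sketch of the statistical core of the retrieval: maximum-likelihood fitting of a two-component (ground + cloud flash) exponential mixture to simulated optical measurements, with the mixing weight playing the role of the ground flash fraction. The simulated data, the parameterisation, and the starting values are illustrative, and no Grobner-basis initialization is attempted here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    # Simulated optical measurements: a mixture of two exponential populations
    # (demo data only, not satellite observations).
    true_alpha, lam_g, lam_c = 0.3, 2.0, 0.5
    n = 5000
    is_ground = rng.random(n) < true_alpha
    x = np.where(is_ground, rng.exponential(1.0 / lam_g, n), rng.exponential(1.0 / lam_c, n))

    def neg_log_lik(theta):
        """Negative log-likelihood of a two-component exponential mixture."""
        a, lg, lc = theta
        if not (0.0 < a < 1.0 and lg > 0.0 and lc > 0.0):
            return np.inf
        pdf = a * lg * np.exp(-lg * x) + (1.0 - a) * lc * np.exp(-lc * x)
        return -np.sum(np.log(pdf))

    res = minimize(neg_log_lik, x0=[0.5, 1.0, 1.0], method="Nelder-Mead")
    print(f"estimated ground flash fraction: {res.x[0]:.3f} (true {true_alpha})")
    ```

    The role of a refined initialization scheme, such as the one discussed above, is to replace the arbitrary starting point x0 with analytic estimates so that the numerical minimization converges reliably.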

  17. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
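
    A schematic of steps 2-4 using the lifelines library, collapsed to a single censoring interval for brevity. The data-frame columns, the weight model, and the censoring indicator are placeholders, so this is a sketch of the workflow rather than the authors' analysis.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter
    from sklearn.linear_model import LogisticRegression

    def compare_regimes(df):
        """df is assumed to contain, per subject: 'follow_time', 'event', 'regime'
        (dichotomous regime indicator), 'censored_artificially' (1 when the subject
        stopped following their regime), and baseline confounders 'age' and 'cd4'."""
        # Step 3: model the probability of remaining uncensored and build stabilized
        # inverse-probability weights (a single-interval simplification of IPW).
        conf = df[["age", "cd4"]]
        uncensored = 1 - df["censored_artificially"]
        num = uncensored.mean()
        model = LogisticRegression().fit(conf, uncensored)
        p_uncens = model.predict_proba(conf)[:, 1]
        df = df.assign(ipw=num / p_uncens)

        # Step 4: weighted Cox model on the artificially uncensored records only.
        kept = df[df["censored_artificially"] == 0]
        cph = CoxPHFitter()
        cph.fit(kept[["follow_time", "event", "regime", "age", "cd4", "ipw"]],
                duration_col="follow_time", event_col="event",
                weights_col="ipw", robust=True)
        return cph

    # cph = compare_regimes(subject_df)   # subject_df is hypothetical
    # cph.print_summary()
    ```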

  18. Guided exploration in virtual environments

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas

    2001-06-01

    We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround- screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.

  19. Statistical physics of the spatial Prisoner's Dilemma with memory-aware agents

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2016-02-01

    We introduce an analytical model to study the evolution towards equilibrium in spatial games, with `memory-aware' agents, i.e., agents that accumulate their payoff over time. In particular, we focus our attention on the spatial Prisoner's Dilemma, as it constitutes an emblematic example of a game whose Nash equilibrium is defection. Previous investigations showed that, under opportune conditions, it is possible to reach, in the evolutionary Prisoner's Dilemma, an equilibrium of cooperation. Notably, it seems that mechanisms like motion may lead a population to become cooperative. In the proposed model, we map agents to particles of a gas so that, on varying the system temperature, they randomly move. In doing so, we are able to identify a relation between the temperature and the final equilibrium of the population, explaining how it is possible to break the classical Nash equilibrium in the spatial Prisoner's Dilemma when considering agents able to increase their payoff over time. Moreover, we introduce a formalism to study order-disorder phase transitions in these dynamics. As a result, we highlight that the proposed model allows us to explain analytically how a population whose interactions are based on the Prisoner's Dilemma can reach an equilibrium far from the expected one, also opening the way to defining a direct link between evolutionary game theory and statistical physics.
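
    A toy numerical counterpart of the analytical setting sketched above: agents on a lattice accumulate payoff over repeated Prisoner's Dilemma rounds, imitate better-scoring neighbours, and occasionally swap positions with a probability set by a "temperature". The payoff values, the imitation rule, and the temperature coupling are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    L, rounds = 30, 200
    T, b = 0.1, 1.5                      # "temperature" for random motion, defection temptation
    strategy = rng.random((L, L)) < 0.5  # True = cooperate
    payoff = np.zeros((L, L))            # memory-aware: payoff accumulates over time

    def round_payoff(strat):
        """One PD round against the four nearest neighbours (R=1, T=b, S=0, P=0)."""
        gain = np.zeros(strat.shape)
        for s, ax in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            nb = np.roll(strat, s, ax)
            gain += np.where(strat & nb, 1.0, 0.0)       # mutual cooperation
            gain += np.where(~strat & nb, b, 0.0)        # defector exploits a cooperator
        return gain

    for _ in range(rounds):
        payoff += round_payoff(strategy)
        # Imitation: each site copies one randomly chosen neighbour direction
        # if that neighbour's accumulated payoff is higher.
        s_ax = [(1, 0), (-1, 0), (1, 1), (-1, 1)][rng.integers(4)]
        nb_pay, nb_strat = np.roll(payoff, *s_ax), np.roll(strategy, *s_ax)
        strategy = np.where(nb_pay > payoff, nb_strat, strategy)
        # Temperature-driven motion: with probability ~T, two random agents swap places,
        # carrying their accumulated payoff with them.
        if rng.random() < T:
            idx = rng.integers(0, L * L, 2)
            flat_s, flat_p = strategy.ravel(), payoff.ravel()
            flat_s[idx], flat_p[idx] = flat_s[idx[::-1]], flat_p[idx[::-1]]

    print(f"final cooperation fraction: {strategy.mean():.2f}")
    ```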

  20. Computational method for analysis of polyethylene biodegradation

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

    2003-12-01

    In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
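    A hedged numerical sketch of this kind of linear class-structured model is shown below; the number of classes, the rate values, and the initial profile are invented for illustration and do not reproduce the authors' exact formulation.

    ```python
    # Toy version: the total weight w_i(t) in molecular-weight class i decreases by
    # direct consumption of small molecules and by beta-oxidation, which transfers
    # weight to the next-lower class.
    import numpy as np
    from scipy.integrate import solve_ivp

    n_classes = 50
    rho = np.where(np.arange(n_classes) < 5, 0.2, 0.0)   # consumption only for small classes
    beta = np.full(n_classes, 0.05)                      # hypothetical beta-oxidation rates

    def rhs(t, w):
        dw = -(rho + beta) * w
        dw[:-1] += beta[1:] * w[1:]        # weight arriving from the class above
        return dw

    w0 = np.exp(-0.5 * ((np.arange(n_classes) - 30) / 8.0) ** 2)  # toy GPC-like initial profile
    sol = solve_ivp(rhs, (0.0, 5.0), w0, t_eval=np.linspace(0.0, 5.0, 11))
    print(sol.y[:, -1].sum() / w0.sum())   # fraction of total weight remaining at the end
    ```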

  1. Measures of Biochemical Sociology

    ERIC Educational Resources Information Center

    Snell, Joel; Marsh, Mitchell

    2008-01-01

    In a previous article, the authors introduced a new subfield in sociology that we labeled "biochemical sociology." We introduced the definition of a sociology that encompasses sociological measures, psychological measures, and biological indicators (Snell & Marsh, 2003). In this article, we want to demonstrate a research strategy that would assess…

  2. Developmental time, longevity, and lifetime fertility of three introduced parasitoids of the mealybug Paracoccus marginatus (Hemiptera: Pseudoccidae)

    USDA-ARS?s Scientific Manuscript database

    Developmental time, longevity, and lifetime fertility of three previously introduced parasitoids (Acerophagus papayae Noyes and Schauff, Anagyrus loecki Noyes and Menezes, and Pseudleptomastix mexicana Noyes and Schauff) (Hymenoptera: Encyrtidae) of the mealybug Paracoccus marginatus Williams and Gr...

  3. Effect of 3-D heterogeneous-earth on rheology inference of postseismic model following the 2012 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Pratama, C.; Ito, T.; Sasajima, R.; Tabei, T.; Kimata, F.; Gunawan, E.; Ohta, Y.; Yamashina, T.; Ismail, N.; Muksin, U.; Maulida, P.; Meilano, I.; Nurdin, I.; Sugiyanto, D.; Efendi, J.

    2017-12-01

    Postseismic deformation following the 2012 Indian Ocean earthquake has been modeled in several studies (Han et al. 2015, Hu et al. 2016, Masuti et al. 2016). Although each study used a different method and dataset, the previous studies adopted significantly different Earth structures. Han et al. (2015) ignored the subducting slab beneath Sumatra, while Masuti et al. (2016) neglected the sphericity of the Earth. Hu et al. (2016) incorporated an elastic slab and a spherical Earth but used uniform rigidity in each layer of the model. As a result, the Han et al. (2015) model estimated a Maxwell viscosity one order of magnitude higher than Hu et al. (2016) and a Kelvin viscosity half an order of magnitude lower than the Masuti et al. (2016) model predicted. In the present study, we conduct a quantitative analysis of the effect of each heterogeneous geometry and parameter on rheology inference. We develop heterogeneous three-dimensional spherical-Earth finite element models. We investigate the effect of the subducting slab, the spherical Earth, and three-dimensional Earth rigidity on the estimated lithosphere-asthenosphere rheology beneath the Indian Ocean. A wide range of viscosity structures, from time-independent to time-dependent rheology, was chosen, following the previous studies. In order to evaluate the actual displacement, we compared the model to Global Navigation Satellite System (GNSS) observations. We incorporate the GNSS data from previous studies and introduce a new GNSS site, part of the Indonesian Continuously Operating Reference Stations (InaCORS) located in Sumatra, that has not been used in previous analyses. As a preliminary result, we obtained the effect of the spherical Earth and the elastic slab when assuming a Burgers rheology. The model that incorporates the sphericity of the Earth requires a viscosity about one third of an order of magnitude lower than the model that neglects Earth curvature. The model that includes the elastic slab requires a viscosity half an order of magnitude lower than the model that excludes the elastic slab.

  4. Enhanced stability of car-following model upon incorporation of short-term driving memory

    NASA Astrophysics Data System (ADS)

    Liu, Da-Wei; Shi, Zhong-Ke; Ai, Wen-Huan

    2017-06-01

    Based on the full velocity difference model, a new car-following model is developed to investigate the effect of short-term driving memory on traffic flow in this paper. Short-term driving memory is introduced as the influence factor of the driver's anticipation behavior. The stability condition of the newly developed model is derived and the modified Korteweg-de Vries (mKdV) equation is constructed to describe the traffic behavior near the critical point. Using numerical methods, the evolution of a small perturbation is investigated first. The results show that the new car-following model improves traffic stability relative to the previous models. Starting and braking processes of vehicles at a signalized intersection are also investigated. The numerical simulations illustrate that the new model can successfully describe the driver's anticipation behavior, and that the efficiency and safety of the vehicles passing through the signalized intersection are improved by considering short-term driving memory.
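    For context, a minimal simulation of the baseline full velocity difference model that such extensions start from might look as follows; the optimal-velocity function and parameters are generic textbook choices, and the short-term memory term itself is not reproduced here.

    ```python
    import numpy as np

    def V_opt(dx, v_max=2.0, hc=4.0):          # generic tanh-shaped optimal-velocity function
        return 0.5 * v_max * (np.tanh(dx - hc) + np.tanh(hc))

    def fvd_step(x, v, dt=0.1, kappa=0.41, lam=0.5, road=400.0):
        dx = (np.roll(x, -1) - x) % road       # headway to the preceding car (periodic road)
        dv = np.roll(v, -1) - v
        a = kappa * (V_opt(dx) - v) + lam * dv # full velocity difference acceleration
        return (x + v * dt) % road, v + a * dt

    n = 100
    x = np.linspace(0.0, 396.0, n) + np.random.default_rng(1).normal(0, 0.1, n)  # small perturbation
    v = V_opt(np.full(n, 4.0))
    for _ in range(5000):
        x, v = fvd_step(x, v)
    ```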

  5. Non-local gravity and comparison with observational datasets. II. Updated results and Bayesian model comparison with ΛCDM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dirian, Yves; Foffa, Stefano; Kunz, Martin

    We present a comprehensive and updated comparison with cosmological observations of two non-local modifications of gravity previously introduced by our group, the so-called RR and RT models. We implement the background evolution and the cosmological perturbations of the models in a modified Boltzmann code, using CLASS. We then test the non-local models against the Planck 2015 TT, TE, EE and Cosmic Microwave Background (CMB) lensing data, isotropic and anisotropic Baryonic Acoustic Oscillations (BAO) data, JLA supernovae, H_0 measurements and growth rate data, and we perform Bayesian parameter estimation. We then compare the RR, RT and ΛCDM models, using the Savage-Dickey method. We find that the RT model and ΛCDM perform equally well, while the performance of the RR model with respect to ΛCDM depends on whether or not we include a prior on H_0 based on local measurements.

  6. Computational modeling of high performance steel fiber reinforced concrete using a micromorphic approach

    NASA Astrophysics Data System (ADS)

    Huespe, A. E.; Oliver, J.; Mora, D. F.

    2013-12-01

    A finite element methodology for simulating the failure of high performance fiber reinforced concrete composites (HPFRC), with arbitrarily oriented short fibers, is presented. The composite material model is based on a micromorphic approach. Using the framework provided by this theory, the body configuration space is described through two kinematical descriptors. At the structural level, the displacement field represents the standard kinematical descriptor. Additionally, a morphological kinematical descriptor, the micromorphic field, is introduced. It describes the fiber-matrix relative displacement, or slipping mechanism of the bond, observed at the mesoscale level. In the first part of this paper, we summarize the model formulation of the micromorphic approach presented in a previous work by the authors. In the second part, and as the main contribution of the paper, we address specific issues related to the numerical aspects involved in the computational implementation of the model. The developed numerical procedure is based on a mixed finite element technique. The number of dofs per node changes according to the number of fiber bundles simulated in the composite. A specific solution scheme is therefore proposed to handle the variable number of unknowns in the discrete model. The HPFRC composite model takes into account the important effects produced by concrete fracture. A procedure for simulating quasi-brittle fracture is introduced into the model and is described in the paper. The present numerical methodology is assessed by simulating a selected set of experimental tests, which proves its viability and accuracy in capturing a number of mechanical phenomena interacting at the macro- and mesoscales and leading to failure of HPFRC composites.

  7. Earth horizon modeling and application to static Earth sensors on TRMM spacecraft

    NASA Technical Reports Server (NTRS)

    Keat, J.; Challa, M.; Tracewell, D.; Galal, K.

    1995-01-01

    Data from Earth sensor assemblies (ESA's) often are used in the attitude determination (AD) for both spinning and Earth-pointing spacecraft. The ESA's on previous such spacecraft for which the ground-based AD operation was performed by the Flight Dynamics Division (FDD) used the Earth scanning method. AD on such spacecraft requires a model of the shape of the Earth disk as seen from the spacecraft. AD accuracy requirements often are too severe to permit Earth oblateness to be ignored when modeling disk shape. Section 2 of this paper reexamines and extends the methods for Earth disk shape modeling employed in AD work at FDD for the past decade. A new formulation, based on a more convenient Earth flatness parameter, is introduced, and the geometric concepts are examined in detail. It is shown that the Earth disk can be approximated as an ellipse in AD computations. Algorithms for introducing Earth oblateness into the AD process for spacecraft carrying scanning ESA's have been developed at FDD and implemented into the support systems. The Tropical Rainfall Measurement Mission (TRMM) will be the first spacecraft with AD operation performed at FDD that uses a different type of ESA - namely, a static one - containing four fixed detectors D_i (i = 1 to 4). Section 3 of this paper considers the effect of Earth oblateness on AD accuracy for TRMM. This effect ideally will not induce AD errors on TRMM when data from all four D_i are present. When data from only two or three D_i are available, however, a spherical Earth approximation can introduce errors of 0.05 to 0.30 deg on TRMM. These oblateness-induced errors are eliminated by a new algorithm that uses the results of Section 2 to model the Earth disk as an ellipse.

  8. A position-dependent mass model for the Thomas–Fermi potential: Exact solvability and relation to δ-doped semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; García-Ravelo, Jesús; Pacheco-García, Christian

    We consider the Schrödinger equation in the Thomas–Fermi field, a model that has been used for describing electron systems in δ-doped semiconductors. It is shown that the problem becomes exactly-solvable if a particular effective (position-dependent) mass distribution is incorporated. Orthogonal sets of normalizable bound state solutions are constructed in explicit form, and the associated energies are determined. We compare our results with the corresponding findings on the constant-mass problem discussed by Ioriatti (1990) [13]. -- Highlights: ► We introduce an exactly solvable, position-dependent mass model for the Thomas–Fermi potential. ► Orthogonal sets of solutions to our model are constructed in closed form. ► Relation to delta-doped semiconductors is discussed. ► Explicit subband bottom energies are calculated and compared to results obtained in a previous study.

  9. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
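    In generic form (not specific to CDMs), the two estimators can be written as the inverse observed information and the sandwich A^{-1} B A^{-1} built from per-respondent score contributions; a small illustration, with hypothetical inputs, is given below.

    ```python
    import numpy as np

    def covariance_estimates(scores, observed_information):
        # scores: (n_respondents, n_parameters) array of individual score contributions
        # observed_information: (n_parameters, n_parameters) negative Hessian at the MLE
        A_inv = np.linalg.inv(observed_information)   # inverse observed information estimator
        B = scores.T @ scores                         # empirical cross-product matrix
        sandwich = A_inv @ B @ A_inv                  # robust to mild model misspecification
        return A_inv, sandwich
    ```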

  10. Evaporation in Capillary Porous Media at the Perfect Piston-Like Invasion Limit: Evidence of Nonlocal Equilibrium Effects

    NASA Astrophysics Data System (ADS)

    Attari Moghaddam, Alireza; Prat, Marc; Tsotsas, Evangelos; Kharaghani, Abdolreza

    2017-12-01

    The classical continuum modeling of evaporation in capillary porous media is revisited from pore network simulations of the evaporation process. The computed moisture diffusivity is characterized by a minimum corresponding to the transition between liquid and vapor transport mechanisms, confirming previous interpretations. The study also suggests an explanation for the scatter generally observed in moisture diffusivities obtained from experimental data. The pore network simulations indicate a noticeable nonlocal equilibrium effect, leading to a new interpretation of the vapor pressure-saturation relationship classically introduced to obtain the one-equation continuum model of evaporation. The latter should not be understood as a desorption isotherm, as classically considered, but rather as a signature of a nonlocal equilibrium effect. The main outcome of this study is therefore that a nonlocal equilibrium two-equation model must be considered for improving the continuum modeling of evaporation.

  11. Noise-induced multistability in the regulation of cancer by genes and pseudogenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrosyan, K. G., E-mail: pkaren@phys.sinica.edu.tw; Hu, Chin-Kun, E-mail: huck@phys.sinica.edu.tw; National Center for Theoretical Sciences, National Tsing Hua University, Hsinchu 30013, Taiwan

    2016-07-28

    We extend a previously introduced model of stochastic gene regulation of cancer to a nonlinear case having both gene and pseudogene messenger RNAs (mRNAs) self-regulated. The model consists of stochastic Boolean genetic elements and possesses noise-induced multistability (multimodality). We obtain analytical expressions for probabilities for the case of a constant but finite number of microRNA molecules which act as a noise source for the competing gene and pseudogene mRNAs. The probability distribution functions display both the global bistability regime as well as even-odd number oscillations for a certain range of model parameters. Statistical characteristics of the mRNA level fluctuations are evaluated. The obtained results of the extended model advance our understanding of the process of stochastic gene and pseudogene expression that is crucial in the regulation of cancer.

  12. A step function model to evaluate the real monetary value of man-sievert with real GDP.

    PubMed

    Na, Seong H; Kim, Sun G

    2009-01-01

    For use in a cost-benefit analysis to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the man-sievert monetary value in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practices. GDP deflators allow the model to reflect the economic situation of the society. Finally, we suggest that the Korean model can be generalized simply to other countries without normalizing any country-specific factors.

  13. Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.

    PubMed

    Schroder, Kai; Zinke, Arno; Klein, Reinhard

    2015-02-01

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

  14. Some Aspects of Advanced Tokamak Modeling in DIII-D

    NASA Astrophysics Data System (ADS)

    St John, H. E.; Petty, C. C.; Murakami, M.; Kinsey, J. E.

    2000-10-01

    We extend previous work (M. Murakami, et al., General Atomics Report GA-A23310 (1999)) done on time dependent DIII-D advanced tokamak simulations by introducing theoretical confinement models rather than relying on power balance derived transport coefficients. We explore using NBCD and off axis ECCD together with a self-consistent aligned bootstrap current, driven by the internal transport barrier dynamics generated with the GLF23 confinement model, to shape the hollow current profile and to maintain MHD stable conditions. Our theoretical modeling approach uses measured DIII-D initial conditions to start off the simulations in a smooth, consistent manner. This mitigates the troublesome long-lived perturbations in the ohmic current profile that are normally caused by inconsistent initial data. To achieve this goal our simulation uses a sequence of time dependent eqdsks generated autonomously by the EFIT MHD equilibrium code in analyzing experimental data to supply the history for the simulation.

  15. Study on the high speed scramjet characteristics at Mach 10 to 15 flight condition

    NASA Astrophysics Data System (ADS)

    Takahashi, M.; Itoh, K.; Tanno, H.; Komuro, T.; Sunami, T.; Sato, K.; Ueda, S.

    A scramjet engine model, designed to establish steady and strong combustion at free-stream conditions corresponding to Mach 12 flight, was tested in a large free-piston driven shock tunnel. Combustion tests of a previous engine model showed that the combustion heat release obtained in the combustor was not sufficient to maintain strong combustion. For a new scramjet engine model, the inlet compression ratio was increased to raise the static temperature and density of the flow at the combustor entrance. As a result of the aerodynamic design change, the pressure rise due to combustion increased and the duration of strong combustion conditions in the combustor was extended. A hyper-mixer injector designed to enhance mixing and combustion by introducing streamwise vortices was applied to the new engine model. The results showed that the hyper-mixer injector was very effective in promoting combustion heat release and establishing steady and strong combustion in the combustor.

  16. A Temperature-Dependent Phase-Field Model for Phase Separation and Damage

    NASA Astrophysics Data System (ADS)

    Heinemann, Christian; Kraus, Christiane; Rocca, Elisabetta; Rossi, Riccarda

    2017-07-01

    In this paper we study a model for phase separation and damage in thermoviscoelastic materials. The main novelty of the paper consists in the fact that, in contrast with previous works in the literature concerning phase separation and damage processes in elastic media, in our model we encompass thermal processes, nonlinearly coupled with the damage, concentration and displacement evolutions. More particularly, we prove the existence of "entropic weak solutions", resorting to a solvability concept first introduced in Feireisl (Comput Math Appl 53:461-490, 2007) in the framework of Fourier-Navier-Stokes systems and then recently employed in Feireisl et al. (Math Methods Appl Sci 32:1345-1369, 2009) and Rocca and Rossi (Math Models Methods Appl Sci 24:1265-1341, 2014) for the study of PDE systems for phase transition and damage. Our global-in-time existence result is obtained by passing to the limit in a carefully devised time-discretization scheme.

  17. Mergers of Black-Hole Binaries with Aligned Spins: Waveform Characteristics

    NASA Technical Reports Server (NTRS)

    Kelly, Bernard J.; Baker, John G.; vanMeter, James R.; Boggs, William D.; McWilliams, Sean T.; Centrella, Joan

    2011-01-01

    "We apply our gravitational-waveform analysis techniques, first presented in the context of nonspinning black holes of varying mass ratio [1], to the complementary case of equal-mass spinning black-hole binary systems. We find that, as with the nonspinning mergers, the dominant waveform modes phases evolve together in lock-step through inspiral and merger, supporting the previous model of the binary system as an adiabatically rigid rotator driving gravitational-wave emission - an implicit rotating source (IRS). We further apply the late-merger model for the rotational frequency introduced in [1], along with a new mode amplitude model appropriate for the dominant (2, plus or minus 2) modes. We demonstrate that this seven-parameter model performs well in matches with the original numerical waveform for system masses above - 150 solar mass, both when the parameters are freely fit, and when they are almost completely constrained by physical considerations."

  18. The random field Blume-Capel model revisited

    NASA Astrophysics Data System (ADS)

    Santos, P. V.; da Costa, F. A.; de Araújo, J. M.

    2018-04-01

    We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works, we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H - T and D - T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effects such as those seen in similar models.
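    For readers unfamiliar with the setup, a hedged sketch of the generic mean-field recipe for a spin-1 model with a symmetric bimodal random field is given below; it iterates a single-site self-consistency equation and does not reproduce the paper's full free-energy analysis. All parameter values are illustrative.

    ```python
    import numpy as np

    def magnetization(T, D, h0, zJ=1.0, m0=0.8, n_iter=5000):
        # spin-1 (S = -1, 0, +1) single-site self-consistency, averaged over a field of +/- h0
        beta = 1.0 / T
        m = m0
        for _ in range(n_iter):
            m_new = 0.0
            for H in (+h0, -h0):
                heff = zJ * m + H
                m_new += 0.5 * 2 * np.sinh(beta * heff) / (np.exp(beta * D) + 2 * np.cosh(beta * heff))
            m = 0.5 * m + 0.5 * m_new            # damped fixed-point iteration
        return m

    # crude temperature scan for one crystal field and random-field strength
    for T in np.linspace(0.1, 1.0, 10):
        print(round(T, 2), round(magnetization(T, D=0.3, h0=0.2), 3))
    ```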

  19. Touchscreen learning deficits in Ube3a, Ts65Dn and Mecp2 mouse models of neurodevelopmental disorders with intellectual disabilities.

    PubMed

    Leach, P T; Crawley, J N

    2017-12-20

    Mutant mouse models of neurodevelopmental disorders with intellectual disabilities provide useful translational research tools, especially in cases where robust cognitive deficits are reproducibly detected. However, motor, sensory and/or health issues consequent to the mutation may introduce artifacts that preclude testing in some standard cognitive assays. Touchscreen learning and memory tasks in small operant chambers have the potential to circumvent these confounds. Here we use touchscreen visual discrimination learning to evaluate performance in the maternally derived Ube3a mouse model of Angelman syndrome, the Ts65Dn trisomy mouse model of Down syndrome, and the Mecp2 Bird mouse model of Rett syndrome. Significant deficits in acquisition of a 2-choice visual discrimination task were detected in both Ube3a and Ts65Dn mice. Procedural control measures showed no genotype differences during pretraining phases or during acquisition. Mecp2 males did not survive long enough for touchscreen training, consistent with previous reports. Most Mecp2 females failed on pretraining criteria. Significant impairments on Morris water maze spatial learning were detected in both Ube3a and Ts65Dn, replicating previous findings. Abnormalities on rotarod in Ube3a, and on open field in Ts65Dn, replicating previous findings, may have contributed to the observed acquisition deficits and swim speed abnormalities during water maze performance. In contrast, these motor phenotypes do not appear to have affected touchscreen procedural abilities during pretraining or visual discrimination training. Our findings of slower touchscreen learning in 2 mouse models of neurodevelopmental disorders with intellectual disabilities indicate that operant tasks offer promising outcome measures for the preclinical discovery of effective pharmacological therapeutics. © 2017 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  20. AdaBoost-based on-line signature verifier

    NASA Astrophysics Data System (ADS)

    Hongo, Yasunori; Muramatsu, Daigo; Matsumoto, Takashi

    2005-03-01

    Authentication of individuals is rapidly becoming an important issue. The authors previously proposed a pen-input online signature verification algorithm. The algorithm considers a writer's signature as a trajectory of pen position, pen pressure, pen azimuth, and pen altitude that evolves over time, so that it is dynamic and biometric. Many algorithms have been proposed and reported to achieve accuracy for on-line signature verification, but setting the threshold value for these algorithms is a problem. In this paper, we introduce a user-generic model generated by AdaBoost, which resolves this problem. When user-specific models (one model for each user) are used for signature verification problems, we need to generate the models using only genuine signatures. Forged signatures are not available because imposters do not give forged signatures for training in advance. However, by introducing a user-generic model we can make use of other users' forged signatures in addition to the genuine signatures for learning. AdaBoost is a well-known classification algorithm that makes its final decision based on the sign of the output value, so it is not necessary to set a threshold value. A preliminary experiment is performed on a database consisting of data from 50 individuals. This set consists of western-alphabet-based signatures provided by a European research group. In this experiment, our algorithm gives an FRR of 1.88% and an FAR of 1.60%. Since no fine-tuning was done, this preliminary result looks very promising.
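    A hedged sketch of the user-generic idea is given below: dissimilarity features between a questioned signature and the claimed user's references feed an AdaBoost classifier, and the sign of its score decides acceptance, so no threshold needs tuning. The feature extraction, data, and classifier settings are placeholders, not the authors' system.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def dissimilarity_features(questioned, references):
        # questioned: (T, 4) trajectory of pen x, y, pressure, azimuth (hypothetical layout)
        # references: list of such trajectories from the claimed user
        feats = []
        for ref in references:
            n = min(len(questioned), len(ref))
            feats.append(np.mean(np.abs(questioned[:n] - ref[:n]), axis=0))  # crude distance
        return np.mean(feats, axis=0)

    # In practice, rows of X would come from dissimilarity_features pooled over many users
    # (genuine attempts labeled 1, forgeries labeled 0); synthetic stand-in data used here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] < 0).astype(int)
    clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
    accept = clf.decision_function(X) > 0       # sign of the boosted score decides accept/reject
    ```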

  1. Edge grouping combining boundary and region information.

    PubMed

    Stahl, Joachim S; Wang, Song

    2007-10-01

    This paper introduces a new edge-grouping method to detect perceptually salient structures in noisy images. Specifically, we define a new grouping cost function in a ratio form, where the numerator measures the boundary proximity of the resulting structure and the denominator measures the area of the resulting structure. This area term introduces a preference towards detecting larger-size structures and, therefore, makes the resulting edge grouping more robust to image noise. To find the optimal edge grouping with the minimum grouping cost, we develop a special graph model with two different kinds of edges and then reduce the grouping problem to finding a special kind of cycle in this graph with a minimum cost in ratio form. This optimal cycle-finding problem can be solved in polynomial time by a previously developed graph algorithm. We implement this edge-grouping method, test it on both synthetic data and real images, and compare its performance against several available edge-grouping and edge-linking methods. Furthermore, we discuss several extensions of the proposed method, including the incorporation of the well-known grouping cues of continuity and intensity homogeneity, introducing a factor to balance the contributions from the boundary and region information, and the prevention of detecting self-intersecting boundaries.

  2. Efficiency and establishment of three introduced parasitoids of the mealybug Paracoccus marginatus (Hemiptera: Pseudococcidae)

    USDA-ARS?s Scientific Manuscript database

    A study on the efficiency and establishment of three previously introduced parasitoids (Acerophagus papayae, Anagyrus loecki, and Pseudleptomastix mexicana) to control the mealybug Paracoccus marginatus was made in 2005 and 2006, at three locations in Homestead (Miami-Dade County), Florida. In each ...

  3. 32 CFR 935.104 - Sentence after a plea of guilty.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... counsel may make any reasonable statement he wishes in mitigation or of previous good character. The prosecution may introduce evidence in aggravation, or of bad character if the accused has introduced evidence of good character. The Court shall then impose any lawful sentence that it considers proper. ...

  4. Durability of Capped Wood Plastic Composites

    Treesearch

    Mark Mankowski; Mark J. Manning; Damien P. Slowik

    2015-01-01

    Manufacturers of wood plastic composites (WPCs) have recently introduced capped decking to their product lines. These new materials have begun to take market share from the previous generation of uncapped products that possessed a homogenous composition throughout the thickness of their cross-section. These capped offerings have been introduced with claims that the...

  5. Requiem for the max rule?

    PubMed

    Ma, Wei Ji; Shen, Shan; Dziugaite, Gintare; van den Berg, Ronald

    2015-11-01

    In tasks such as visual search and change detection, a key question is how observers integrate noisy measurements from multiple locations to make a decision. Decision rules proposed to model this process have fallen into two categories: Bayes-optimal (ideal observer) rules and ad-hoc rules. Among the latter, the maximum-of-outputs (max) rule has been the most prominent. Reviewing recent work and performing new model comparisons across a range of paradigms, we find that in all cases except for one, the optimal rule describes human data as well as or better than every max rule either previously proposed or newly introduced here. This casts doubt on the utility of the max rule for understanding perceptual decision-making. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. The Effect of Tick Size on Trading Volume Share in Two Competing Stock Markets

    NASA Astrophysics Data System (ADS)

    Nagumo, Shota; Shimada, Takashi; Yoshioka, Naoki; Ito, Nobuyasu

    2017-01-01

    The relationship between tick sizes and trading volume shares in competing markets is studied theoretically. By introducing a simple model which is equipped with two markets and non-strategic traders, we analytically calculate the steady states. It is shown that a market with a larger tick size is generally deprived of its share by the competing market. However, if traders' preference for the present market because of its major share is strong enough, the market with a larger tick size has a chance to keep a major share in the steady state. These findings are consistent with the previous results obtained from a more complicated artificial market model and also provide a clear understanding of the basic mechanism of market competition.

  7. Resolution of the threshold fracture energy paradox for solid particle erosion

    NASA Astrophysics Data System (ADS)

    Peck, Daniel; Volkov, Grigory; Mishuris, Gennady; Petrov, Yuri

    2016-12-01

    Previous models of a single erosion impact, for a rigid axisymmetric indenter defined by the shape function ?, have shown that a critical shape parameter ? exists which determines the behaviour of the threshold fracture energy. However, repeated investigations into this parameter have found no physical explanation for its value. Again utilising the notion of incubation time prior to fracture, this paper attempts to provide a physical explanation of this phenomenon by introducing a supersonic stage into the model. The final scheme allows the effect of waves along the indenter's contact area to be taken into account. The effect of this physical characteristic of the impact on the threshold fracture energy and the critical shape parameter ? is investigated and discussed.

  8. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  9. Efficiency at maximum power of a laser quantum heat engine enhanced by noise-induced coherence

    NASA Astrophysics Data System (ADS)

    Dorfman, Konstantin E.; Xu, Dazhi; Cao, Jianshu

    2018-04-01

    Quantum coherence has been demonstrated in various systems including organic solar cells and solid state devices. In this article, we report the lower and upper bounds for the performance of quantum heat engines determined by the efficiency at maximum power. Our prediction based on the canonical three-level Scovil and Schulz-Dubois maser model strongly depends on the ratio of system-bath couplings for the hot and cold baths and recovers the theoretical bounds established previously for the Carnot engine. Further, introducing a fourth level to the maser model can enhance the maximal power and its efficiency, thus demonstrating the importance of quantum coherence in the thermodynamics and operation of the heat engines beyond the classical limit.

  10. Measurements of vocal fold tissue viscoelasticity: Approaching the male phonatory frequency range

    NASA Astrophysics Data System (ADS)

    Chan, Roger W.

    2004-06-01

    Viscoelastic shear properties of human vocal fold tissues have been reported previously. However, data have only been obtained at very low frequencies (≤15 Hz). This necessitates data extrapolation to the frequency range of phonation based on constitutive modeling and time-temperature superposition. This study attempted to obtain empirical measurements at higher frequencies with the use of a controlled strain torsional rheometer, with a design of directly controlling input strain that introduced significantly smaller system inertial errors compared to controlled stress rheometry. Linear viscoelastic shear properties of the vocal fold mucosa (cover) from 17 canine larynges were quantified at frequencies of up to 50 Hz. Consistent with previous data, results showed that the elastic shear modulus (G'), viscous shear modulus (G''), and damping ratio (ζ) of the vocal fold mucosa were relatively constant across 0.016-50 Hz, whereas the dynamic viscosity (η') decreased monotonically with frequency. Constitutive characterization of the empirical data by a quasilinear viscoelastic model and a statistical network model demonstrated trends of viscoelastic behavior at higher frequencies generally following those observed at lower frequencies. These findings supported the use of controlled strain rheometry for future investigations of the viscoelasticity of vocal fold tissues and phonosurgical biomaterials at phonatory frequencies.

  11. Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.

    2015-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise™ grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.

  12. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    PubMed Central

    Vesperini, Fabio; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as the activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% in average F-measure over the three databases. PMID:28182121
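    The general recipe described can be illustrated with a compact recurrent autoencoder trained only on normal material, flagging frames whose reconstruction error is large; the PyTorch architecture, training loop, and threshold below are illustrative stand-ins rather than the paper's configuration.

    ```python
    import torch
    import torch.nn as nn

    class LSTMAutoencoder(nn.Module):
        def __init__(self, n_features=40, hidden=128):
            super().__init__()
            self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_features)

        def forward(self, x):                       # x: (batch, time, n_features)
            z, _ = self.encoder(x)
            y, _ = self.decoder(z)
            return self.out(y)

    model = LSTMAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    normal_batch = torch.randn(8, 100, 40)          # stand-in for normal spectral frames
    for _ in range(10):                             # train on normal material only
        opt.zero_grad()
        loss = loss_fn(model(normal_batch), normal_batch)
        loss.backward()
        opt.step()

    test = torch.randn(1, 100, 40)
    err = ((model(test) - test) ** 2).mean(dim=(1, 2))   # per-clip reconstruction error
    novel = err > 1.0                               # hypothetical threshold on the error
    ```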

  13. Assessing Economic Modulation of Future Critical Materials Use: The Case of Automotive-Related Platinum Group Metals.

    PubMed

    Zhang, Jingshu; Everson, Mark P; Wallington, Timothy J; Field, Frank R; Roth, Richard; Kirchain, Randolph E

    2016-07-19

    Platinum-group metals (PGMs) are technological and economic enablers of many industrial processes. This important role, coupled with their limited geographic availability, has led to PGMs being labeled as "critical materials". Studies of future PGM flows have focused on trends within material flows or macroeconomic indicators. We complement the previous work by introducing a novel technoeconomic model of substitution among PGMs within the automotive sector (the largest user of PGMs) reflecting the rational response of firms to changing prices. The results from the model support previous conclusions that PGM use is likely to grow, in some cases strongly, by 2030 (approximately 45% for Pd and 5% for Pt), driven by the increasing sales of automobiles. The model also indicates that PGM-demand growth will be significantly influenced by the future Pt-to-Pd price ratio, with swings of Pt and Pd demand of as much as 25% if the future price ratio shifts higher or lower even if it stays within the historic range. Fortunately, automotive catalysts are one of the more effectively recycled metals. As such, with proper policy support, recycling can serve to meet some of this growing demand.

  14. Statistical Patterns of Ionospheric Convection Derived From Mid-Latitude, High-Latitude, and Polar SuperDARN HF Radar Observations

    NASA Astrophysics Data System (ADS)

    Thomas, E. G.; Shepherd, S. G.

    2017-12-01

    Global patterns of ionospheric convection have been widely studied in terms of the interplanetary magnetic field (IMF) magnitude and orientation in both the Northern and Southern Hemispheres using observations from the Super Dual Auroral Radar Network (SuperDARN). The dynamic range of driving conditions under which existing SuperDARN statistical models are valid is currently limited to periods when the high-latitude convection pattern remains above about 60° geomagnetic latitude. Cousins and Shepherd [2010] found this to correspond to intervals when the solar wind electric field Esw < 4.1 mV/m and IMF Bz is negative. Conversely, under northward IMF conditions (Bz > 0) the high-latitude radars often experience difficulties in measuring convection above about 85° geomagnetic latitude. In this presentation, we introduce a new statistical model of ionospheric convection which is valid for much more dominant IMF Bz conditions than was previously possible by including velocity measurements from the newly constructed tiers of radars in the Northern Hemisphere at midlatitudes and in the polar cap. This new model (TS17) is compared to previous statistical models derived from high-latitude SuperDARN observations (RG96, PSR10, CS10) and its impact on instantaneous Map Potential solutions is examined.

  15. Determination of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Dietrich, Daniel L.; Ruff, Gary A.; Urban, David

    2013-01-01

    This paper expands on previous work that examined how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), carbon dioxide build-up or accumulation of other combustion products (e.g. carbon monoxide). The previous work introduced a simplified model that treated the fire primarily as a source of heat and combustion products and sink for oxygen prescribed (input to the model) based on terrestrial standards. The model further treated the spacecraft as a closed system with no capability to vent to the vacuum of space. The model in the present work extends this analysis to more realistically treat the pressure relief system(s) of the spacecraft, include more combustion products (e.g. HF) in the analysis and attempt to predict the fire spread and limiting fire size (based on knowledge of terrestrial fires and the known characteristics of microgravity fires) rather than prescribe them in the analysis. Including the characteristics of vehicle pressure relief systems has a dramatic mitigating effect by eliminating vehicle overpressure for all but very large fires and reducing average gas-phase temperatures.

  16. The Interrelationship between Promoter Strength, Gene Expression, and Growth Rate

    PubMed Central

    Klesmith, Justin R.; Detwiler, Emily E.; Tomek, Kyle J.; Whitehead, Timothy A.

    2014-01-01

    In exponentially growing bacteria, expression of heterologous protein impedes cellular growth rates. Quantitative understanding of the relationship between expression and growth rate will advance our ability to forward engineer bacteria, important for metabolic engineering and synthetic biology applications. Recently, a study described a scaling model based on the optimal allocation of ribosomes for protein translation. This model quantitatively predicts a linear relationship between microbial growth rate and heterologous protein expression with no free parameters. With the aim of validating this model, we have rigorously quantified the fitness cost of gene expression by using a library of synthetic constitutive promoters to drive expression of two separate proteins (eGFP and amiE) in E. coli in different strains and growth media. In all cases, we demonstrate that the fitness cost is consistent with the previous findings. We expand upon the previous theory by introducing a simple promoter activity model to quantitatively predict how basal promoter strength relates to growth rate and protein expression. We then estimate the amount of protein expression needed to support high flux through a heterologous metabolic pathway and predict the sizable fitness cost associated with enzyme production. This work has broad implications across applied biological sciences because it allows for prediction of the interplay between promoter strength, protein expression, and the resulting cost to microbial growth rates. PMID:25286161

  17. Simulation and fabrication of thin film bulk acoustic wave resonator

    NASA Astrophysics Data System (ADS)

    Xixi, Han; Yi, Ou; Zhigang, Li; Wen, Ou; Dapeng, Chen; Tianchun, Ye

    2016-07-01

    In this paper, we present the simulation and fabrication of a thin film bulk acoustic resonator (FBAR). In order to improve the accuracy of simulation, an improved Mason model was introduced to design the resonator by taking the coupling effect between electrode and substrate into consideration. The resonators were fabricated using an eight-inch CMOS process, and the measurements show that the improved Mason model is more accurate than a simple Mason model. The Q_s (Q at series resonance), Q_p (Q at parallel resonance), Q_max and k_t^2 of the FBAR were measured to be 695, 814, 1049, and 7.01%, respectively, showing better performance than previous reports. Project supported by the National Natural Science Foundation of China (Nos. 61274119, 61306141, 61335008) and the Natural Science Foundation of Jiangsu Province (No. BK20131099).

  18. Axial geometrical aberration correction up to 5th order with N-SYLC.

    PubMed

    Hoque, Shahedul; Ito, Hiroyuki; Takaoka, Akio; Nishi, Ryuji

    2017-11-01

    We present N-SYLC (N-fold symmetric line currents) models to correct 5th order axial geometrical aberrations in electron microscopes. In our previous paper, we showed that 3rd order spherical aberration can be corrected by a 3-SYLC doublet. After that correction, mainly the 5th order aberrations remain to limit the resolution. In this paper, we extend the doublet to quadruplet models, also including octupole and dodecapole fields, for correcting these higher order aberrations without introducing any new unwanted ones. We prove the validity of our models by analytical calculations. By computer simulations, we also show that for a beam energy of 5 keV and an initial angle of 10 mrad at the corrector object plane, a beam size of less than 0.5 nm is achieved at the corrector image plane. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Model Equation for Acoustic Nonlinear Measurement of Dispersive Specimens at High Frequency

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Kushibiki, Junichi; Zou, Wei

    2006-10-01

    We present a theoretical model for acoustic nonlinearity measurement of dispersive specimens at high frequency. The nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation governs the nonlinear propagation in the SiO2/specimen/SiO2 multi-layer medium. The dispersion effect is considered in a special manner by introducing the frequency-dependent sound velocity in the KZK equation. Simple analytic solutions are derived by applying the superposition technique of Gaussian beams. The solutions are used to correct the diffraction and dispersion effects in the measurement of acoustic nonlinearity of cottonseed oil in the frequency range of 33-96 MHz. For two different ultrasonic devices, the accuracies of the measurements are improved to ±2.0% and ±1.3% in comparison with ±9.8% and ±2.9% obtained from the previous plane wave model.

  20. Ward Identity and Scattering Amplitudes for Nonlinear Sigma Models

    NASA Astrophysics Data System (ADS)

    Low, Ian; Yin, Zhewei

    2018-02-01

    We present a Ward identity for nonlinear sigma models using generalized nonlinear shift symmetries, without introducing current algebra or coset space. The Ward identity constrains correlation functions of the sigma model such that the Adler zero is guaranteed for S-matrix elements, and gives rise to a subleading single soft theorem that is valid at the quantum level and to all orders in the Goldstone decay constant. For tree amplitudes, the Ward identity leads to a novel Berends-Giele recursion relation as well as an explicit form of the subleading single soft factor. Furthermore, interactions of the cubic biadjoint scalar theory associated with the single soft limit, which were previously discovered using the Cachazo-He-Yuan representation of tree amplitudes, can be seen to emerge from matrix elements of conserved currents corresponding to the generalized shift symmetry.

  1. The mass media destabilizes the cultural homogenous regime in Axelrod's model

    NASA Astrophysics Data System (ADS)

    Peres, Lucas R.; Fontanari, José F.

    2010-02-01

    An important feature of Axelrod's model for culture dissemination or social influence is the emergence of many multicultural absorbing states, despite the fact that the local rules that specify the agents interactions are explicitly designed to decrease the cultural differences between agents. Here we re-examine the problem of introducing an external, global interaction—the mass media—in the rules of Axelrod's model: in addition to their nearest neighbors, each agent has a certain probability p to interact with a virtual neighbor whose cultural features are fixed from the outset. Most surprisingly, this apparently homogenizing effect actually increases the cultural diversity of the population. We show that, contrary to previous claims in the literature, even a vanishingly small value of p is sufficient to destabilize the homogeneous regime for very large lattice sizes.
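    The modified update rule is easy to state in code: with probability p the selected agent interacts with the fixed media vector instead of a lattice neighbor, and otherwise the usual Axelrod overlap-weighted feature copying applies. The sketch below is a compact illustration with invented sizes and parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, F, q, p, steps = 20, 5, 10, 0.01, 200000     # lattice, features, traits, media prob.
    culture = rng.integers(0, q, size=(L, L, F))
    media = rng.integers(0, q, size=F)              # fixed cultural vector of the mass media

    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        if rng.random() < p:
            other = media                           # virtual neighbor with fixed features
        else:
            di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(0, 4)]
            other = culture[(i + di) % L, (j + dj) % L]
        overlap = np.mean(culture[i, j] == other)   # fraction of shared features
        if 0 < overlap < 1 and rng.random() < overlap:
            k = rng.choice(np.flatnonzero(culture[i, j] != other))
            culture[i, j, k] = other[k]             # copy one differing feature
    ```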

  2. Optimization of vascular-targeting drugs in a computational model of tumor growth

    NASA Astrophysics Data System (ADS)

    Gevertz, Jana

    2012-04-01

    A biophysical tool is introduced that seeks to provide a theoretical basis for helping drug design teams assess the most promising drug targets and design optimal treatment strategies. The tool is grounded in a previously validated computational model of the feedback that occurs between a growing tumor and the evolving vasculature. In this paper, the model is particularly used to explore the therapeutic effectiveness of two drugs that target the tumor vasculature: angiogenesis inhibitors (AIs) and vascular disrupting agents (VDAs). Using sensitivity analyses, the impact of VDA dosing parameters is explored, as is the effects of administering a VDA with an AI. Further, a stochastic optimization scheme is utilized to identify an optimal dosing schedule for treatment with an AI and a chemotherapeutic. The treatment regimen identified can successfully halt simulated tumor growth, even after the cessation of therapy.

  3. Prediction of clinical behaviour and treatment for cancers.

    PubMed

    Futschik, Matthias E; Sullivan, Mike; Reeve, Anthony; Kasabov, Nikola

    2003-01-01

    Prediction of clinical behaviour and treatment for cancers is based on the integration of clinical and pathological parameters. Recent reports have demonstrated that gene expression profiling provides a powerful new approach for determining disease outcome. If clinical and microarray data each contain independent information then it should be possible to combine these datasets to gain more accurate prognostic information. Here, we have used existing clinical information and microarray data to generate a combined prognostic model for outcome prediction for diffuse large B-cell lymphoma (DLBCL). A prediction accuracy of 87.5% was achieved. This constitutes a significant improvement compared to the previously most accurate prognostic model with an accuracy of 77.6%. The model introduced here may be generally applicable to the combination of various types of molecular and clinical data for improving medical decision support systems and individualising patient care.

  4. Like-charged protein-polyelectrolyte complexation driven by charge patches

    NASA Astrophysics Data System (ADS)

    Yigit, Cemil; Heyda, Jan; Ballauff, Matthias; Dzubiella, Joachim

    2015-08-01

    We study the pair complexation of a single, highly charged polyelectrolyte (PE) chain (of 25 or 50 monomers) with like-charged patchy protein models (CPPMs) by means of implicit-solvent, explicit-salt Langevin dynamics computer simulations. Our previously introduced set of CPPMs embraces well-defined zero-, one-, and two-patched spherical globules each of the same net charge and (nanometer) size with mono- and multipole moments comparable to those of globular proteins with similar size. We observe large binding affinities between the CPPM and the like-charged PE in the tens of the thermal energy, k_BT, that are favored by decreasing salt concentration and increasing charge of the patch(es). Our systematic analysis shows a clear correlation between the distance-resolved potentials of mean force, the number of ions released from the PE, and CPPM orientation effects. In particular, we find a novel two-site binding behavior for PEs in the case of two-patched CPPMs, where intermediate metastable complex structures are formed. In order to describe the salt-dependence of the binding affinity for mainly dipolar (one-patched) CPPMs, we introduce a combined counterion-release/Debye-Hückel model that quantitatively captures the essential physics of electrostatic complexation in our systems.

  5. Nowcasting sunshine number using logistic modeling

    NASA Astrophysics Data System (ADS)

    Brabec, Marek; Badescu, Viorel; Paulescu, Marius

    2013-04-01

    In this paper, we present a formalized approach to statistical modeling of the sunshine number, a binary indicator of whether the Sun is covered by clouds, introduced previously by Badescu (Theor Appl Climatol 72:127-136, 2002). Our statistical approach is based on a Markov chain and logistic regression and yields fully specified probability models that are relatively easily identified (and their unknown parameters estimated) from a set of empirical data (observed sunshine number and sunshine stability number series). We discuss the general structure of the model and its advantages, demonstrate its performance on real data, and compare its results to a classical ARIMA approach as a competitor. Since the model parameters have a clear interpretation, we also illustrate how, e.g., their inter-seasonal stability can be tested. We conclude with an outlook on future developments oriented toward the construction of models allowing for the practically desirable smooth transition between data observed with different frequencies, and with a short discussion of technical problems that such a goal brings.
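    The core idea can be illustrated by a logistic regression in which the previous sunshine number enters as a predictor, giving a first-order Markov structure; the sketch below uses a synthetic series and a hypothetical covariate and is not the paper's full model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    ssn = (rng.random(2000) < 0.6).astype(int)       # stand-in observed sunshine number series
    hour = np.tile(np.arange(8, 18), 200)            # hypothetical covariate (hour of day)

    X = np.column_stack([ssn[:-1], hour[1:]])        # lagged sunshine number + covariate
    y = ssn[1:]
    model = LogisticRegression().fit(X, y)

    # one-step-ahead nowcast probability that the Sun is visible at the next instant
    p_next = model.predict_proba([[ssn[-1], hour[-1]]])[0, 1]
    ```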

  6. Studying the effect of weather conditions on daily crash counts using a discrete time-series model.

    PubMed

    Brijs, Tom; Karlis, Dimitris; Wets, Geert

    2008-05-01

    In previous research, significant effects of weather conditions on car crashes have been found. However, most studies use monthly or yearly data, and only a few studies are available analyzing the impact of weather conditions on daily car crash counts. Furthermore, the studies that are available at a daily level do not explicitly model the data in a time-series context, thereby ignoring the temporal serial correlation that may be present in the data. In this paper, we introduce an integer autoregressive model for modelling count data with time interdependencies. The model is applied to daily car crash data, meteorological data and traffic exposure data from the Netherlands, aiming at examining the risk impact of weather conditions on the observed counts. The results show that several hypothesized effects of weather conditions on crash counts are significant in the data, and that failing to account for serial temporal correlation in the model may produce biased results.
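
    The integer autoregressive structure mentioned above can be illustrated with a minimal INAR(1) simulation: yesterday's count is binomially thinned and fresh Poisson innovations are added, keeping the series integer-valued. The thinning probability and innovation mean below are illustrative assumptions, not estimates from the Dutch data, and no weather covariates are included.

```python
# Minimal sketch of an INAR(1) count series via binomial thinning.
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
alpha, lam, n = 0.4, 3.0, 365     # thinning probability, innovation mean, days

counts = np.empty(n, dtype=int)
counts[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
for t in range(1, n):
    survivors = rng.binomial(counts[t - 1], alpha)  # binomial thinning of yesterday's count
    counts[t] = survivors + rng.poisson(lam)        # plus Poisson innovations

print(counts[:10], counts.mean())
```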

  7. Application of dynamic flux balance analysis to an industrial Escherichia coli fermentation.

    PubMed

    Meadows, Adam L; Karnik, Rahi; Lam, Harry; Forestell, Sean; Snedecor, Brad

    2010-03-01

    We have developed a reactor-scale model of Escherichia coli metabolism and growth in a 1000 L process for the production of a recombinant therapeutic protein. The model consists of two distinct parts: (1) a dynamic, process-specific portion that describes the time evolution of 37 process variables of relevance and (2) a flux-balance-based, 123-reaction metabolic model of E. coli metabolism. This model combines several previously reported modeling approaches, including a growth rate-dependent biomass composition, a maximum growth rate objective function, and dynamic flux balancing. In addition, we introduce concentration-dependent boundary conditions on transport fluxes, dynamic maintenance demands, and a state-dependent cellular objective. This formulation was able to describe specific runs with high fidelity over process conditions including rich media, simultaneous acetate and glucose consumption, glucose minimal media, and phosphate-depleted media. Furthermore, the model accurately describes the effect of process perturbations--such as glucose overbatching and insufficient aeration--on growth, metabolism, and titer. (c) 2009 Elsevier Inc. All rights reserved.
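
    The dynamic flux balance idea described here couples an inner linear program (the flux balance) to outer differential equations for biomass and substrate. The toy loop below sketches that structure under strong simplifying assumptions: a two-reaction network, an illustrative yield, and a Michaelis-Menten glucose uptake bound; it is not the 123-reaction E. coli model of the paper.

```python
# Toy dynamic flux balance loop: re-solve a small LP each time step with a
# concentration-dependent uptake bound. All parameters are illustrative.
import numpy as np
from scipy.optimize import linprog

Y = 0.5                       # biomass yield per unit glucose (illustrative)
vmax, Km = 10.0, 0.5          # Michaelis-Menten uptake parameters (illustrative)
X, G, dt = 0.05, 20.0, 0.1    # biomass (g/L), glucose (mmol/L), time step (h)

for _ in range(200):
    ub_uptake = vmax * G / (Km + G)               # concentration-dependent bound
    # variables: [v_uptake, v_growth]; maximize v_growth
    res = linprog(c=[0.0, -1.0],
                  A_eq=[[Y, -1.0]], b_eq=[0.0],   # v_growth = Y * v_uptake
                  bounds=[(0.0, ub_uptake), (0.0, None)])
    v_up, mu = res.x
    X += mu * X * dt                              # biomass growth
    G = max(G - v_up * X * dt, 0.0)               # substrate depletion

print(f"final biomass {X:.2f} g/L, residual glucose {G:.2f} mmol/L")
```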

  8. Determination of wind tunnel constraint effects by a unified pressure signature method. Part 2: Application to jet-in-crossflow

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Sampath, S.; Phillips, C. G.

    1981-01-01

    The development of an improved jet-in-crossflow model for estimating wind tunnel blockage and angle-of-attack interference is described. Experiments showed that the simpler existing models fall seriously short of representing far-field flows properly. A new, vortex-source-doublet (VSD) model was therefore developed which employs curved trajectories and experimentally-based singularity strengths. The new model is consistent with existing and new experimental data and it predicts tunnel wall (i.e. far-field) pressures properly. It is implemented as a preprocessor to the wall-pressure-signature-based tunnel interference predictor. The supporting experiments and theoretical studies revealed some new results. Comparative flow field measurements with 1-inch "free-air" and 3-inch impinging jets showed that vortex penetration into the flow, in diameters, was almost unaltered until 'hard' impingement occurred. In modeling impinging cases, a 'plume redirection' term was introduced which is apparently absent in previous models. The effects of this term were found to be very significant.

  9. Statistical modeling of the Internet traffic dynamics: To which extent do we need long-term correlations?

    NASA Astrophysics Data System (ADS)

    Markelov, Oleg; Nguyen Duc, Viet; Bogachev, Mikhail

    2017-11-01

    Recently we have suggested a universal superstatistical model of user access patterns and aggregated network traffic. The model takes into account the irregular character of end-user access patterns on the web via non-exponential distributions of the local access rates, but neglects the long-term correlations between these rates. While the model is accurate for quasi-stationary traffic records, its performance under highly variable and especially non-stationary access dynamics remains questionable. In this paper, using the example of traffic patterns from a highly loaded network cluster hosting the website of the 1998 FIFA World Cup, we propose a generalization of the previously suggested superstatistical model by introducing long-term correlations between access rates. Using queueing system simulations, we show explicitly that this generalization is essential for modeling network nodes with highly non-stationary access patterns, where neglecting long-term correlations leads to underestimation of the empirical average sojourn time by several orders of magnitude under high throughput utilization.

  10. Finite-Difference Lattice Boltzmann Scheme for High-Speed Compressible Flow: Two-Dimensional Case

    NASA Astrophysics Data System (ADS)

    Gan, Yan-Biao; Xu, Ai-Guo; Zhang, Guang-Cai; Zhang, Ping; Zhang, Lei; Li, Ying-Jun

    2008-07-01

    Lattice Boltzmann (LB) modeling of high-speed compressible flows has long been attempted by various authors. One common weakness of most previous models is instability when the Mach number of the flow is large. In this paper we present a finite-difference LB model which works for flows with flexible ratios of specific heats and a wide range of Mach numbers, from 0 to 30 or higher. In addition to the discrete velocity model by Watari [Physica A 382 (2007) 502], a modified Lax-Wendroff finite-difference scheme and an artificial viscosity are introduced. The combination of the finite-difference scheme and the added artificial viscosity must balance numerical stability against accuracy. The proposed model is validated by recovering results of some well-known benchmark tests: shock tubes and shock reflections. The new model may be used to track shock waves and/or to study the non-equilibrium processes in the transition between the regular and Mach reflections of shock waves, etc.
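
    The two numerical ingredients named above, a Lax-Wendroff update and an added artificial viscosity, can be sketched for the simplest possible case, one-dimensional linear advection. The grid size, CFL number, and viscosity coefficient below are illustrative assumptions; the sketch is not the finite-difference lattice Boltzmann model of the paper.

```python
# Minimal sketch: Lax-Wendroff step for u_t + c u_x = 0 on a periodic grid,
# with a small artificial viscosity term. Parameters are illustrative.
import numpy as np

nx, c = 200, 1.0
dx, dt, nu = 1.0 / nx, 0.4 / nx, 1e-4
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)          # smooth initial pulse

for _ in range(300):
    up, um = np.roll(u, -1), np.roll(u, 1)   # periodic neighbours
    lap = up - 2 * u + um
    u = (u
         - 0.5 * c * dt / dx * (up - um)     # first-order Lax-Wendroff term
         + 0.5 * (c * dt / dx) ** 2 * lap    # second-order correction
         + nu * lap)                         # artificial viscosity (damps oscillations)

print(round(u.max(), 4))
```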

  11. Modeling of porosity loss during compaction and cementation of sandstones

    NASA Astrophysics Data System (ADS)

    Lemée, Claire; Guéguen, Yves

    1996-10-01

    Irreversible inelastic processes are responsible for the mechanical and chemical compaction of sedimentary rocks during burial. Our purpose is to describe the inelastic response of the rock at large time scales. To do so, we build a model that describes how porosity progressively decreases with depth. We use a previous geometrical model for the compaction of a sandstone by grain interpenetration that is restricted to the case of mass conservation. In addition, we introduce a compaction equilibrium concept. Solid grains can support stresses up to a critical effective stress, σc, before plastic flow occurs. This critical stress depends on temperature and is derived from the pressure-solution deformation law. Pressure solution is the plastic deformation mechanism implemented during compaction. Our model predicts porosity destruction at a depth of about 3 km. The model defines a range of compaction curves, and we investigate its sensitivity to the main input parameters: liquid film thickness, grain size, temperature gradient, and activation energy.

  12. Scheduler Design Criteria: Requirements and Considerations

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    This presentation covers fundamental requirements and considerations for developing schedulers for airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among the various optimization problems in airport operations, we focus on the airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling, including the node-link network model and previously developed scheduling algorithms. Next, we explain how to design the mathematical formulation in more detail, covering objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.

  13. ENSO related sea surface salinity variability in the equatorial Pacific

    NASA Astrophysics Data System (ADS)

    Qu, T.

    2016-12-01

    Recently available satellite and Argo data have shown coherent, large-scale sea surface salinity (SSS) variability in the equatorial Pacific. Based on this variability, several SSS indices of El Nino have been introduced by previous studies. Combining results from an ocean general circulation model with available satellite and in-situ observations, this study investigates the SSS variability and its associated SSS indices in the equatorial Pacific. The ocean's role and in particular the vertical entrainment of subtropical waters in this variability are discussed, which suggests that the SSS variability in the equatorial Pacific may play some active role in ENSO evolution.

  14. Deducing the multi-trader population driving a financial market

    NASA Astrophysics Data System (ADS)

    Gupta, Nachi; Hauser, Raphael; Johnson, Neil

    2005-12-01

    We have previously laid out a basic framework for predicting financial movements and pockets of predictability by tracking the distribution of a multi-trader population playing on an artificial financial market model. This work explores extensions to this basic framework. We allow for more intelligent agents with a richer strategy set, and we no longer constrain the distribution over these agents to a probability space. We then introduce a fusion scheme which accounts for multiple runs of randomly chosen sets of possible agent types. We also discuss a mechanism for bias removal on the estimates.

  15. On I/O Virtualization Management

    NASA Astrophysics Data System (ADS)

    Danciu, Vitalian A.; Metzker, Martin G.

    The quick adoption of virtualization technology in general and the advent of the Cloud business model entail new requirements on the structure and the configuration of back-end I/O systems. Several approaches to virtualization of I/O links are being introduced, which aim at implementing a more flexible I/O channel configuration without compromising performance. While previously the management of I/O devices could be limited to basic technical requirements (e.g. the establishment and termination of fixed-point links), the additional flexibility carries in its wake additional management requirements on the representation and control of I/O sub-systems.

  16. Prediction and analysis of beta-turns in proteins by support vector machine.

    PubMed

    Pham, Tho Hoan; Satou, Kenji; Ho, Tu Bao

    2003-01-01

    The tight turn has long been recognized as one of the three important features of proteins, after the alpha-helix and beta-sheet. Tight turns play an important role in globular proteins from both the structural and functional points of view. More than 90% of tight turns are beta-turns. Analysis and prediction of beta-turns in particular, and tight turns in general, are very useful for the design of new molecules such as drugs, pesticides, and antigens. In this paper, we introduce a support vector machine (SVM) approach to the prediction and analysis of beta-turns. We have investigated two aspects of applying SVMs to the prediction and analysis of beta-turns. First, we developed a new SVM method, called BTSVM, which predicts the beta-turns of a protein from its sequence. The prediction results on a dataset of 426 non-homologous protein chains, obtained by the sevenfold cross-validation technique, showed that our method is superior to previous methods. Second, we analyzed how amino acid positions support (or prevent) the formation of beta-turns based on the "multivariable" classification model of a linear SVM. This model is more general than those of previous statistical methods. Our analysis results are more comprehensive and easier to use than previously published analysis results.
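
    A minimal sketch of the linear SVM formulation described above is given below: residue windows are one-hot encoded and a linear SVM is trained, after which the per-position weights can be read off, in the spirit of the paper's analysis. The window length, the random stand-in data, and the scikit-learn LinearSVC call are assumptions for illustration, not the BTSVM implementation or the 426-chain benchmark.

```python
# Minimal sketch (assumptions: random stand-in data, scikit-learn LinearSVC,
# illustrative window length) of a linear SVM on one-hot encoded residue windows.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
amino_acids = "ACDEFGHIKLMNPQRSTVWY"
window = 4                                            # residues per window (illustrative)

def encode(residues):
    """One-hot encode a residue window into a flat feature vector."""
    vec = np.zeros(len(residues) * len(amino_acids))
    for i, aa in enumerate(residues):
        vec[i * len(amino_acids) + amino_acids.index(aa)] = 1.0
    return vec

X = np.array([encode(rng.choice(list(amino_acids), size=window)) for _ in range(500)])
y = rng.integers(0, 2, size=500)                      # 1 = beta-turn, 0 = non-turn (random here)

clf = LinearSVC(max_iter=10000).fit(X, y)
# per-position weights indicate which residues favour or disfavour turn formation
print(clf.coef_.reshape(window, len(amino_acids)).round(2))
```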

  17. The (Mathematical) Modeling Process in Biosciences.

    PubMed

    Torres, Nestor V; Santos, Guido

    2015-01-01

    In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of a mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model, and optimization and system management derived from the analysis of the mathematical model. Throughout this work the main features and shortcomings of the process are analyzed, and a set of rules that could help in the task of modeling any biological system is presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology.

  18. Lack of adaptation from standing genetic variation despite the presence of putatively adaptive alleles in introduced sweet vernal grass (Anthoxanthum odoratum).

    PubMed

    Gould, B; Geber, M

    2016-01-01

    Population genetic theory predicts that the availability of appropriate standing genetic variation should facilitate rapid evolution when species are introduced to new environments. However, few tests of rapid evolution have been paired with empirical surveys for the presence of previously identified adaptive genetic variants in natural populations. In this study, we examined local adaptation to soil Al toxicity in the introduced range of sweet vernal grass (Anthoxanthum odoratum), and we genotyped populations for the presence of Al tolerance alleles previously identified at the long-term ecological Park Grass Experiment (PGE, Harpenden, UK) in the species native range. We found that markers associated with Al tolerance at the PGE were present at appreciable frequency in introduced populations. Despite this, there was no strong evidence of local adaptation to soil Al toxicity among populations. Populations demonstrated significantly different intrinsic root growth rates in the absence of Al. This suggests that selection on correlated root growth traits may constrain the ability of populations to evolve significantly different root growth responses to Al. Our results demonstrate that genotype-phenotype associations may differ substantially between the native and introduced parts of a species range and that adaptive alleles from a native species range may not necessarily promote phenotypic differentiation in the introduced range. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.

  19. An introduced and a native vertebrate hybridize to form a genetic bridge to a second native species

    USGS Publications Warehouse

    McDonald, D.B.; Parchman, T.L.; Bower, M.R.; Hubert, W.A.; Rahel, F.J.

    2008-01-01

    The genetic impacts of hybridization between native and introduced species are of considerable conservation concern, while the possibility of reticulate evolution affects our basic understanding of how species arise and shapes how we use genetic data to understand evolutionary diversification. By using mitochondrial NADH dehydrogenase subunit 2 (ND2) sequences and 467 amplified fragment-length polymorphism nuclear DNA markers, we show that the introduced white sucker (Catostomus commersoni) has hybridized with two species native to the Colorado River Basin - the flannelmouth sucker (Catostomus latipinnis) and the bluehead sucker (Catostomus discobolus). Hybrids between the flannelmouth sucker and white sucker have facilitated introgression between the two native species, previously isolated by reproductive barriers, such that individuals exist with contributions from all three genomes. Most hybrids had the mitochondrial haplotype of the introduced white sucker, emphasizing its pivotal role in this three-way hybridization. Our findings highlight how introduced species can threaten the genetic integrity of not only one species but also multiple previously reproductively isolated species. Furthermore, this complex three-way reticulate (as opposed to strictly bifurcating) evolution suggests that seeking examples in other vertebrate systems might be productive. Although the present study involved an introduced species, similar patterns of hybridization could result from natural processes, including stream capture or geological formations (e.g., the Bering land bridge). © 2008 by The National Academy of Sciences of the USA.

  20. Detecting dark matter in the Milky Way with cosmic and gamma radiation

    NASA Astrophysics Data System (ADS)

    Carlson, Eric C.

    Over the last decade, experiments in high-energy astroparticle physics have reached unprecedented precision and sensitivity which span the electromagnetic and cosmic-ray spectra. These advances have opened a new window onto the universe for which little was previously known. Such dramatic increases in sensitivity lead naturally to claims of excess emission, which call for either revised astrophysical models or the existence of exotic new sources such as particle dark matter. Here we stand firmly with Occam, sharpening his razor by (i) developing new techniques for discriminating astrophysical signatures from those of dark matter, and (ii) developing detailed foreground models which can explain excess signals and shed light on the underlying astrophysical processes at hand. We concentrate most directly on observations of Galactic gamma and cosmic rays, factoring the discussion into three related parts which each contain significant advancements from our cumulative works. In Part I we introduce concepts which are fundamental to the Indirect Detection of particle dark matter, including motivations, targets, experiments, production of Standard Model particles, and a variety of statistical techniques. In Part II we introduce basic and advanced modelling techniques for the propagation of cosmic rays through the Galaxy and describe astrophysical gamma-ray production, as well as presenting state-of-the-art propagation models of the Milky Way. Finally, in Part III, we employ these models and techniques to study several indirect detection signals, including the Fermi GeV excess at the Galactic center, the Fermi 135 GeV line, the 3.5 keV line, and the WMAP-Planck haze.

  1. Supercritical wing sections 2, volume 108

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Garabedian, P.; Korn, D.; Jameson, A.; Beckmann, M. (Editor); Kuenzi, H. P. (Editor)

    1975-01-01

    A mathematical theory for the design and analysis of supercritical wing sections was previously presented. Examples and computer programs showing how this method works were included. The work on transonics is presented here in a more definitive form. For design, a better model of the trailing edge is introduced which should eliminate a loss of fifteen or twenty percent in lift, attributed to boundary layer separation, experienced with previous heavily aft-loaded models. How drag creep can be reduced at off-design conditions is indicated. A rotated finite difference scheme is presented that enables the application of Murman's method of analysis in more or less arbitrary curvilinear coordinate systems. This allows the use of supersonic as well as subsonic free-stream Mach numbers and the capture of shock waves as far back on an airfoil as desired. Moreover, it leads to an effective three-dimensional program for the computation of transonic flow past an oblique wing. In the case of two-dimensional flow, the method is extended to take into account the displacement thickness computed by a semi-empirical turbulent boundary layer correction.

  2. Integrated Path Differential Absorption Lidar Optimizations Based on Pre-Analyzed Atmospheric Data for ASCENDS Mission Applications

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S.

    2012-01-01

    In this paper a modeling method based on data reductions is investigated which incorporates pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption methods for the sensing of various molecules, including CO2. This approach extends our previously developed lidar modeling framework and allows effective on- and offline wavelength optimizations and weighting function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and the entire Earth coverage, which was achieved through the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to the mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including the temperature, water vapor interferences, and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations facilitating the reduction in total errors in the retrieved XCO2 values.

  3. GALFIT-CORSAIR: Implementing the Core-Sérsic Model Into GALFIT

    NASA Astrophysics Data System (ADS)

    Bonfini, Paolo

    2014-10-01

    We introduce GALFIT-CORSAIR: a publicly available, fully retro-compatible modification of the 2D fitting software GALFIT (v.3) that adds an implementation of the core-Sérsic model. We demonstrate the software by fitting the images of NGC 5557 and NGC 5813, which have previously been identified as core-Sérsic galaxies by their 1D radial light profiles. These two examples are representative of different dust obscuration conditions and of bulge/disk decomposition. To perform the analysis, we obtained deep Hubble Legacy Archive (HLA) mosaics in the F555W filter (~V-band). We successfully reproduce the results of the previous 1D analysis, modulo the intrinsic differences between the 1D and the 2D fitting procedures. The code and the analysis procedure described here have been developed for the first coherent 2D analysis of a sample of core-Sérsic galaxies, which will be presented in a forthcoming paper. As the 2D analysis provides better constraints on multi-component fitting and is fully seeing-corrected, it will yield complementary constraints on the missing mass in depleted galaxy cores.

  4. Counting “exotics”

    Treesearch

    Qinfeng Guo

    2011-01-01

    An introduced or exotic species is commonly defined as an organism accidentally or intentionally introduced to a new location by human activity (Williamson 1996; Richardson et al. 2000; Guo and Ricklefs 2010). However, the counting of exotics is often inconsistent. For example, in the US, previously published plant richness data for each state are only those either...

  5. Introductions of West Nile Virus Strains to Mexico

    PubMed Central

    Deardorff, Eleanor; Estrada-Franco, José G.; Brault, Aaron C.; Navarro-Lopez, Roberto; Campomanes-Cortes, Arturo; Paz-Ramirez, Pedro; Solis-Hernandez, Mario; Ramey, Wanichaya N.; Davis, C. Todd; Beasley, David W.C.; Tesh, Robert B.; Barrett, Alan D.T.

    2006-01-01

    Complete genome sequencing of 22 West Nile virus isolates suggested 2 independent introductions into Mexico. A previously identified mouse-attenuated glycosylation variant was introduced into southern Mexico through the southeastern United States, while a common US genotype appears to have been introduced incrementally into northern Mexico through the southwestern United States. PMID:16494762

  6. Saxon Math. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2016

    2016-01-01

    "Saxon Math" is a core curriculum for students in grades K-12 that uses an incremental approach to instruction and assessment. This approach limits the amount of new math content delivered to students each day and allows time for daily practice. New concepts are introduced gradually and integrated with previously introduced content so…

  7. Introducing the Objective Structured Clinical Examination (OSCE) in the Undergraduate Psychiatric Curriculum: Evaluation after One Year

    ERIC Educational Resources Information Center

    Zahid, Muhammad Ajmal; Al-Zayed, Adel; Ohaeri, Jude; Varghese, Ramani

    2011-01-01

    Objective: The Objective Structured Clinical Examination (OSCE) was introduced in undergraduate psychiatry clerkship in 2008. The authors studied the effect of OSCE on the students' performance. Methods: The "short case" (SC) and "oral examination" (OE), two of the five components of the previous assessment format, were…

  8. Phases of kinky holographic nuclear matter

    NASA Astrophysics Data System (ADS)

    Elliot-Ripley, Matthew; Sutcliffe, Paul; Zamaklar, Marija

    2016-10-01

    Holographic QCD at finite baryon number density and zero temperature is studied within the five-dimensional Sakai-Sugimoto model. We introduce a new approximation that models a smeared crystal of solitonic baryons by assuming spatial homogeneity to obtain an effective kink theory in the holographic direction. The kink theory correctly reproduces a first order phase transition to lightly bound nuclear matter. As the density is further increased the kink splits into a pair of half-kink constituents, providing a concrete realization of the previously suggested dyonic salt phase, where the bulk soliton splits into constituents at high density. The kink model also captures the phenomenon of baryonic popcorn, in which a first order phase transition generates an additional soliton layer in the holographic direction. We find that this popcorn transition takes place at a density below the dyonic salt phase, making the latter energetically unfavourable. However, the kink model predicts only one pop, rather than the sequence of pops suggested by previous approximations. In the kink model the two layers produced by the single pop form the surface of a soliton bag that increases in size as the baryon chemical potential is increased. The interior of the bag is filled with abelian electric potential and the instanton charge density is localized on the surface of the bag. The soliton bag may provide a holographic description of a quarkyonic phase.

  9. Transient interaction model of electromagnetic field generated by lightning current pulses and human body

    NASA Astrophysics Data System (ADS)

    Iváncsy, T.; Kiss, I.; Szücs, L.; Tamus, Z. Á.

    2015-10-01

    The lightning current generates a time-varying magnetic field near the down-conductor, and down-conductors are mounted on the walls of buildings where residential spaces may be situated. It is well known that rapidly changing magnetic fields can generate dangerous eddy currents in the human body. A magnetic field of higher duration and gradient can cause potentially life-threatening cardiac stimulation. The coupling mechanism between the electromagnetic field and the human body is based on well-known physical phenomena (e.g., Faraday's law of induction). However, the calculation of the induced current is very complicated because the shape of the organs is complex and the determination of the material properties of living tissues is difficult as well. Our previous study revealed that the cardiac stimulation is independent of the rise time of the lightning current and that only the peak of the current counts. In this study, the authors introduce an improved model of the interaction between the electromagnetic field of the lightning current near a down-conductor and the human body. Our previous models were based on quasi-stationary field calculations; the new, improved model is a transient model. With this approach the magnetic field around the down-conductor and in the human body can be determined more precisely, and therefore the dangerous currents in the body can be estimated more reliably.

  10. High-resolution modelling of atmospheric dispersion of dense gas using TWODEE-2.1: application to the 1986 Lake Nyos limnic eruption

    NASA Astrophysics Data System (ADS)

    Folch, Arnau; Barcons, Jordi; Kozono, Tomofumi; Costa, Antonio

    2017-06-01

    Atmospheric dispersal of a gas denser than air can threaten the environment and surrounding communities if the terrain and meteorological conditions favour its accumulation in topographic depressions, thereby reaching toxic concentration levels. Numerical modelling of atmospheric gas dispersion constitutes a useful tool for gas hazard assessment studies, essential for planning risk mitigation actions. In complex terrains, microscale winds and local orographic features can have a strong influence on the gas cloud behaviour, potentially leading to inaccurate results if not captured by coarser-scale modelling. We introduce a methodology for microscale wind field characterisation based on transfer functions that couple a mesoscale numerical weather prediction model with a microscale computational fluid dynamics (CFD) model for the atmospheric boundary layer. The resulting time-dependent high-resolution microscale wind field is used as input for a shallow-layer gas dispersal model (TWODEE-2.1) to simulate the time evolution of CO2 gas concentration at different heights above the terrain. The strategy is applied to revisit simulations of the 1986 Lake Nyos event in Cameroon, where a huge CO2 cloud released by a limnic eruption spread downslope from the lake, suffocating thousands of people and animals across the Nyos and adjacent secondary valleys. Besides several new features introduced in the new version of the gas dispersal code (TWODEE-2.1), we have also implemented a novel impact criterion based on the percentage of human fatalities as a function of CO2 concentration and exposure time. New model results are quantitatively validated using the reported percentage of fatalities at several locations. The comparison with previous simulations that assumed coarser-scale steady winds and topography illustrates the importance of high-resolution modelling in complex terrains.

  11. Two-strain competition in quasineutral stochastic disease dynamics.

    PubMed

    Kogan, Oleg; Khasin, Michael; Meerson, Baruch; Schneider, David; Myers, Christopher R

    2014-10-01

    We develop a perturbation method for studying quasineutral competition in a broad class of stochastic competition models and apply it to the analysis of fixation of competing strains in two epidemic models. The first model is a two-strain generalization of the stochastic susceptible-infected-susceptible (SIS) model. Here we extend previous results due to Parsons and Quince [Theor. Popul. Biol. 72, 468 (2007)], Parsons et al. [Theor. Popul. Biol. 74, 302 (2008)], and Lin, Kim, and Doering [J. Stat. Phys. 148, 646 (2012)]. The second model, a two-strain generalization of the stochastic susceptible-infected-recovered (SIR) model with population turnover, has not been studied previously. In each of the two models, when the basic reproduction numbers of the two strains are identical, a system with an infinite population size approaches a point on the deterministic coexistence line (CL): a straight line of fixed points in the phase space of subpopulation sizes. Shot noise drives one of the strain populations to fixation, and the other to extinction, on a time scale proportional to the total population size. Our perturbation method explicitly tracks the dynamics of the probability distribution of the subpopulations in the vicinity of the CL. We argue that, whereas the slow strain has a competitive advantage for mathematically "typical" initial conditions, it is the fast strain that is more likely to win in the important situation when a few infectives of both strains are introduced into a susceptible population.

  12. The algebra of the general Markov model on phylogenetic trees and networks.

    PubMed

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.

  13. Adapting Gel Wax into an Ultrasound-Guided Pericardiocentesis Model at Low Cost

    PubMed Central

    Daly, Robert; Planas, Jason H.; Edens, Mary Ann

    2017-01-01

    Cardiac tamponade is a life-threatening emergency for which pericardiocentesis may be required. Real-time bedside ultrasound has obviated the need for routine blind procedures in cardiac arrest, and the number of pericardiocenteses being performed has declined. Despite this fact, pericardiocentesis remains an essential skill in emergency medicine. While commercially available training models exist, cost, durability, and lack of anatomical landmarks limit their usefulness. We sought to create a pericardiocentesis model that is realistic, simple to build, reusable, and cost efficient. We constructed the model using a red dye-filled ping pong ball (simulating the right ventricle) and a 250cc normal saline bag (simulating the effusion) encased in an artificial rib cage and held in place by gel wax. The inner saline bag was connected to a 1L saline bag outside of the main assembly to act as a fluid reservoir for repeat uses. The entire construction process takes approximately 16–20 hours, most of which is attributed to cooling of the gel wax. Actual construction time is approximately four hours at a cost of less than $200. The model was introduced to emergency medicine residents and medical students during a procedure simulation lab and compared to a model previously described by dell’Orto.1 The learners performed ultrasound-guided pericardiocentesis using both models. Learners who completed a survey comparing realism of the two models felt our model was more realistic than the previously described model. On a scale of 1–9, with 9 being very realistic, the previous model was rated a 4.5. Our model was rated a 7.8. There was also a marked improvement in the perceived recognition of the pericardium, the heart, and the pericardial sac. Additionally, 100% of the students were successful at performing the procedure using our model. In simulation, our model provided both palpable and ultrasound landmarks and held up to several months of repeated use. It was less expensive than commercial models ($200 vs up to $16,500) while being more realistic in simulation than other described “do-it-yourself models.” This model can be easily replicated to teach the necessary skill of pericardiocentesis. PMID:28116020

  14. Prediction of siRNA potency using sparse logistic regression.

    PubMed

    Hu, Wei; Hu, John

    2014-06-01

    RNA interference (RNAi) can modulate gene expression at post-transcriptional as well as transcriptional levels. Short interfering RNA (siRNA) serves as the trigger for the RNAi gene inhibition mechanism and is therefore a crucial intermediate step in RNAi. There have been extensive studies to identify the sequence characteristics of potent siRNAs. One such study built a linear model using LASSO (Least Absolute Shrinkage and Selection Operator) to measure the contribution of each siRNA sequence feature. This model is simple and interpretable, but it requires a large number of nonzero weights. We have introduced a novel technique, sparse logistic regression, to build a linear model using single-position specific nucleotide compositions that has the same prediction accuracy as the LASSO-based linear model. The weights in our new model share the same general trend as those in the previous model, but have only 25 nonzero weights out of a total of 84, a 54% reduction compared to the previous model. Contrary to the LASSO-based linear model, our model suggests that only a few positions are influential on the efficacy of the siRNA, namely the 5' and 3' ends and the seed region of the siRNA sequence. We also employed sparse logistic regression to build a linear model using dual-position specific nucleotide compositions, a task LASSO is not able to accomplish well due to its high-dimensional nature. Our results demonstrate the superiority of sparse logistic regression over LASSO as a technique for both feature selection and regression in the context of siRNA design.
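
    The sparse (L1-regularized) logistic regression technique named above can be sketched as follows; a 21-nt sequence one-hot encoded over four bases yields 84 single-position features, consistent with the weight count quoted in the abstract. The random stand-in data, the regularization strength, and the scikit-learn solver are illustrative assumptions.

```python
# Minimal sketch: L1-regularized logistic regression on one-hot
# position-specific nucleotide features. Data are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
bases, length = "ACGU", 21                       # 21-nt siRNA, 4 bases per position

def encode(seq):
    """Single-position nucleotide composition as a one-hot vector (4 * length)."""
    vec = np.zeros(4 * length)
    for i, b in enumerate(seq):
        vec[4 * i + bases.index(b)] = 1.0
    return vec

seqs = ["".join(rng.choice(list(bases), size=length)) for _ in range(400)]
X = np.array([encode(s) for s in seqs])
y = rng.integers(0, 2, size=400)                 # 1 = potent siRNA (random here)

# the L1 penalty drives most position weights to exactly zero
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("nonzero weights:", np.count_nonzero(clf.coef_))
```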

  15. A Review of Depth and Normal Fusion Algorithms

    PubMed Central

    Štolc, Svorad; Pock, Thomas

    2018-01-01

    Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms that combine depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze the methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, formulated as a least squares problem, which outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method based on Total Generalized Variation (TGV) is introduced which further outperforms previous approaches in terms of the geodesic normal distance error while maintaining comparable quality in the depth error domain. PMID:29389903
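
    The least-squares fusion formulation mentioned above can be sketched in one dimension: a data term keeps the solution close to the noisy depth samples while a second term matches its finite-difference slope to the slope implied by the normals. The weight, grid size, and synthetic profile below are illustrative assumptions, and no geodesic normal weighting is included.

```python
# Minimal 1D sketch of least-squares fusion of noisy depth with normal-derived
# slopes. All parameters and the synthetic profile are illustrative.
import numpy as np

n, lam = 100, 10.0
x = np.linspace(0, 1, n)
true_depth = np.sin(2 * np.pi * x)
depth_obs = true_depth + 0.2 * np.random.default_rng(4).normal(size=n)   # noisy depth
slope_obs = 2 * np.pi * np.cos(2 * np.pi * x)                            # from normals

h = x[1] - x[0]
D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h         # forward-difference operator
A = np.vstack([np.eye(n), np.sqrt(lam) * D])              # stack data and normal terms
b = np.concatenate([depth_obs, np.sqrt(lam) * slope_obs[:-1]])

fused, *_ = np.linalg.lstsq(A, b, rcond=None)
print("RMSE noisy:", np.sqrt(np.mean((depth_obs - true_depth) ** 2)),
      "RMSE fused:", np.sqrt(np.mean((fused - true_depth) ** 2)))
```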

  16. A Reserve-based Method for Mitigating the Impact of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Krad, Ibrahim

    The fundamental operating paradigm of today's power systems is undergoing a significant shift. This is partially motivated by the increased desire for incorporating variable renewable energy resources into generation portfolios. While these generating technologies offer clean energy at zero marginal cost, i.e. no fuel costs, they also offer unique operating challenges for system operators. Perhaps the biggest operating challenge these resources introduce is accommodating their intermittent fuel source availability. For this reason, these generators increase the system-wide variability and uncertainty. As a result, system operators are revisiting traditional operating strategies to more efficiently incorporate these generation resources to maximize the benefit they provide while minimizing the challenges they introduce. One way system operators have accounted for system variability and uncertainty is through the use of operating reserves. Operating reserves can be simplified as excess capacity kept online during real time operations to help accommodate unforeseen fluctuations in demand. With new generation resources, a new class of operating reserves has emerged that is generally known as flexibility, or ramping, reserves. This new reserve class is meant to better position systems to mitigate severe ramping in the net load profile. The best way to define this new requirement is still under investigation. Typical requirement definitions focus on the additional uncertainty introduced by variable generation and there is room for improvement regarding explicit consideration for the variability they introduce. An exogenous reserve modification method is introduced in this report that can improve system reliability with minimal impacts on total system wide production costs. Another potential solution to this problem is to formulate the problem as a stochastic programming problem. The unit commitment and economic dispatch problems are typically formulated as deterministic problems due to fast solution times and the solutions being sufficient for operations. Improvements in technical computing hardware have reignited interest in stochastic modeling. The variability of wind and solar naturally lends itself to stochastic modeling. The use of explicit reserve requirements in stochastic models is an area of interest for power system researchers. This report introduces a new reserve modification implementation based on previous results to be used in a stochastic modeling framework. With technological improvements in distributed generation technologies, microgrids are currently being researched and implemented. Microgrids are small power systems that have the ability to serve their demand with their own generation resources and may have a connection to a larger power system. As battery technologies improve, they are becoming a more viable option in these distributed power systems and research is necessary to determine the most efficient way to utilize them. This report will investigate several unique operating strategies for batteries in small power systems and analyze their benefits. These new operating strategies will help reduce operating costs and improve system reliability.

  17. Inverse modeling of ground surface uplift and pressure with iTOUGH-PEST and TOUGH-FLAC: The case of CO2 injection at In Salah, Algeria

    NASA Astrophysics Data System (ADS)

    Rinaldi, Antonio P.; Rutqvist, Jonny; Finsterle, Stefan; Liu, Hui-Hai

    2017-11-01

    Ground deformation, commonly observed in storage projects, carries useful information about processes occurring in the injection formation. The Krechba gas field at In Salah (Algeria) is one of the best-known sites for studying ground surface deformation during geological carbon storage. At this first industrial-scale on-shore CO2 demonstration project, satellite-based ground-deformation monitoring data of high quality are available and used to study the large-scale hydrological and geomechanical response of the system to injection. In this work, we carry out coupled fluid flow and geomechanical simulations to understand the uplift at three different CO2 injection wells (KB-501, KB-502, KB-503). Previous numerical studies focused on the KB-502 injection well, where a double-lobe uplift pattern has been observed in the ground-deformation data. The uplift patterns observed at KB-501 and KB-503 are single-lobed, but they too can indicate a mechanical response of a deep fracture zone to the injection. The current study improves the previous modeling approach by introducing an injection reservoir and a fracture zone, both responding to a Mohr-Coulomb failure criterion. In addition, we model a stress-dependent permeability and bulk modulus according to a dual-continuum model. Mechanical and hydraulic properties are determined through inverse modeling by matching the simulated spatial and temporal evolution of uplift to InSAR observations as well as by matching simulated and measured pressures. The numerical simulations are in agreement with both spatial and temporal observations. The estimated values for the parameterized mechanical and hydraulic properties are in good agreement with previous numerical results. In addition, the formal joint inversion of hydrogeological and geomechanical data provides measures of the estimation uncertainty.

  18. Generic Safety Requirements for Developing Safe Insulin Pump Software

    PubMed Central

    Zhang, Yi; Jetley, Raoul; Jones, Paul L; Ray, Arnab

    2011-01-01

    Background The authors previously introduced a highly abstract generic insulin infusion pump (GIIP) model that identified common features and hazards shared by most insulin pumps on the market. The aim of this article is to extend our previous work on the GIIP model by articulating safety requirements that address the identified GIIP hazards. These safety requirements can be validated by manufacturers, and may ultimately serve as a safety reference for insulin pump software. Together, these two publications can serve as a basis for discussing insulin pump safety in the diabetes community. Methods In our previous work, we established a generic insulin pump architecture that abstracts functions common to many insulin pumps currently on the market and near-future pump designs. We then carried out a preliminary hazard analysis based on this architecture that included consultations with many domain experts. Further consultation with domain experts resulted in the safety requirements used in the modeling work presented in this article. Results Generic safety requirements for the GIIP model are presented, as appropriate, in parameterized format to accommodate clinical practices or specific insulin pump criteria important to safe device performance. Conclusions We believe that there is considerable value in having the diabetes, academic, and manufacturing communities consider and discuss these generic safety requirements. We hope that the communities will extend and revise them, make them more representative and comprehensive, experiment with them, and use them as a means for assessing the safety of insulin pump software designs. One potential use of these requirements is to integrate them into model-based engineering (MBE) software development methods. We believe, based on our experiences, that implementing safety requirements using MBE methods holds promise in reducing design/implementation flaws in insulin pump development and evolutionary processes, therefore improving overall safety of insulin pump software. PMID:22226258

  19. A mathematical model of insulin resistance in Parkinson's disease.

    PubMed

    Braatz, Elise M; Coleman, Randolph A

    2015-06-01

    This paper introduces a mathematical model representing the biochemical interactions between insulin signaling and Parkinson's disease. The model can be used to examine the changes that occur over the course of the disease as well as identify which processes would be the most effective targets for treatment. The model is mathematized using biochemical systems theory (BST). It incorporates a treatment strategy that includes several experimental drugs along with current treatments. In the past, BST models of neurodegeneration have used power law analysis and simulation (PLAS) to model the system. This paper recommends the use of MATLAB instead. MATLAB allows for more flexibility in both the model itself and in data analysis. Previous BST analyses of neurodegeneration began treatment at disease onset. As shown in this model, the outcomes of delayed, realistic treatment and full treatment at disease onset are significantly different. The delayed treatment strategy is an important development in BST modeling of neurodegeneration. It emphasizes the importance of early diagnosis, and allows for a more accurate representation of disease and treatment interactions. Copyright © 2015 Elsevier Ltd. All rights reserved.
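
    Biochemical systems theory represents each rate as a product of power laws (an S-system). As a minimal sketch of that formalism, independent of the Parkinson's model itself, the snippet below integrates a two-variable S-system with SciPy; all rate constants and kinetic orders are illustrative assumptions.

```python
# Minimal sketch of a two-variable S-system (BST power-law form):
# dx_i/dt = a_i * prod(x_j ** g_ij) - b_i * prod(x_j ** h_ij)
# All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def s_system(t, x, a=(2.0, 1.5), b=(1.0, 1.0),
             g=((0.0, -0.5), (0.8, 0.0)), h=((0.6, 0.0), (0.0, 0.7))):
    x = np.maximum(x, 1e-9)                         # keep powers well-defined
    prod_g = [np.prod(x ** np.array(gi)) for gi in g]
    prod_h = [np.prod(x ** np.array(hi)) for hi in h]
    return [a[i] * prod_g[i] - b[i] * prod_h[i] for i in range(2)]

sol = solve_ivp(s_system, (0.0, 50.0), [0.5, 0.5], dense_output=True)
print(sol.y[:, -1])     # approach to steady state
```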

  20. Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Flegg, Mark B.; Hellander, Stefan; Erban, Radek

    2015-05-01

    In this paper, three multiscale methods for the coupling of mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that will be discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, which is introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence properties of this error are studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; and (ii) Δt → 0 and h → 0 such that √(Δt)/h is fixed. The error of the previously developed approaches (the TRM and CPM) converges to zero only in the limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in the limiting case (i). Thus the GCM is superior to previous coupling techniques when the mesoscopic description is much coarser than the microscopic part of the model.

  1. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads, without the need for a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO improves the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  2. The minimal SUSY B - L model: simultaneous Wilson lines and string thresholds

    DOE PAGES

    Deen, Rehan; Ovrut, Burt A.; Purves, Austin

    2016-07-08

    In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B - L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two Z3 × Z3 Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional "left-right" sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an "average unification" mass ⟨M_U⟩. The present analysis is 1) more "natural" than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space, 2) conceptually simpler in that the left-right sector has been removed, and 3) such that the lack of gauge unification is due to threshold effects — particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from ⟨M_U⟩ to the electroweak scale — being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.

  3. A Summary of the NASA Lightning Nitrogen Oxides Model (LNOM) and Recent Results

    NASA Technical Reports Server (NTRS)

    Koshak, William; Peterson, Harld

    2011-01-01

    The NASA Marshall Space Flight Center introduced the Lightning Nitrogen Oxides Model (LNOM) a couple of years ago to combine routine state-of-the-art measurements of lightning with empirical laboratory results of lightning NOx production. The routine measurements included VHF lightning source data [such as from the North Alabama Lightning Mapping Array (LMA)], and ground flash location, peak current, and stroke multiplicity data from the National Lightning Detection Network™ (NLDN). Following these initial runs of LNOM, the model was updated to include several non-return-stroke lightning NOx production mechanisms, and its output was used to assess the impact of lightning NOx in an August 2006 run of CMAQ. In this study, we review the evolution of the LNOM in greater detail and discuss the model's latest upgrades and applications. Whereas previous applications were limited to five summer months of data for North Alabama thunderstorms, the most recent LNOM analyses cover several years. The latest statistics of ground and cloud flash NOx production are provided.

  4. Improved Neural Networks with Random Weights for Short-Term Load Forecasting

    PubMed Central

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on the improved neural networks with random weights (INNRW). The key is to introduce a weighting technique to the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load due to the fast learning speed and good generalization performance. In the application of the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting. PMID:26629825

  5. Improved Neural Networks with Random Weights for Short-Term Load Forecasting.

    PubMed

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on the improved neural networks with random weights (INNRW). The key is to introduce a weighting technique to the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load due to the fast learning speed and good generalization performance. In the application of the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting.

  6. Improvement and Application of the Softened Strut-and-Tie Model

    NASA Astrophysics Data System (ADS)

    Fan, Guoxi; Wang, Debin; Diao, Yuhong; Shang, Huaishuai; Tang, Xiaocheng; Sun, Hai

    2017-11-01

    Previous experimental research indicates that reinforced concrete beam-column joints play an important role in the mechanical properties of moment-resisting frame structures and therefore require proper design. The aims of this paper are to predict the joint carrying capacity and crack development theoretically. Thus, a rational model needs to be developed. Based on these considerations, the softened strut-and-tie model is introduced and analyzed. Four adjustments are made, covering the depth of the diagonal strut, the inclination angle of the diagonal compression strut, the smeared stress of mild steel bars embedded in concrete, and the softening coefficient. After that, the carrying capacity of the beam-column joint and crack development are predicted using the improved softened strut-and-tie model. Comparison with test results shows that the improved softened strut-and-tie model can predict the joint carrying capacity and crack development with sufficient accuracy.

  7. Micromechanical Fatigue Visco-Damage Model for Short Glass Fiber Reinforced Polyamide-66

    NASA Astrophysics Data System (ADS)

    Despringre, N.; Chemisky, Y.; Robert, G.; Meraghni, F.

    This work presents a micromechanical fatigue damage model for short glass fiber reinforced PA66, developed to predict the high cycle fatigue behavior of PA66/GF30. The model is based on an extended Mori-Tanaka method which includes coated inclusions, matrix viscoelasticity and the evolution of micro-scale damage. The developed model accounts for the nonlinear matrix viscoelasticity and the reinforcement orientation. The description of the damage processes is based on the experimental investigation of damage mechanisms previously performed through in-situ SEM tests and X-ray micro-computed tomography observations. Damage chronologies have been proposed involving three different processes: interface debonding/coating damage, matrix micro-cracking and fiber breakage. Their occurrence strongly depends on the microstructure and the relative humidity. Each damage mechanism is introduced through an evolution law coupled to the local stress fields. The developed model is implemented using a UMAT subroutine. Its experimental validation is achieved under stress- or strain-controlled fatigue tests.

  8. Temperature modelling and prediction for activated sludge systems.

    PubMed

    Lippi, S; Rosso, D; Lubello, C; Canziani, R; Stenstrom, M K

    2009-01-01

    Temperature is an important factor affecting biomass activity, which is critical to maintaining efficient biological wastewater treatment, as well as physicochemical properties of the mixed liquor such as dissolved oxygen saturation and settling velocity. Controlling temperature is not normally possible for treatment systems, but incorporating factors impacting temperature in the design process, such as the aeration system, surface-to-volume ratio, and tank geometry, can reduce the range of temperature extremes and improve the overall process performance. Determining how much these design or upgrade options affect the tank temperature requires a temperature model that can be used with existing design methodologies. This paper presents a new steady-state temperature model developed by incorporating the best aspects of previously published models, introducing new functions for selected heat exchange paths and improving the method for predicting the effects of covering aeration tanks. Numerical improvements with embedded reference data provide simpler formulation, faster execution, and easier sensitivity analyses using an ordinary spreadsheet. The paper presents several cases to validate the model.

  9. Capture-recapture studies for multiple strata including non-markovian transitions

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.; Pollock, K.H.; Hestbeck, J.B.

    1993-01-01

    We consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion. Strata may, for example, be geographic locations or physiological states. Movement of animals among strata occurs with unknown probabilities, and estimation of these unknown transition probabilities is the objective. We describe a computer routine for carrying out the analysis under a model that assumes Markovian transitions and under reduced-parameter versions of this model. We also introduce models that relax the Markovian assumption and allow 'memory' to operate (i.e., allow dependence of the transition probabilities on the previous state). For these models, we suggest an analysis based on a conditional likelihood approach. Methods are illustrated with data from a large study on Canada geese (Branta canadensis) banded in three geographic regions. The assumption of Markovian transitions is rejected convincingly for these data, emphasizing the importance of the more general models that allow memory.
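
    As a toy illustration of the Markovian multi-stratum setup (invented numbers, not the Canada goose data): animals move among three strata with an unknown transition matrix, are recaptured with stratum-specific probabilities, and a capture history's probability is a product over occasions. Memory models would additionally condition each transition on the previous stratum.

```python
# Toy multi-stratum capture-recapture sketch (invented numbers): Markovian
# movement among three strata plus stratum-specific recapture probabilities.
import numpy as np

psi = np.array([[0.7, 0.2, 0.1],    # hypothetical transition probabilities
                [0.3, 0.5, 0.2],
                [0.1, 0.3, 0.6]])
p = np.array([0.4, 0.5, 0.3])       # hypothetical recapture probability per stratum

def prob_history(start, path):
    """Probability of being recaptured in each stratum along `path`,
    assuming Markovian transitions (no memory of earlier strata)."""
    prob, state = 1.0, start
    for nxt in path:
        prob *= psi[state, nxt] * p[nxt]
        state = nxt
    return prob

print(prob_history(0, [1, 1, 2]))   # released in stratum 0, then seen in 1, 1, 2
```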

  10. New Correlation Methods of Evaporation Heat Transfer in Horizontal Microfin Tubes

    NASA Astrophysics Data System (ADS)

    Makishi, Osamu; Honda, Hiroshi

    A stratified flow model and an annular flow model of evaporation heat transfer in horizontal microfin tubes have been proposed. In the stratified flow model, the contributions of thin film evaporation and nucleate boiling in the groove above a stratified liquid were predicted by a previously reported numerical analysis and a newly developed correlation, respectively. The contributions of nucleate boiling and forced convection in the stratified liquid region were predicted by the new correlation and the Carnavos equation, respectively. In the annular flow model, the contributions of nucleate boiling and forced convection were predicted by the new correlation and the Carnavos equation in which the equivalent Reynolds number was introduced, respectively. A flow pattern transition criterion proposed by Kattan et al. was incorporated to predict the circumferential average heat transfer coefficient in the intermediate region by use of the two models. The predictions of the heat transfer coefficient compared well with available experimental data for ten tubes and four refrigerants.

  11. On the distortions in calculated GW parameters during slanted atmospheric soundings

    NASA Astrophysics Data System (ADS)

    de la Torre, Alejandro; Alexander, Peter; Schmidt, Torsten; Llamedo, Pablo; Hierro, Rodrigo

    2018-03-01

    The significant distortions introduced into measured atmospheric gravity wavelengths by soundings other than those in the vertical and horizontal directions are discussed as a function of the elevation angle of the sounding path and the gravity wave aspect ratio. Under- or overestimation of real vertical wavelengths during the measurement process depends on the values of these two parameters. The consequences of these distortions for the calculation of the energy and the vertical flux of horizontal momentum are analyzed and discussed in the context of two experimental limb satellite setups: GPS-LEO radio occultations and TIMED/SABER (Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics/Sounding of the Atmosphere using Broadband Emission Radiometry) measurements. Possible discrepancies previously found between momentum fluxes calculated from satellite temperature profiles, from in situ measurements, and from model simulations may to a certain degree be attributed to these distortions. A recalculation of previous momentum flux climatologies based on these considerations seems to be a difficult goal.

  12. Properties of quantum systems via diagonalization of transition amplitudes. II. Systematic improvements of short-time propagation

    NASA Astrophysics Data System (ADS)

    Vidanović, Ivana; Bogojević, Aleksandar; Balaž, Antun; Belić, Aleksandar

    2009-12-01

    In this paper, building on a previous analysis [I. Vidanović, A. Bogojević, and A. Belić, preceding paper, Phys. Rev. E 80, 066705 (2009)] of exact diagonalization of the space-discretized evolution operator for the study of properties of nonrelativistic quantum systems, we present a substantial improvement to this method. We apply recently introduced effective action approach for obtaining short-time expansion of the propagator up to very high orders to calculate matrix elements of space-discretized evolution operator. This improves by many orders of magnitude previously used approximations for discretized matrix elements and allows us to numerically obtain large numbers of accurate energy eigenvalues and eigenstates using numerical diagonalization. We illustrate this approach on several one- and two-dimensional models. The quality of numerically calculated higher-order eigenstates is assessed by comparison with semiclassical cumulative density of states.
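
    A minimal sketch of the numerical-diagonalization step for a space-discretized operator, using a 1-D harmonic oscillator and the plainest finite-difference Hamiltonian; the paper instead diagonalizes a short-time evolution operator built from a high-order effective action, which is not reproduced here.

```python
# Minimal sketch of exact diagonalization on a spatial grid (1-D harmonic
# oscillator, hbar = m = omega = 1).  Only the generic diagonalization step is
# illustrated, not the effective-action propagator used in the paper.
import numpy as np

N, L = 400, 20.0                    # grid points, box size
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# -1/2 d^2/dx^2 by central differences, plus the harmonic potential
kinetic = (np.diag(np.full(N, 1.0)) - 0.5 * np.diag(np.ones(N - 1), 1)
           - 0.5 * np.diag(np.ones(N - 1), -1)) / dx**2
H = kinetic + np.diag(0.5 * x**2)

energies = np.linalg.eigvalsh(H)[:5]
print(energies)                     # close to 0.5, 1.5, 2.5, 3.5, 4.5
```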

  13. An improved Graves' disease model established by using in vivo electroporation exhibited long-term immunity to hyperthyroidism in BALB/c mice.

    PubMed

    Kaneda, Toshio; Honda, Asako; Hakozaki, Atsushi; Fuse, Tetsuya; Muto, Akihiro; Yoshida, Tadashi

    2007-05-01

    In Graves' disease, the overstimulation of the thyroid gland and hyperthyroidism are caused by autoantibodies directed against the TSH receptor (TSHR) that mimic the action of TSH. The establishment of an animal model is an important step for studying the pathophysiology of autoimmune hyperthyroidism and for immunological analysis. In this study, we adopted the technique of electroporation (EP) for genetic immunization to achieve considerable enhancement of in vivo human TSHR (hTSHR) expression and efficient induction of hyperthyroidism in mice. In a preliminary study using beta-galactosidase (beta-gal) expression vectors, beta-gal introduced into the muscle by EP showed over 40-fold higher enzymatic activity than that introduced via previous direct gene transfer methods. Sustained hTSHR mRNA expression derived from cDNA transferred by EP was detectable in muscle tissue for at least 2 wk by RT-PCR. Based on these results, we induced hyperthyroidism via two expression vectors carrying hTSHR or hTSHR289His cDNA. Consequently, 12.0-31.8% of BALB/c mice immunized with hTSHR and 79.2-95.7% of those immunized with hTSHR289His showed high total T4 levels due to the TSHR-stimulating antibody after three to four repeated immunizations by EP, and their thyroid follicles were hyperplastic with highly irregular epithelium. Moreover, the TSHR-stimulating antibody persisted, surprisingly, for more than 8 months after the last immunization. These results demonstrate that genetic immunization by in vivo EP is more efficient than previous procedures, and that it is useful for delineating the pathophysiology of Graves' disease.

  14. Triggers and monitoring in intelligent personal health record.

    PubMed

    Luo, Gang

    2012-10-01

    Although Web-based personal health records (PHRs) have been widely deployed, the existing ones have limited intelligence. Previously, we introduced expert system technology and Web search technology into the PHR domain and proposed the concept of an intelligent PHR (iPHR). iPHR provides personalized healthcare information to facilitate users' daily activities of living. The current iPHR is passive and follows the pull model of information distribution. This paper introduces triggers and monitoring into iPHR to make iPHR become active. Our idea is to let medical professionals pre-compile triggers and store them in iPHR's knowledge base. Each trigger corresponds to an abnormal event that may have potential medical impact. iPHR keeps collecting, processing, and analyzing the user's medical data from various sources such as wearable sensors. Whenever an abnormal event is detected from the user's medical data, the corresponding trigger fires and the related personalized healthcare information is pushed to the user using natural language generation technology, expert system technology, and Web search technology.

  15. Gas Discharge Visualization: An Imaging and Modeling Tool for Medical Biometrics

    PubMed Central

    Kostyuk, Nataliya; Cole, Phyadragren; Meghanathan, Natarajan; Isokpehi, Raphael D.; Cohly, Hari H. P.

    2011-01-01

    The need for automated identification of a disease makes the issue of medical biometrics very current in our society. Not all biometric tools available provide real-time feedback. We introduce the gas discharge visualization (GDV) technique as one of the biometric tools that have the potential to identify deviations from the normal functional state at early stages and in real time. GDV is a nonintrusive technique to capture the physiological and psychoemotional status of a person and the functional status of different organs and organ systems through the electrophotonic emissions of fingertips placed on the surface of an impulse analyzer. This paper first introduces biometrics and its different types and then specifically focuses on medical biometrics and the potential applications of GDV in medical biometrics. We also present our previous experience with GDV in research regarding autism and the potential use of GDV in combination with computer science for the potential development of a biological pattern/biomarker for different kinds of health abnormalities including cancer and mental diseases. PMID:21747817

  16. Gas discharge visualization: an imaging and modeling tool for medical biometrics.

    PubMed

    Kostyuk, Nataliya; Cole, Phyadragren; Meghanathan, Natarajan; Isokpehi, Raphael D; Cohly, Hari H P

    2011-01-01

    The need for automated identification of a disease makes the issue of medical biometrics very current in our society. Not all biometric tools available provide real-time feedback. We introduce the gas discharge visualization (GDV) technique as one of the biometric tools that have the potential to identify deviations from the normal functional state at early stages and in real time. GDV is a nonintrusive technique to capture the physiological and psychoemotional status of a person and the functional status of different organs and organ systems through the electrophotonic emissions of fingertips placed on the surface of an impulse analyzer. This paper first introduces biometrics and its different types and then specifically focuses on medical biometrics and the potential applications of GDV in medical biometrics. We also present our previous experience with GDV in research regarding autism and the potential use of GDV in combination with computer science for the potential development of a biological pattern/biomarker for different kinds of health abnormalities including cancer and mental diseases.

  17. Design of feedback control systems for stable plants with saturating actuators

    NASA Technical Reports Server (NTRS)

    Kapasouris, Petros; Athans, Michael; Stein, Gunter

    1988-01-01

    A systematic control design methodology is introduced for multi-input/multi-output stable open loop plants with multiple saturations. This new methodology is a substantial improvement over previous heuristic single-input/single-output approaches. The idea is to introduce a supervisor loop so that when the references and/or disturbances are sufficiently small, the control system operates linearly as designed. For signals large enough to cause saturations, the control law is modified in such a way as to ensure stability and to preserve, to the extent possible, the behavior of the linear control design. Key benefits of the methodology are: the modified compensator never produces saturating control signals, integrators and/or slow dynamics in the compensator never windup, the directional properties of the controls are maintained, and the closed loop system has certain guaranteed stability properties. The advantages of the new design methodology are illustrated in the simulation of an academic example and the simulation of the multivariable longitudinal control of a modified model of the F-8 aircraft.
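
    A sketch of the directional-scaling idea only, not the published supervisor algorithm: when the linear control demand would exceed the actuator limits, the whole demand vector is scaled down so that its direction is preserved and no channel saturates.

```python
# Illustrative directional scaling for saturating actuators (not the paper's
# supervisor loop): scale the demand vector so no channel exceeds its limit.
import numpy as np

def supervise(u_linear, u_max):
    ratio = np.max(np.abs(u_linear) / u_max)
    return u_linear if ratio <= 1.0 else u_linear / ratio

# one channel would saturate, so the whole vector is scaled, keeping its direction
print(supervise(np.array([0.4, -2.0]), u_max=np.array([1.0, 1.0])))
```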

  18. Amplification of the concept of erroneous meaning in psychodynamic science and in the consulting room.

    PubMed

    Brookes, Crittenden E

    2007-01-01

    Previous papers dealt with the concept of psyche as that dynamic field which underlies the subjective experience of mind. A new paradigm, psychodynamic science, was suggested for dealing with subjective data. The venue of the psychotherapeutic consulting room is now brought directly into science, expanding the definition of psychotherapy to include both humanistic and scientific elements. Certain concepts were introduced to amplify this new scientific model, including psyche as hypothetical construct, the concept of meaning as replacement for operational validation in scientific investigation, the synonymity of meaning and insight, and the concept of synchronicity, together with the meaning-connected affect of numinosity. The presence of unhealthy anxiety as the conservative ego attempts to preserve its integrity requires a deeper look at the concept of meaning. This leads to a distinction between meaning and erroneous meaning. The main body of this paper amplifies that distinction, and introduces the concept of intolerance of ambiguity in the understanding of erroneous meanings and their connection with human neurosis.

  19. Information dissipation as an early-warning signal for the Lehman Brothers collapse in financial time series

    PubMed Central

    Quax, Rick; Kandhai, Drona; Sloot, Peter M. A.

    2013-01-01

    In financial markets, participants locally optimize their profit which can result in a globally unstable state leading to a catastrophic change. The largest crash in the past decades is the bankruptcy of Lehman Brothers which was followed by a trust-based crisis between banks due to high-risk trading in complex products. We introduce information dissipation length (IDL) as a leading indicator of global instability of dynamical systems based on the transmission of Shannon information, and apply it to the time series of USD and EUR interest rate swaps (IRS). We find in both markets that the IDL steadily increases toward the bankruptcy, then peaks at the time of bankruptcy, and decreases afterwards. Previously introduced indicators such as ‘critical slowing down' do not provide a clear leading indicator. Our results suggest that the IDL may be used as an early-warning signal for critical transitions even in the absence of a predictive model. PMID:23719567
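
    A rough illustration of the underlying idea, not the published estimator: mutual information between series values separated by an increasing lag is computed with a simple histogram estimator, and the decay constant of a log-linear fit serves as a dissipation-length proxy. The AR(1) series below is a stand-in for an interest-rate time series.

```python
# Rough IDL-like proxy (not the published estimator): histogram-based mutual
# information versus lag, with the decay constant of a log-linear fit taken as
# a dissipation length.
import numpy as np

def binned_mutual_info(x, y, bins=12):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(1)
noise = rng.normal(size=5000)
series = np.empty(5000)
series[0] = noise[0]
for i in range(1, 5000):
    series[i] = 0.8 * series[i - 1] + noise[i]    # AR(1) stand-in for a rate series

lags = np.arange(1, 11)
mi = np.array([binned_mutual_info(series[:-k], series[k:]) for k in lags])
slope, _ = np.polyfit(lags, np.log(mi), 1)        # MI(k) ~ exp(-k / ell)
print("dissipation-length proxy:", -1.0 / slope)
```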

  20. Information dissipation as an early-warning signal for the Lehman Brothers collapse in financial time series

    NASA Astrophysics Data System (ADS)

    Quax, Rick; Kandhai, Drona; Sloot, Peter M. A.

    2013-05-01

    In financial markets, participants locally optimize their profit which can result in a globally unstable state leading to a catastrophic change. The largest crash in the past decades is the bankruptcy of Lehman Brothers which was followed by a trust-based crisis between banks due to high-risk trading in complex products. We introduce information dissipation length (IDL) as a leading indicator of global instability of dynamical systems based on the transmission of Shannon information, and apply it to the time series of USD and EUR interest rate swaps (IRS). We find in both markets that the IDL steadily increases toward the bankruptcy, then peaks at the time of bankruptcy, and decreases afterwards. Previously introduced indicators such as `critical slowing down' do not provide a clear leading indicator. Our results suggest that the IDL may be used as an early-warning signal for critical transitions even in the absence of a predictive model.

  1. Effects of anesthesia on the cerebral capillary blood flow in young and old mice

    NASA Astrophysics Data System (ADS)

    Moeini, Mohammad; Tabatabaei, Maryam S.; Bélanger, Samuel; Avti, Pramod; Castonguay, Alexandre; Pouliot, Philippe; Lesage, Frédéric

    2015-03-01

    Despite recent findings on the possible role of age-related cerebral microvasculature changes in cognition decline, previous studies of capillary blood flow in aging (using animal models) are scarce and limited to anesthetized conditions. Since anesthesia can have different effects in young and old animals, it may introduce a confounding effect in aging studies. The present study aimed to eliminate the potential confound introduced by anesthesia by measuring capillary blood flow parameters in both awake conditions and under isoflurane anesthesia. We used 2-photon laser scanning fluorescence microscopy to measure capillary diameter, red blood cell velocity and flux, hematocrit and capillary volumetric flow in individual capillaries in the barrel cortex of 6- and 24-month old C57Bl/6 mice. It was observed that microvascular properties are significantly affected by anesthesia leading to different trends in capillary blood flow parameters with aging when measured under awake or anesthetized conditions. The findings in this study suggest taking extra care in interpreting aging studies from anesthetized animals.

  2. A Bridge from Optical to Infrared Galaxies: Explaining Local Properties and Predicting Galaxy Counts and the Cosmic Background Radiation

    NASA Astrophysics Data System (ADS)

    Totani, Tomonori; Takeuchi, Tsutomu T.

    2002-05-01

    We give an explanation for the origin of various properties observed in local infrared galaxies and make predictions for galaxy counts and cosmic background radiation (CBR) using a new model extended from that for optical/near-infrared galaxies. Important new characteristics of this study are that (1) mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies and that (2) the large-grain dust temperature Tdust is calculated based on a physical consideration for energy balance rather than by using the empirical relation between Tdust and total infrared luminosity LIR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the LIR-Tdust correlation, and the infrared luminosity function, are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR using this model. We found results considerably different from those of most previous works based on the empirical LIR-Tdust relation; in particular, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K), as often seen in starburst galaxies or ultraluminous infrared galaxies in the local and high-z universe. This indicates that intense starbursts of forming elliptical galaxies should have occurred at z~2-3, in contrast to the previous results that significant starbursts beyond z~1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in the galaxies that produce the FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The intergalactic optical depth of TeV gamma rays based on our model is also presented.

  3. The macroeconomic consequences of renouncing to universal access to antiretroviral treatment for HIV in Africa: a micro-simulation model.

    PubMed

    Ventelou, Bruno; Arrighi, Yves; Greener, Robert; Lamontagne, Erik; Carrieri, Patrizia; Moatti, Jean-Paul

    2012-01-01

    Previous economic literature on the cost-effectiveness of antiretroviral treatment (ART) programs has mainly focused on the microeconomic consequences of alternative uses of resources devoted to the fight against the HIV pandemic. We instead aim at forecasting the consequences of alternative scenarios for the macroeconomic performance of countries. We used a micro-simulation model based on individuals aged 15-49 selected from nationally representative surveys (DHS for Cameroon, Tanzania and Swaziland) to compare alternative scenarios: (1) freezing of ART programs at current levels of access; (2) universal access (scaling up to 100% coverage by 2015), with two variants defining ART eligibility according to previous or current WHO guidelines. An "artificial" ageing process was introduced programmatically: individuals evolve through different health states (HIV negative, or HIV positive at different stages of the syndrome), and the ART procurement scenario determines these dynamics. The macroeconomic impact is obtained using sample weights that take into account the resulting age structure of the population in each scenario and modeling of the consequences for total growth of the economy. Increased levels of ART coverage result in decreasing HIV incidence and related mortality. Universal access to ART has a positive impact on workers' productivity; the evaluations performed for Swaziland and Cameroon show that universal access would imply net cost savings at the scale of the society when the full macroeconomic consequences are introduced in the calculations. In Tanzania, ART access programs imply a net cost for the economy, but 70% of the costs are covered by GDP gains at the 2034 horizon, even in the extended coverage option promoted by WHO guidelines initiating ART at CD4 cell counts of 350 cells/mm3. Universal access ART scaling-up strategies, which are more costly in the short term, remain the best economic choice in the long term. Renouncing or significantly delaying the achievement of this goal due to "legitimate" short-term budgetary constraints would be a misguided choice.

  4. [Development of child mental health services in Lithuania: achievements and obstacles].

    PubMed

    Pūras, Dainius

    2002-01-01

    In 1990, political, economic and social changes in Lithuania introduced the possibility to develop, for the first time in the nation's history, an effective and modern system of child mental health services. During the period between 1990 and 1995, a new model of services was developed in the Department of Social Pediatrics and Child Psychiatry of Vilnius University. The model included the development of child and adolescent psychiatric services, as well as early intervention services for infants and preschool children with developmental disabilities. The emphasis, following recommendations of WHO and existing international standards, was on deinstitutionalization and the development of family-oriented and community-based services, which had been ignored by the previous system. In the first half of the 1990s, new training programs for professionals were introduced, more than 50 methods of assessment, treatment and rehabilitation that were new to Lithuanian clinical practice were implemented, and a new model of services, covering primary, secondary and tertiary levels of prevention, was introduced in demonstration sites. However, during the next phase of development, in 1997-2001, serious obstacles to replicating the new approaches across the country were identified, which threatened successful implementation of the new model of services in everyday clinical practice. An analysis of the obstacles blocking the development of new approaches in the field of child mental health is presented in the article. The main obstacles, identified during analysis of the socioeconomic context, planning and utilization of resources, running of the system of services and evaluation of outcomes, are as follows: lack of intersectoral cooperation between the health, education and social welfare systems; a strong tradition of discriminating against psychosocial interventions in funding schemes for health services; societal attitudes that tend to discriminate against and stigmatize marginal groups, including disabled children and dysfunctional families; and a lack of evidence-based decision making in the process of health care reform and reform of the social infrastructure.

  5. Predicting invasiveness of species in trade: Climate match, trophic guild and fecundity influence establishment and impact of non-native freshwater fishes

    USGS Publications Warehouse

    Howeth, Jennifer G.; Gantz, Crysta A.; Angermeier, Paul; Frimpong, Emmanuel A.; Hoff, Michael H.; Keller, Reuben P.; Mandrak, Nicholas E.; Marchetti, Michael P.; Olden, Julian D.; Romagosa, Christina M.; Lodge, David M.

    2016-01-01

    Aim: Impacts of non-native species have motivated development of risk assessment tools for identifying introduced species likely to become invasive. Here, we develop trait-based models for the establishment and impact stages of freshwater fish invasion, and use them to screen non-native species common in international trade. We also determine which species in the aquarium, biological supply, live bait, live food and water garden trades are likely to become invasive. Results are compared to historical patterns of non-native fish establishment to assess the relative importance over time of pathways in causing invasions. Location: Laurentian Great Lakes region. Methods: Trait-based classification trees for the establishment and impact stages of invasion were developed from data on freshwater fish species that established or failed to establish in the Great Lakes. Fishes in trade were determined from import data from Canadian and United States regulatory agencies, assigned to specific trades and screened through the developed models. Results: Climate match between a species’ native range and the Great Lakes region predicted establishment success with 75–81% accuracy. Trophic guild and fecundity predicted potential harmful impacts of established non-native fishes with 75–83% accuracy. Screening outcomes suggest the water garden trade poses the greatest risk of introducing new invasive species, followed by the live food and aquarium trades. Analysis of historical patterns of introduction pathways demonstrates the increasing importance of these trades relative to other pathways. Comparisons among trades reveal that model predictions parallel historical patterns; all fishes previously introduced from the water garden trade have established. The live bait, biological supply, aquarium and live food trades have also contributed established non-native fishes. Main conclusions: Our models predict invasion risk of potential fish invaders to the Great Lakes region and could help managers prioritize efforts among species and pathways to minimize such risk. Similar approaches could be applied to other taxonomic groups and geographic regions.
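
    A hypothetical sketch of the trait-based screening idea using a generic classification tree: traits such as climate match, fecundity and trophic guild are used to predict establishment. Feature names, data and the toy ground truth are invented for illustration and do not reproduce the study's fitted trees.

```python
# Hypothetical trait-based screening sketch: fit a classification tree on
# invented species traits and predict establishment for a candidate species.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 120
climate_match = rng.uniform(0, 1, n)        # similarity of native-range climate (made up)
fecundity = rng.lognormal(8, 1, n)          # eggs per year (made up)
trophic_guild = rng.integers(0, 3, n)       # 0 = herbivore, 1 = omnivore, 2 = piscivore
X = np.column_stack([climate_match, np.log(fecundity), trophic_guild])
established = (climate_match > 0.5).astype(int)   # toy ground truth, not real data

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, established)
candidate = [[0.8, 9.0, 2]]                 # a hypothetical species in trade
print("predicted establishment:", tree.predict(candidate)[0])
```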

  6. Measurements of electrostatic double layer potentials with atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Giamberardino, Jason

    The aim of this thesis is to provide a thorough description of the development of theory and experiment pertaining to the electrostatic double layer (EDL) in aqueous electrolytic systems. The EDL is an important physical element of many systems and its behavior has been of interest to scientists for many decades. Because many areas of science and engineering move to test, build, and understand systems at smaller and smaller scales, this work focuses on nanoscopic experimental investigations of the EDL. In that vein, atomic force microscopy (AFM) will be introduced and discussed as a tool for making high spatial resolution measurements of the solid-liquid interface, culminating in a description of the development of a method for completely characterizing the EDL. This thesis first explores, in a semi-historical fashion, the development of the various models and theories that are used to describe the electrostatic double layer. Later, various experimental techniques and ideas are addressed as ways to make measurements of interesting characteristics of the EDL. Finally, a newly developed approach to measuring the EDL system with AFM is introduced. This approach relies on both implementation of existing theoretical models with slight modifications as well as a unique experimental measurement scheme. The model proposed clears up previous ambiguities in definitions of various parameters pertaining to measurements of the EDL and also can be used to fully characterize the system in a way not yet demonstrated.

  7. Multiensemble Markov models of molecular thermodynamics and kinetics.

    PubMed

    Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank

    2016-06-07

    We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models (clustering of high-dimensional spaces and modeling of complex many-state systems) with those of the multistate Bennett acceptance ratio, namely exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models, are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.

  8. A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion

    PubMed Central

    Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai

    2015-01-01

    To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
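
    A generic sketch of weighted decision-level fusion in the Dempster-Shafer style over the simple frame {fatigue, alert}: each source's basic probability assignment is discounted by a reliability weight (the "dynamic" part) before Dempster's rule combines them. This illustrates the fusion idea only, not the paper's exact BPA correction strategy.

```python
# Generic Dempster-Shafer combination over the frame {fatigue, alert, either},
# with per-source weights applied by discounting mass toward ignorance.
def discount(m, weight):
    """Scale a BPA by a reliability weight, moving the rest to 'either'."""
    d = {k: weight * v for k, v in m.items()}
    d["either"] = d.get("either", 0.0) + (1.0 - weight)
    return d

def combine(m1, m2):
    """Dempster's rule for the simple frame {fatigue, alert}."""
    def meet(a, b):
        if a == "either":
            return b
        if b == "either" or a == b:
            return a
        return None                      # conflicting singletons
    fused, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += va * vb
            else:
                fused[c] = fused.get(c, 0.0) + va * vb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

eye_source = discount({"fatigue": 0.7, "alert": 0.3}, weight=0.9)   # invented readings
lane_source = discount({"fatigue": 0.4, "alert": 0.6}, weight=0.5)
print(combine(eye_source, lane_source))
```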

  9. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    PubMed

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been modeled successfully using inter-occasion variability (IOV) as well as stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain of introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  10. Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, μ, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The μ margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  11. Will the "Fixes" Fall Flat? Prospects for Quality Measures and Payment Incentives to Control Healthcare Spending.

    PubMed

    Hauswald, Erik; Sklar, David

    2017-04-01

    Payment systems in the US healthcare system have rewarded physicians for services and attempted to control healthcare spending, with rewards and penalties based upon projected goals for future spending. The incorporation of quality goals and alternatives to fee-for-service was introduced to replace the previous system of rewards and penalties. We describe the history of the US healthcare payment system, focusing on Medicare and the efforts to control spending through the Sustainable Growth Rate. We describe the latest evolution of the payment system, which emphasizes quality measurement and alternative payment models. We conclude with suggestions for how to influence physician behavior through education and payment reform so that their behavior aligns with alternative care models to control spending in the future.

  12. Introducing the Met Office 2.2-km Europe-wide convection-permitting regional climate simulations

    NASA Astrophysics Data System (ADS)

    Kendon, Elizabeth J.; Chan, Steven C.; Berthou, Segolene; Fosser, Giorgia; Roberts, Malcolm J.; Fowler, Hayley J.

    2017-04-01

    The Met Office is currently conducting Europe-wide 2.2-km convection-permitting model (CPM) simulations driven by ERA-Interim reanalysis and present/future-climate GCM simulations. Here, we present the preliminary results of these new European simulations examining daily and sub-daily precipitation outputs in comparison with observations across Europe, 12-km European and 1.5-km UK climate model simulations. As the simulations are not yet complete, we focus on diagnostics that are relatively robust with a limited amount of data; for instance, the diurnal cycle and the probability distribution of daily and sub-daily precipitation intensities. We will also present specific case studies that showcase the benefits of using continental-scale CPM simulations over previously-available small-domain CPM simulations.

  13. Towards parameter-free classification of sound effects in movies

    NASA Astrophysics Data System (ADS)

    Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.

    2005-08-01

    The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation with detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gun shots. In this work, we utilize low-level audio features including MFCC and energy to identify sound effects. It was shown in previous work that the Hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires a careful choice in designing the model and choosing correct parameters. In this work, we introduce a framework that will avoid such necessity and works well with semi- and non-parametric learning algorithms.

  14. Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.
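
    For illustration, the continued-fraction form commonly used for elevation-dependent mapping functions is sketched below; the coefficients are placeholders, not the values fitted in the paper.

```python
# Generic continued-fraction mapping function m(e) relating the zenith delay to
# the delay at elevation e.  The coefficients are placeholders for illustration.
import numpy as np

def mapping_function(elev_deg, a=0.00127, b=0.0032, c=0.062):
    e = np.radians(elev_deg)
    return 1.0 / (np.sin(e) + a / (np.tan(e) + b / (np.sin(e) + c)))

for elev in (90, 30, 10, 5):
    print(elev, "deg ->", round(mapping_function(elev), 2))
```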

  15. Collective charge excitations and the metal-insulator transition in the square lattice Hubbard-Coulomb model

    DOE PAGES

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    2017-11-09

    In this article, we discuss the nontrivial collective charge excitations (plasmons) of the extended square lattice Hubbard model. Using a fully nonperturbative approach, we employ the hybrid Monte Carlo algorithm to simulate the system at half-filling. A modified Backus-Gilbert method is introduced to obtain the spectral functions via numerical analytic continuation. We directly compute the single-particle density of states, which demonstrates the formation of Hubbard bands in the strongly correlated phase. The momentum-resolved charge susceptibility is also computed on the basis of the Euclidean charge-density-density correlator. In agreement with previous extended dynamical mean-field theory studies, we find that, at high strength of the electron-electron interaction, the plasmon dispersion develops two branches.

  16. Collective charge excitations and the metal-insulator transition in the square lattice Hubbard-Coulomb model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    In this article, we discuss the nontrivial collective charge excitations (plasmons) of the extended square lattice Hubbard model. Using a fully nonperturbative approach, we employ the hybrid Monte Carlo algorithm to simulate the system at half-filling. A modified Backus-Gilbert method is introduced to obtain the spectral functions via numerical analytic continuation. We directly compute the single-particle density of states, which demonstrates the formation of Hubbard bands in the strongly correlated phase. The momentum-resolved charge susceptibility is also computed on the basis of the Euclidean charge-density-density correlator. In agreement with previous extended dynamical mean-field theory studies, we find that, at high strength of the electron-electron interaction, the plasmon dispersion develops two branches.

  17. Bi-cooperative games in bipolar fuzzy settings

    NASA Astrophysics Data System (ADS)

    Hazarika, Pankaj; Borkotokey, Surajit; Mesiar, Radko

    2018-01-01

    In this paper, we introduce the notion of a bi-cooperative game with bipolar fuzzy bi-coalitions and discuss the related properties. In many decision-making situations, players show bipolar motives while cooperating among themselves. This is modelled in both crisp and fuzzy environments. Bi-cooperative games with fuzzy bi-coalitions have already been proposed under the product order of bi-coalitions, where memberships lie in [0, 1]. In the present paper, we adopt the alternative ordering, ordering by monotonicity, and account for players' memberships on a bipolar scale, a break from the previous formulation. This simplifies the model to a great extent. The corresponding Shapley axioms are proposed. An explicit form of the Shapley value for a particular class of such games is also obtained. Our study is supplemented with an illustrative example.

  18. Effects of biological control agents and exotic plant invasion on deer mouse populations

    Treesearch

    Yvette K. Ortega; Dean E. Pearson; Kevin S. McKelvey

    2004-01-01

    Exotic insects are commonly introduced as biological control agents to reduce densities of invasive exotic plants. Although current biocontrol programs for weeds take precautions to minimize ecological risks, little attention is paid to the potential nontarget effects of introduced food subsidies on native consumers. Previous research demonstrated that two gall flies (...

  19. Patterns of hybridization among cutthroat trout and rainbow trout in northern Rocky Mountain streams

    Treesearch

    Kevin S. McKelvey; Michael K. Young; Taylor M. Wilcox; Daniel M. Bingham; Kristine L. Pilgrim; Michael K. Schwartz

    2016-01-01

    Introgressive hybridization between native and introduced species is a growing conservation concern. For native cutthroat trout and introduced rainbow trout in western North America, this process is thought to lead to the formation of hybrid swarms and the loss of monophyletic evolutionary lineages. Previous studies of this phenomenon, however, indicated that...

  20. Retrolife and the Pawns Neighbors

    ERIC Educational Resources Information Center

    Elran, Yossi

    2012-01-01

    One of Martin Gardner's most famous columns introduced John Conway's game of Life. The inverse problem, finding a previous generation in the Game of Life given some extra constraints, was introduced a few years ago and is referred to as Retrolife. In this paper we present a puzzle played on a chessboard that is isomorphic to a variation of…

  1. Hydrodynamical model of anisotropic, polarized turbulent superfluids. I: constraints for the fluxes

    NASA Astrophysics Data System (ADS)

    Mongiovì, Maria Stella; Restuccia, Liliana

    2018-02-01

    This work is the first of a series of papers devoted to the study of the influence of the anisotropy and polarization of the tangle of quantized vortex lines in superfluid turbulence. A previously formulated thermodynamical model of inhomogeneous superfluid turbulence is extended here to take these effects into consideration. The model chooses as the thermodynamic state vector the density, the velocity, the energy density, the heat flux, and a complete vorticity tensor field, including its symmetric traceless part and its antisymmetric part. The relations which constrain the constitutive quantities are deduced from the second principle of thermodynamics using the Liu procedure. The results show that the presence of anisotropy and polarization in the vortex tangle affects the dynamics of the heat flux in a substantial way, and allows us to give a physical interpretation of the vorticity tensor introduced here and to better describe the internal structure of a turbulent superfluid.
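
    For reference, the generic decomposition of a second-order tensor into the parts mentioned above (symmetric traceless, antisymmetric, plus an isotropic trace part) reads as follows; the notation is generic and does not reproduce the paper's symbols.

```latex
P_{ij}
  = \underbrace{\tfrac{1}{2}\bigl(P_{ij}+P_{ji}\bigr) - \tfrac{1}{3}P_{kk}\,\delta_{ij}}_{\text{symmetric traceless}}
  + \underbrace{\tfrac{1}{2}\bigl(P_{ij}-P_{ji}\bigr)}_{\text{antisymmetric}}
  + \underbrace{\tfrac{1}{3}P_{kk}\,\delta_{ij}}_{\text{isotropic trace}}
```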

  2. Corruption and economic growth with non constant labor force growth

    NASA Astrophysics Data System (ADS)

    Brianzoni, Serena; Campisi, Giovanni; Russo, Alberto

    2018-05-01

    Based on Brianzoni et al. [1], in the present work we propose an economic model of the relationship between corruption in public procurement and economic growth. We extend the benchmark model by introducing endogenous labor force growth, described by the logistic equation. The results of previous studies, such as Del Monte and Papagni [2] and Mauro [3], show that countries are stuck in one of two equilibria (high corruption and low economic growth, or low corruption and high economic growth). Brianzoni et al. [1] prove the existence of a further steady state characterized by intermediate levels of capital per capita and corruption. Our aim is to investigate the effects of the endogenous growth around such an equilibrium. Moreover, due to the high number of parameters of the model, specific attention is given to the numerical simulations, which highlight new policy measures that can be adopted by the government to fight corruption.
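
    The logistic law referred to above has the standard discrete form below; the symbols are generic, since the paper's own notation and parameter values are not reproduced here. Here r is the intrinsic growth rate of the labor force and K its carrying capacity.

```latex
L_{t+1} = L_t + r\,L_t\left(1 - \frac{L_t}{K}\right)
```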

  3. Denoising by coupled partial differential equations and extracting phase by backpropagation neural networks for electronic speckle pattern interferometry.

    PubMed

    Tang, Chen; Lu, Wenjing; Chen, Song; Zhang, Zhen; Li, Botao; Wang, Wenping; Han, Lin

    2007-10-20

    We extend and refine previous work [Appl. Opt. 46, 2907 (2007)]. Combining the coupled nonlinear partial differential equations (PDEs) denoising model with the ordinary differential equations enhancement method, we propose a new denoising and enhancing model for electronic speckle pattern interferometry (ESPI) fringe patterns. We also propose a backpropagation neural network (BPNN) method to obtain unwrapped phase values based on a skeleton map instead of traditional interpolation. We test the introduced methods on computer-simulated speckle ESPI fringe patterns and on an experimentally obtained fringe pattern. The experimental results show that the coupled nonlinear PDEs denoising model is capable of effectively removing noise, and the unwrapped phase values obtained by the BPNN method are much more accurate than those obtained by the well-known traditional interpolation. In addition, the accuracy of the BPNN method is adjustable by changing network parameters such as the number of neurons.
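
    A small sketch of the idea of learning phase values from skeleton coordinates with a backpropagation network instead of classical interpolation; the data, network size and library choice (scikit-learn) are illustrative only and are not the authors' implementation.

```python
# Illustrative regression of phase values from skeleton pixel coordinates with
# a small backpropagation network; the synthetic phase surface is made up.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, size=(500, 2))               # skeleton pixel coordinates
phase = 2 * np.pi * (xy[:, 0] + 0.5 * xy[:, 1])     # synthetic unwrapped phase

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(xy, phase)
print(net.predict([[0.25, 0.75]]))                  # phase estimate off the skeleton
```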

  4. Full Waveform Modeling of Transient Electromagnetic Response Based on Temporal Interpolation and Convolution Method

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang

    2017-07-01

    Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate the accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using the fast digital-filter-based methods. Then, we introduce a function named hamlogsinc in the framework of discrete signal processing theory to reconstruct the basis function and to make the convolution with the positive half of the waveform. Finally, a superposition procedure is used to take account of the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method can get the same accuracy with a shorter computing time.
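
    A sketch of the waveform-convolution step only: a stand-in basis response is convolved with the derivative of one half-cycle of the transmitter current, and earlier half-cycles are superposed with alternating polarity. The log-scale basis computation and the hamlogsinc reconstruction used in the paper are not reproduced here; all waveforms and constants are illustrative.

```python
# Full-waveform sketch: convolve a basis response with one half-cycle of the
# transmitter current and superpose sign-alternating earlier half-cycles.
import numpy as np

dt = 1e-5
n_per = 2000                                 # samples per half-cycle
t = np.arange(4 * n_per) * dt                # long enough to hold 3 earlier half-cycles
basis = np.exp(-t / 2e-3)                    # stand-in for the computed basis response
ramp = np.clip(np.arange(n_per) * dt / 1e-3, 0, 1)   # toy current: ramp-on, then constant
didt = np.gradient(ramp, dt)                 # derivative of one half-cycle of current

one = np.convolve(didt, basis)[: t.size] * dt        # response to a single half-cycle

# superpose three earlier half-cycles with alternating polarity
total = one[:n_per].copy()
for k in range(1, 4):
    total += (-1) ** k * one[k * n_per : k * n_per + n_per]
print(total[:5])
```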

  5. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  6. A new constitutive analysis of hexagonal close-packed metal in equal channel angular pressing by crystal plasticity finite element method

    NASA Astrophysics Data System (ADS)

    Li, Hejie; Öchsner, Andreas; Yarlagadda, Prasad K. D. V.; Xiao, Yin; Furushima, Tsuyoshi; Wei, Dongbin; Jiang, Zhengyi; Manabe, Ken-ichi

    2018-01-01

    Most hexagonal close-packed (HCP) metals are lightweight metals. With the increasing application of light metal products, the production of light metals is attracting growing attention from researchers worldwide. To obtain a better understanding of the deformation mechanism of HCP metals (especially Mg and its alloys), a new constitutive analysis was carried out based on previous research. In this study, combining the theories of strain gradient and continuum mechanics, the equal channel angular pressing process is analyzed and an HCP crystal plasticity constitutive model is developed especially for Mg and its alloys. The influence of elevated temperature on the deformation mechanisms of the Mg alloy (slip and twinning) is newly introduced into the crystal plasticity constitutive model. The solution of the newly developed constitutive model is established on the basis of Lagrangian iterations and a Newton-Raphson simplification.

  7. Ultrasonic and micromechanical study of damage and elastic properties of SiC/RBSN ceramic composites. [Reaction Bonded Silicon Nitride

    NASA Technical Reports Server (NTRS)

    Chu, Y. C.; Hefetz, M.; Rokhlin, S. I.; Baaklini, G. Y.

    1992-01-01

    Ultrasonic techniques are employed to develop methods for nondestructive evaluation of elastic properties and damage in SiC/RBSN composites. To incorporate imperfect boundary conditions between fibers and matrix into a micromechanical model, a model of fibers having effective anisotropic properties is introduced. By inverting Hashin's (1979) microstructural model for a composite material with microscopic constituents the effective fiber properties were found from ultrasonic measurements. Ultrasonic measurements indicate that damage due to thermal shock is located near the surface, so the surface wave is most appropriate for estimation of the ultimate strength reduction and critical temperature of thermal shock. It is concluded that bonding between laminates of SiC/RBSN composites is severely weakened by thermal oxidation. Generally, nondestructive evaluation of thermal oxidation effects and thermal shock shows good correlation with measurements previously performed by destructive methods.

  8. Near-equilibrium dumb-bell-shaped figures for cohesionless small bodies

    NASA Astrophysics Data System (ADS)

    Descamps, Pascal

    2016-02-01

    In a previous paper (Descamps, P. [2015]. Icarus 245, 64-79), we developed a specific method aimed at retrieving the main physical characteristics (shape, density, surface scattering properties) of highly elongated bodies from their rotational lightcurves through the use of dumb-bell-shaped equilibrium figures. The present work is a test of this method. For that purpose we introduce near-equilibrium dumb-bell-shaped figures, which are base dumb-bell equilibrium shapes modulated by lognormal statistics. Such synthetic irregular models are used to generate lightcurves to which our method is then successfully applied. Shape statistical parameters of such near-equilibrium dumb-bell-shaped objects are in good agreement with those calculated, for example, for the Asteroid (216) Kleopatra from its dog-bone radar model. This suggests that such bilobed and elongated asteroids can be approximated by equilibrium figures perturbed by the interplay with a substantial internal friction, modeled by a Gaussian random sphere.

  9. Climate collective risk dilemma with feedback of real-time temperatures

    NASA Astrophysics Data System (ADS)

    Du, Jinming; Wu, Bin; Wang, Long

    2014-09-01

    Controlling global warming through collective cooperation is a non-optional threshold public goods game. Previous models assume that the disaster is a sudden event that happens with a given probability, and they show that high risk can pave the way for reaching the cooperative target. These models, however, neglect the temperature dynamics, which are influenced by collective behaviour. Here, we establish a temperature dynamics model and introduce a feedback between human strategy updating and the temperature change: high temperature discounts individuals' payoffs, while sufficient public goods may decrease the ever-rising temperature. We investigate how the temperature is affected by human behaviour and vice versa. We find that, on the one hand, the temperature can be stabilized at a relatively safe level in the long run. On the other hand, cooperation can be promoted and maintained at a higher level, compared with public goods game models without such feedback.
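
    As a purely illustrative sketch of the feedback loop described above (not the authors' equations), the snippet below couples a replicator-style update of the cooperator fraction to a temperature variable: a rising temperature discounts payoffs and favours contributing, while cooperation cools the system. All functional forms and parameter values are assumptions.

```python
def simulate(steps=500, x0=0.3, T0=1.0, dt=0.05,
             warming=0.05, cooling=0.12, selection=1.0, discount=0.8):
    """Cooperator fraction x coupled to a temperature anomaly T (illustrative only)."""
    x, T = x0, T0
    for _ in range(steps):
        payoff_gap = discount * T - 0.5                  # cooperating pays off more when T is high
        x += dt * selection * x * (1 - x) * payoff_gap   # replicator-style strategy update
        T += dt * (warming - cooling * x)                # public goods reduce the warming rate
        x = min(max(x, 0.0), 1.0)
    return x, T

print(simulate())   # final (x, T) after the simulated horizon
```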

  10. Computational study of Li2OHCl as a possible solid state battery material

    NASA Astrophysics Data System (ADS)

    Howard, Jason; Holzwarth, N. A. W.

    Preparations of Li2OHCl have recently been studied experimentally as solid state Li ion electrolytes. A disordered cubic phase is known to be stable at temperatures T > 35 °C. Following previous ideas, first principles supercells with up to 320 atoms are constructed to model the cubic phase. First principles molecular dynamics simulations of the cubic phase show Li ion diffusion occurring on the t = 10^-12 s time scale, at temperatures as low as T = 400 K. The structure of the lower temperature phase (T < 35 °C) is not known in detail. A reasonable model of this structure is developed by using the tetragonal ideal structure found by first principles simulations and a model Hamiltonian to account for alternative orientations of the OH groups. Supported by NSF Grant DMR-1507942. Thanks to Zachary D. Hood of GaTech and ORNL for introducing these materials to us.
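
    The quoted time scale refers to Li hops resolved in the molecular dynamics trajectories. As a generic illustration of how a diffusion coefficient is typically extracted from such trajectories via the Einstein relation MSD(t) ≈ 6Dt (not the authors' analysis code), a minimal sketch with a toy random-walk trajectory:

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Estimate D from the mean-squared displacement: MSD(t) ~ 6 D t in 3-D."""
    disp = positions - positions[0]                 # displacement from the initial frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)      # average over atoms
    t = np.arange(len(positions)) * dt
    slope = np.polyfit(t[1:], msd[1:], 1)[0]        # linear fit, skipping t = 0
    return slope / 6.0

# Toy trajectory: 50 frames, 10 ions, 3-D coordinates; units are illustrative.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0, 0.1, size=(50, 10, 3)), axis=0)
print(diffusion_coefficient(traj, dt=1e-3))
```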

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casana, Rodolfo, E-mail: rodolfo.casana@gmail.com; Ferreira, Manoel M., E-mail: manojr.ufma@gmail.com; Mota, Alexsandro Lucena, E-mail: lucenalexster@gmail.com

    We have studied the existence of topological self-dual configurations in a nonminimal CPT-odd and Lorentz-violating (LV) Maxwell–Higgs model, where the LV interaction is introduced by modifying the minimal covariant derivative. The Bogomol’nyi–Prasad–Sommerfield formalism has been implemented, revealing that the scalar self-interaction implying self-dual equations contains a derivative coupling. The CPT-odd self-dual equations describe electrically neutral configurations with finite total energy proportional to the total magnetic flux, which differ from the charged solutions of other CPT-odd and LV models previously studied. In particular, we have investigated the axially symmetric self-dual vortex solutions altered by the LV parameter. At large distances, the profiles possess general behavior similar to the vortices of Abrikosov–Nielsen–Olesen. However, within the vortex core, the profiles of the magnetic field and energy can differ substantially from those of the Maxwell–Higgs model, depending on whether the LV parameter is negative or positive.

  12. An integer programming approach to a real-world recyclable waste collection problem in Argentina.

    PubMed

    Braier, Gustavo; Durán, Guillermo; Marenco, Javier; Wesner, Francisco

    2017-05-01

    This article reports on the use of mathematical programming techniques to optimise the routes of a recyclable waste collection system servicing Morón, a large municipality outside Buenos Aires, Argentina. The truck routing problem posed by the system is a particular case of the generalised directed open rural postman problem. An integer programming model is developed with a solving procedure built around a subtour-merging algorithm and the addition of subtour elimination constraints. The route solutions generated by the proposed methodology perform significantly better than the previously used, manually designed routes, the main improvement being that coverage of blocks within the municipality with the model solutions is 100% by construction, whereas with the manual routes as much as 16% of the blocks went unserviced. The model-generated routes were adopted by the municipality in 2014 and the national government is planning to introduce the methodology elsewhere in the country.
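
    As an illustration of the kind of subtour-elimination loop mentioned above (not the authors' code), the helper below takes the arcs selected by a current integer solution and returns the disjoint components; in a cutting-plane loop, each component smaller than the full node set would receive a subtour elimination constraint before re-solving.

```python
def find_subtours(arcs):
    """Split a set of directed arcs (u, v) into the node sets of disjoint components."""
    neighbours = {}
    for u, v in arcs:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)   # treat as undirected for connectivity
    unvisited, components = set(neighbours), []
    while unvisited:
        stack, comp = [unvisited.pop()], set()
        while stack:
            node = stack.pop()
            comp.add(node)
            for nxt in neighbours[node]:
                if nxt in unvisited:
                    unvisited.remove(nxt)
                    stack.append(nxt)
        components.append(comp)
    return components

# Two disjoint subtours -> two components; each would get an elimination constraint.
print(find_subtours([(1, 2), (2, 3), (3, 1), (4, 5), (5, 4)]))
```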

  13. Calibration of semi-stochastic procedure for simulating high-frequency ground motions

    USGS Publications Warehouse

    Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert

    2013-01-01

    Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw < 7 to zero for Mw 8. Ground motions simulated with the updated parameterization exhibit significantly reduced distance attenuation bias and revised dispersion terms are more compatible with those from empirical models but remain lower at large distances (e.g., > 100 km).
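
    A minimal sketch of the revised parameterization described above, under the assumption that the lognormal site-to-site standard deviation tapers linearly from 0.45 below Mw 7 to zero at Mw 8; the taper shape, logarithm base, and units are assumptions for illustration only, not the published model.

```python
import numpy as np

def site_sigma(mw):
    """Site-to-site lognormal std. dev.: 0.45 below Mw 7, zero at Mw 8 (linear taper assumed)."""
    return 0.45 * np.clip(8.0 - mw, 0.0, 1.0)

def apply_site_variation(fas, mw, rng):
    """Multiply a Fourier amplitude spectrum by a single random site-to-site factor."""
    eps = rng.normal(0.0, site_sigma(mw))     # one draw per site, not per frequency
    return fas * np.exp(eps)

rng = np.random.default_rng(0)
fas = np.ones(5)                              # placeholder Fourier amplitude spectrum
print(apply_site_variation(fas, mw=6.5, rng=rng))
```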

  14. A fault isolation method based on the incidence matrix of an augmented system

    NASA Astrophysics Data System (ADS)

    Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong

    2018-03-01

    In this paper, a new approach is proposed for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are provided in this paper. From the viewpoint of evaluating the fault variables, the calculation dependencies among the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults that can be isolated using exclusive residuals and, at the same time, identify the redundant sensors in the system, which is useful for the design of a diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.
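
    A minimal sketch of the state augmentation described above (the matrices and the constant-fault assumption are illustrative, not the paper's four-tank example): fault signals are appended to the state vector, and the augmented dynamics and measurement matrices follow directly.

```python
import numpy as np

def augment(A, B_f, C, D_f):
    """Build an augmented system with fault signals f as extra states.

    x_{k+1} = A x_k + B_f f_k,   f_{k+1} = f_k   (constant / random-walk fault model)
    y_k     = C x_k + D_f f_k
    """
    n, m = A.shape[0], B_f.shape[1]
    A_aug = np.block([[A, B_f],
                      [np.zeros((m, n)), np.eye(m)]])
    C_aug = np.hstack([C, D_f])
    return A_aug, C_aug

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B_f = np.array([[1.0], [0.0]])        # an actuator fault entering the first state
C = np.eye(2)
D_f = np.zeros((2, 1))
A_aug, C_aug = augment(A, B_f, C, D_f)
print(A_aug.shape, C_aug.shape)       # (3, 3) (2, 3)
```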

  15. Path-preference cellular-automaton model for traffic flow through transit points and its application to the transcription process in human cells.

    PubMed

    Ohta, Yoshihiro; Nishiyama, Akinobu; Wada, Yoichiro; Ruan, Yijun; Kodama, Tatsuhiko; Tsuboi, Takashi; Tokihiro, Tetsuji; Ihara, Sigeo

    2012-08-01

    We all use path routing every day as we take shortcuts to avoid traffic jams or use faster means of transport. Previous models of the traffic flow of RNA polymerase II (RNAPII) during transcription, however, were restricted to one dimension along the DNA template. Here we report the modeling and application of traffic flow in transcription that allows preferential paths of different dimensions, restricted only to visiting some transit points, as previously introduced, between the 5' and 3' ends of the gene. According to its position, an RNAPII molecule prefers paths obeying two types of time-evolution rules. One is an asymmetric simple exclusion process (ASEP) along DNA, and the other is a three-dimensional jump between transit points in DNA where RNAPIIs are staying. Simulations based on our model, and comparison with experimental results, reveal how RNAPII molecules are distributed at the DNA-loop-formation-related protein binding sites as well as at CTCF insulator proteins (or exons). As time passes after stimulation, the RNAPII density at these sites becomes higher. Apparent far-distance jumps in one dimension are realized by short-range three-dimensional jumps between DNA loops. We confirm this conjecture by applying our model calculation to the SAMD4A gene and comparing with the experimental results. Our probabilistic model provides possible scenarios for the assembly of RNAPII molecules into transcription factories, where RNAPII and related proteins cooperatively transcribe DNA.
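
    As an illustration of the one-dimensional component named above (an asymmetric simple exclusion process along the template), a minimal sketch in which a particle hops to the neighbouring site only if that site is empty; lattice size, occupancy, and hop probability are arbitrary, and the three-dimensional jumps between transit points are omitted.

```python
import numpy as np

def asep_step(lattice, p_hop, rng):
    """One sweep of a (totally) asymmetric exclusion update: hop right if the target is empty."""
    new = lattice.copy()
    for i in rng.permutation(len(lattice) - 1):    # random sequential update
        if new[i] == 1 and new[i + 1] == 0 and rng.random() < p_hop:
            new[i], new[i + 1] = 0, 1
    return new

rng = np.random.default_rng(0)
lattice = (rng.random(50) < 0.3).astype(int)       # ~30% initial occupancy
for _ in range(100):
    lattice = asep_step(lattice, p_hop=0.8, rng=rng)
print(lattice)                                     # occupancy pattern after 100 sweeps
```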

  16. Student responses to the introduction of case-based learning and practical activities into a theoretical obstetrics and gynaecology teaching programme

    PubMed Central

    Massonetto, Júlio Cesar; Marcellini, Cláudio; Assis, Paulo Sérgio Ribeiro; de Toledo, Sérgio Floriano

    2004-01-01

    Background The fourth-year Obstetrics and Gynaecology course at our institution had previously been taught using theory classes alone. A new teaching model was introduced to provide a better link with professional practice. We wished to evaluate the impact of the introduction of case discussions and other practical activities upon students' perceptions of the learning process. Methods Small-group discussions of cases and practical activities were introduced for the teaching of a fourth-year class in 2003 (Group II; 113 students). Comparisons were made with the fourth-year class of 2002 (Group I; 108 students), from before the new programme was introduced. Students were asked to rate their satisfaction with various elements of the teaching programme. Statistical differences in their ratings were analysed using the chi-square and Bonferroni tests. Results Group II gave higher ratings to the clarity of theory classes and lecturers' teaching abilities (p < 0.05) and lecturers' punctuality (p < 0.001) than did Group I. Group II had greater belief that the knowledge assessment tests were useful (p < 0.001) and that their understanding of the subject was good (p < 0.001) than did Group I. Group II gave a higher overall rating to the course (p < 0.05) than did Group I. However, there was no difference in the groups' assessments of the use made of the timetabled hours available for the subject or lecturers' concern for students' learning. Conclusions Students were very receptive to the new teaching model. PMID:15569385

  17. Accurate model annotation of a near-atomic resolution cryo-EM map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  18. Near-optimal experimental design for model selection in systems biology.

    PubMed

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M

    2013-10-15

    Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).

  19. Accurate model annotation of a near-atomic resolution cryo-EM map.

    PubMed

    Hryc, Corey F; Chen, Dong-Hua; Afonine, Pavel V; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D; King, Jonathan A; Schmid, Michael F; Chiu, Wah

    2017-03-21

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  20. Accurate model annotation of a near-atomic resolution cryo-EM map

    PubMed Central

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D.; King, Jonathan A.; Schmid, Michael F.; Chiu, Wah

    2017-01-01

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages. PMID:28270620

  1. Accurate model annotation of a near-atomic resolution cryo-EM map

    DOE PAGES

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.; ...

    2017-03-07

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  2. Multiensemble Markov models of molecular thermodynamics and kinetics

    PubMed Central

    Wu, Hao; Paul, Fabian; Noé, Frank

    2016-01-01

    We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302

  3. n-D shape/texture optimal synthetic description and modeling by GEOGINE

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco F.

    2004-12-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for multidimensional shape/texture optimal synthetic description and learning, is presented. Robust characterization of elementary geometric shapes subjected to geometric transformation, on a rigorous mathematical level, is a key problem in many computer applications in different areas of interest. The past four decades have seen solutions based almost entirely on the use of n-Dimensional Moment and Fourier descriptor invariants. The present paper introduces a new approach to automatic model generation based on n-Dimensional Tensor Invariants as a formal dictionary. An ontological model is the kernel used for specifying ontologies, so how close an ontology can be to the real world depends on the possibilities offered by the ontological model. With this approach, even chromatic information content can be easily and reliably decoupled from target geometric information and computed into robust colour shape parameter attributes. The main operational advantages of GEOGINE over previous approaches are: 1) automated model generation, 2) an invariant minimal complete set for computational efficiency, and 3) arbitrary model precision for robust object description.

  4. Segment-based acoustic models for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.
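
    The rescoring step mentioned above can be pictured with a minimal sketch (not the project's code): each HMM-generated N-best hypothesis is re-ranked by combining a new acoustic score with a weighted language-model score and a word insertion penalty. The scoring functions and weights below are placeholders.

```python
def rescore_nbest(hypotheses, acoustic_score, lm_score, lm_weight=8.0, wip=-0.5):
    """Re-rank N-best hypotheses: total = acoustic + lm_weight * LM + insertion penalty."""
    scored = []
    for words in hypotheses:
        total = (acoustic_score(words)
                 + lm_weight * lm_score(words)
                 + wip * len(words))
        scored.append((total, words))
    return max(scored)[1]       # hypothesis with the best combined score

# Toy scoring functions standing in for the segment-based acoustic model and the LM.
acoustic = lambda w: -2.0 * len(w)
lm = lambda w: -1.0 if w[-1] == "journal" else -3.0
print(rescore_nbest([["wall", "street"], ["wall", "street", "journal"]], acoustic, lm))
```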

  5. A Microstructurally Inspired Damage Model for Early Venous Thrombus

    PubMed Central

    Rausch, Manuel K.; Humphrey, Jay D.

    2015-01-01

    Accumulative damage may be an important contributor to many cases of thrombotic disease progression. Thus, a complete understanding of the pathological role of thrombus requires an understanding of its mechanics and in particular mechanical consequences of damage. In the current study, we introduce a novel microstructurally inspired constitutive model for thrombus that considers a non-uniform distribution of microstructural fibers at various crimp levels and employs one of the distribution parameters to incorporate stretch-driven damage on the microscopic level. To demonstrate its ability to represent the mechanical behavior of thrombus, including a recently reported Mullins type damage phenomenon, we fit our model to uniaxial tensile test data of early venous thrombus. Our model shows an agreement with these data comparable to previous models for damage in elastomers with the added advantages of a microstructural basis and fewer model parameters. We submit that our novel approach marks another important step toward modeling the evolving mechanics of intraluminal thrombus, specifically its damage, and hope it will aid in the study of physiological and pathological thrombotic events. PMID:26523784

  6. The contribution of a central pattern generator in a reflex-based neuromuscular model

    PubMed Central

    Dzeladini, Florin; van den Kieboom, Jesse; Ijspeert, Auke

    2014-01-01

    Although the concept of central pattern generators (CPGs) controlling locomotion in vertebrates is widely accepted, the presence of specialized CPGs in human locomotion is still a matter of debate. An interesting numerical model developed in the 1990s demonstrated the important role CPGs could play in human locomotion, both in terms of stability against perturbations and in terms of speed control. Recently, a reflex-based neuro-musculo-skeletal model has been proposed, showing a level of stability to perturbations similar to the previous model, without any CPG components. Although it exhibits striking similarities with human gaits, the lack of a CPG makes the control of speed and step length in the model difficult. In this paper, we hypothesize that a CPG component offers a meaningful way of controlling the locomotion speed. After introducing the CPG component into the reflex model, and taking advantage of the resulting properties, a simple model for gait modulation is presented. The results highlight the advantages of a CPG as a feedforward component in terms of gait modulation. PMID:25018712

  7. Global distortion of GPS networks associated with satellite antenna model errors

    NASA Astrophysics Data System (ADS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-07-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ˜1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm yr^-1 level, which will impact high-precision crustal deformation studies.
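
    For orientation only (a back-of-envelope restatement of the numbers quoted above, not a result from the simulations): if the vertical effect of 6-12 mm per meter of PCO error were uniform over the network, it would be absorbed as a scale error of roughly

```latex
\frac{\Delta r}{R_\oplus} \approx \frac{6\text{--}12\ \mathrm{mm}}{6.371\times 10^{6}\ \mathrm{m}}
\approx 1\text{--}2\ \mathrm{ppb}\ \text{per metre of PCO error},
```

    whereas the abstract's point is that part of the effect is not uniform and therefore cannot be removed by a Helmert (scale) transformation, appearing instead as a distortion.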

  8. Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors

    NASA Technical Reports Server (NTRS)

    Cardellach, E.; Elosequi, P.; Davis, J. L.

    2007-01-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approximately 1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.

  9. Introducing Pathogen Reduction Technology in Poland: A Cost-Utility Analysis

    PubMed Central

    Agapova, Maria; Lachert, Elzbieta; Brojer, Ewa; Letowska, Magdalena; Grabarczyk, Piotr; Custer, Brian

    2015-01-01

    Background Mirasol® pathogen reduction technology (PRT) uses UV light and riboflavin to chemically inactivate pathogens and white blood cells in blood components. In the EU, Mirasol PRT is CE-marked for both plasma and platelet treatment. In Poland, the decision to introduce PRT treatment of the national supply of fresh frozen plasma has spurred interest in evaluating the cost-effectiveness of this strategy. Methods A decision-analytic model evaluated the incremental costs and benefits of introducing PRT to the existing blood safety protocols in Poland. Results Addition of PRT treatment of plasma to current screening in Poland is estimated to cost 2.595 million PLN per quality-adjusted life year (QALY) (610,000 EUR/QALY); treating both plasma and platelet components in addition to current safety interventions had a lower cost of 1.480 million PLN/QALY (348,000 EUR/QALY). Conclusions The results suggest that in Poland the cost per QALY of PRT is high albeit lower than found in previous economic analyses of PRT and nucleic acid testing in North America. Treating both platelets and plasma components is more cost-effective than treating plasma alone. Wide confidence intervals indicate high uncertainty; to improve the precision of the health economic evaluation of PRT, additional hemovigilance data are needed. PMID:26195929

  10. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
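
    For readers unfamiliar with the recursion at the heart of these algorithms, a minimal sketch of a standard (linear) recursive-least-squares update is given below; the paper's algorithms apply this kind of update to the filtered-x or adjoint gradients of a neural-network controller, which is not reproduced here.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive-least-squares step for a linear-in-parameters model y = w @ x.

    w: weights, P: inverse correlation matrix, x: input vector, d: desired output,
    lam: forgetting factor. Returns the updated (w, P) and the a-priori error.
    """
    Px = P @ x
    k = Px / (lam + x @ Px)                  # gain vector
    e = d - w @ x                            # a-priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    return w, P, e

# Identify a 3-tap FIR system from noisy data (a stand-in for the neural-network case).
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3, 0.1])
w, P = np.zeros(3), np.eye(3) * 1e3
for _ in range(2000):
    x = rng.normal(size=3)
    d = true_w @ x + 1e-3 * rng.normal()
    w, P, _ = rls_update(w, P, x, d)
print(np.round(w, 3))                        # converges towards true_w
```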

  11. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, there is a clear need to overcome image degradations such as random image-capture noise, motion, and quantization effects. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing these conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements in the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields provides a means of incorporating temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.

  12. Improved Modeling of Side-Chain–Base Interactions and Plasticity in Protein–DNA Interface Design

    PubMed Central

    Thyme, Summer B.; Baker, David; Bradley, Philip

    2012-01-01

    Combinatorial sequence optimization for protein design requires libraries of discrete side-chain conformations. The discreteness of these libraries is problematic, particularly for long, polar side chains, since favorable interactions can be missed. Previously, an approach to loop remodeling where protein backbone movement is directed by side-chain rotamers predicted to form interactions previously observed in native complexes (termed “motifs”) was described. Here, we show how such motif libraries can be incorporated into combinatorial sequence optimization protocols and improve native complex recapitulation. Guided by the motif rotamer searches, we made improvements to the underlying energy function, increasing recapitulation of native interactions. To further test the methods, we carried out a comprehensive experimental scan of amino acid preferences in the I-AniI protein–DNA interface and found that many positions tolerated multiple amino acids. This sequence plasticity is not observed in the computational results because of the fixed-backbone approximation of the model. We improved modeling of this diversity by introducing DNA flexibility and reducing the convergence of the simulated annealing algorithm that drives the design process. In addition to serving as a benchmark, this extensive experimental data set provides insight into the types of interactions essential to maintain the function of this potential gene therapy reagent. PMID:22426128

  13. Self-supervised ARTMAP.

    PubMed

    Amis, Gregory P; Carpenter, Gail A

    2010-03-01

    Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.eu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Improved modeling of side-chain--base interactions and plasticity in protein--DNA interface design.

    PubMed

    Thyme, Summer B; Baker, David; Bradley, Philip

    2012-06-08

    Combinatorial sequence optimization for protein design requires libraries of discrete side-chain conformations. The discreteness of these libraries is problematic, particularly for long, polar side chains, since favorable interactions can be missed. Previously, an approach to loop remodeling where protein backbone movement is directed by side-chain rotamers predicted to form interactions previously observed in native complexes (termed "motifs") was described. Here, we show how such motif libraries can be incorporated into combinatorial sequence optimization protocols and improve native complex recapitulation. Guided by the motif rotamer searches, we made improvements to the underlying energy function, increasing recapitulation of native interactions. To further test the methods, we carried out a comprehensive experimental scan of amino acid preferences in the I-AniI protein-DNA interface and found that many positions tolerated multiple amino acids. This sequence plasticity is not observed in the computational results because of the fixed-backbone approximation of the model. We improved modeling of this diversity by introducing DNA flexibility and reducing the convergence of the simulated annealing algorithm that drives the design process. In addition to serving as a benchmark, this extensive experimental data set provides insight into the types of interactions essential to maintain the function of this potential gene therapy reagent. Published by Elsevier Ltd.

  15. Particle acceleration very near an x-line in a collisionless plasma

    NASA Technical Reports Server (NTRS)

    Lyons, L. R.; Pridmore-Brown, D. C.

    1995-01-01

    In a previous paper, we applied a simplified model for particle motion in the vicinity of a magnetic X-line that had been introduced by Dungey. We used the model to quantitatively show that an electric force along an X-line can be balanced by the gyroviscous force associated with the off-diagonal elements of the pressure tensor. Distribution functions near the X-line were shown to be skewed in azimuth about the magnetic field and to include particles accelerated to very high energies. In the present paper, we apply the previous model and use the distribution functions to evaluate the energization that results from particle interactions with the X-line. We find that, in general, this interaction gives a spectrum of energized particles that can be represented by a Maxwellian distribution. A power-law, high-energy tail does not develop. The thermal energy, K, of the Maxwellian can be expressed simply in terms of the field parameters and particle mass and charge. It is independent of the thermal energy, K(sub i), of the particle distribution incident upon the region of the X-line, provided that K(sub i) is less than K. Significant energization is not found for K(sub i) is greater than K.

  16. An artificial network model for estimating the network structure underlying partially observed neuronal signals.

    PubMed

    Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun

    2014-01-01

    Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  17. Secure multiparty computation of a comparison problem.

    PubMed

    Liu, Xin; Li, Shundong; Liu, Jian; Chen, Xiubo; Xu, Gang

    2016-01-01

    Private comparison is fundamental to secure multiparty computation. In this study, we propose novel protocols to privately determine [Formula: see text], or [Formula: see text] in one execution. First, a 0-1-vector encoding method is introduced to encode a number into a vector, and the Goldwasser-Micali encryption scheme is used to compare integers privately. Then, we propose a protocol that uses a geometric method to compare rational numbers privately, and the protocol is information-theoretically secure. Using the simulation paradigm, we prove the privacy-preserving property of our protocols in the semi-honest model. The complexity analysis shows that our protocols are more efficient than previous solutions.

  18. Mathematical Description of Dendrimer Structure

    NASA Technical Reports Server (NTRS)

    Majoros, Istvan J.; Mehta, Chandan B.; Baker, James R., Jr.

    2004-01-01

    Characteristics of starburst dendrimers can be easily attributed to the multiplicity of the monomers used to synthesize them. The molecular weight, degree of polymerization, number of terminal groups and branch points for each generation of a dendrimer can be calculated using mathematical formulas incorporating these variables. Mathematical models for the calculation of degree of polymerization, molecular weight, and number of terminal groups and branching groups previously published were revised and elaborated on for poly(amidoamine) (PAMAM) dendrimers, and introduced for poly(propyleneimine) (POPAM) dendrimers and the novel POPAM-PAMAM hybrid, which we call the POMAM dendrimer. Experimental verification of the relationship between theoretical and actual structure for the PAMAM dendrimer was also established.
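
    The commonly quoted closed forms for a dendrimer built from a core of multiplicity N_c and branch units of multiplicity N_b at generation G are shown below as an illustration; the paper's own expressions may differ in convention (for example, in how generation 0 is counted), and the molecular-weight relation is only schematic.

```latex
Z_{\mathrm{terminal}} = N_c\,N_b^{\,G}, \qquad
\mathrm{DP} = N_c\,\frac{N_b^{\,G+1}-1}{N_b-1}, \qquad
M \;\approx\; M_{\mathrm{core}} + \mathrm{DP}\cdot M_{\mathrm{repeat}} + Z_{\mathrm{terminal}}\cdot M_{\mathrm{end}} .
```

    For a PAMAM dendrimer with an ethylenediamine core (N_c = 4, N_b = 2), the first expression gives 64 terminal groups at generation 4, consistent with the usual PAMAM generation count.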

  19. Performance comparison of the Prophecy (forecasting) Algorithm in FFT form for unseen feature and time-series prediction

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger; Handley, James

    2013-06-01

    We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.

  20. Characterization of autoregressive processes using entropic quantifiers

    NASA Astrophysics Data System (ADS)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous works seem to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or for probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
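
    The causal plane mentioned above is built on the Bandt-Pompe symbolization. For orientation, a minimal sketch of the standard normalized Bandt-Pompe permutation entropy (one axis of the usual causality plane, not the new amplitude quantifier introduced in the paper):

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series (0 = regular, 1 = random)."""
    n = len(x) - (order - 1) * delay
    patterns = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
    probs = np.array([c for c in patterns.values() if c > 0], dtype=float) / n
    return -(probs * np.log(probs)).sum() / log(factorial(order))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=2000)))              # white noise: close to 1
print(permutation_entropy(np.sin(np.linspace(0, 40, 2000))))   # regular signal: much lower
```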

  1. Evolution of Structure and Composition in Saturn's Rings Due to Ballistic Transport of Micrometeoroid Impact Ejecta

    NASA Astrophysics Data System (ADS)

    Estrada, P. R.; Durisen, R. H.; Cuzzi, J. N.

    2014-04-01

    We introduce improved numerical techniques for simulating the structural and compositional evolution of planetary rings due to micrometeoroid bombardment and subsequent ballistic transport of impact ejecta. Our current, robust code, which is based on the original structural code of [1] and on the pollution transport code of [3], is capable of modeling structural changes and pollution transport simultaneously over long times on both local and global scales. We provide demonstrative simulations to compare with, and extend upon previous work, as well as examples of how ballistic transport can maintain the observed structure in Saturn's rings using available Cassini occultation optical depth data.

  2. STS-121/Discovery: Imagery Quick-Look Briefing

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Kyle Herring (NASA Public Affairs) introduced Wayne Hale (Space Shuttle Program Manager) who stated that the imagery for the Space shuttle external tank showed the tank performed very well. Image analysis showed small pieces of foam falling off the rocket booster and external tank. There was no risk involved in these minor incidents. Statistical models were built to assist in risk analysis. The orbiter performed excellently. Wayne also provided some close-up pictures of small pieces of foam separating from the external tank during launching. He said the crew will also perform a 100% inspection of the heat shield. This flight showed great improvement over previous flights.

  3. Zone plate method for electronic holographic display using resolution redistribution technique.

    PubMed

    Takaki, Yasuhiro; Nakamura, Junya

    2011-07-18

    The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of electronic holographic display. The present study developed a zone plate method that would reduce hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, while the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced for further reduction in computation time. Experimental verification using a holographic display module based on the RR technique is presented.
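
    The zone plate method replaces Fourier transforms with additions of point-source zone plates on the SLM plane. A minimal, hypothetical sketch of that superposition under the paraxial (Fresnel) approximation is shown below; the pixel pitch, wavelength, and object points are arbitrary, and the resolution-redistribution and look-up-table details are omitted.

```python
import numpy as np

def zone_plate(nx, ny, pitch, wavelength, x0, y0, z0, amp=1.0):
    """Complex Fresnel zone-plate contribution of one object point at (x0, y0, z0)."""
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    return amp * np.exp(1j * np.pi * r2 / (wavelength * z0))   # paraxial Fresnel phase

# Build the hologram as a sum of zone plates for a few object points.
points = [(0.0, 0.0, 0.05), (1e-4, -2e-4, 0.06)]               # (x, y, depth) in metres
field = sum(zone_plate(512, 512, 8e-6, 532e-9, *p) for p in points)
hologram = np.real(field)          # amplitude pattern to be displayed on the SLM
print(hologram.shape)
```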

  4. Criticality of Adaptive Control Dynamics

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Pawelzik, Klaus

    2011-12-01

    We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions in between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.

  5. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.

    PubMed

    Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo

    2017-09-21

    Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. We further introduce an adaptive "learning rate" that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.

  6. Automated crack detection in conductive smart-concrete structures using a resistor mesh model

    NASA Astrophysics Data System (ADS)

    Downey, Austin; D'Alessandro, Antonella; Ubertini, Filippo; Laflamme, Simon

    2018-03-01

    Various nondestructive evaluation techniques are currently used to automatically detect and monitor cracks in concrete infrastructure. However, these methods often lack scalability and cost-effectiveness over large geometries. A solution is the use of self-sensing carbon-doped cementitious materials. These self-sensing materials are capable of providing a measurable change in electrical output that can be related to their damage state. Previous work by the authors showed that a resistor mesh model could be used to track damage in structural components fabricated from electrically conductive concrete, where damage was located through the identification of high-resistance resistors in the mesh. In this work, an automated damage detection strategy is introduced that places high-value resistors into the previously developed resistor mesh model using a sequential Monte Carlo method; the high-value resistors mimic the internal condition of damaged cementitious specimens. The proposed automated damage detection method is experimentally validated using a 500 × 500 × 50 mm3 reinforced cement paste plate doped with multi-walled carbon nanotubes exposed to 100 identical impact tests. Results demonstrate that the proposed Monte Carlo method is capable of detecting and localizing the most prominent damage in a structure, demonstrating that automated damage detection in smart-concrete structures is a promising strategy for real-time structural health monitoring of civil infrastructure.
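
    A minimal sketch of the forward problem behind the resistor-mesh idea (illustrative only; the sequential Monte Carlo placement of high-value resistors is not reproduced): the plate is modeled as a grid of resistors, node potentials follow from solving the conductance (Laplacian) system for a given current injection, and damage is mimicked by dropping the conductance of selected links.

```python
import numpy as np

def solve_mesh(nx, ny, conductance, damaged, src, sink, g_damaged=1e-6):
    """Solve node potentials of an nx-by-ny resistor grid with unit current injection."""
    n = nx * ny
    idx = lambda i, j: i * ny + j
    G = np.zeros((n, n))
    for i in range(nx):
        for j in range(ny):
            for di, dj in ((0, 1), (1, 0)):              # right and down neighbours
                if i + di < nx and j + dj < ny:
                    a, b = idx(i, j), idx(i + di, j + dj)
                    g = g_damaged if (a, b) in damaged else conductance
                    G[a, a] += g; G[b, b] += g
                    G[a, b] -= g; G[b, a] -= g
    I = np.zeros(n); I[src], I[sink] = 1.0, -1.0          # 1 A in at src, out at sink
    G[sink, :] = 0.0; G[sink, sink] = 1.0; I[sink] = 0.0  # ground the sink node
    return np.linalg.solve(G, I).reshape(nx, ny)

# One "damaged" link between nodes 6 and 7 raises the local potential drop near it.
V = solve_mesh(5, 5, conductance=1.0, damaged={(6, 7)}, src=0, sink=24)
print(np.round(V, 3))
```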

  7. The Binary Offset Effect in CCDs: an Anomalous Readout Artifact Affecting Most Astronomical CCDs in Use

    NASA Astrophysics Data System (ADS)

    Boone, Kyle Robert; Aldering, Gregory; Copin, Yannick; Dixon, Samantha; Domagalski, Rachel; Gangler, Emmanuel; Pecontal, Emmanuel; Perlmutter, Saul; Nearby Supernova Factory Collaboration

    2018-01-01

    We discovered an anomalous behavior of CCD readout electronics that affects their use in many astronomical applications, which we call the “binary offset effect”. Due to feedback in the readout electronics, an offset is introduced in the values read out for each pixel that depends on the binary encoding of the previously read-out pixel values. One consequence of this effect is that a pathological local background offset can be introduced in images that only appears where science data are present on the CCD. The amplitude of this introduced offset does not scale monotonically with the amplitude of the objects in the image, and can be up to 4.5 ADU per pixel for certain instruments. Additionally, this background offset will be shifted by several pixels from the science data, potentially distorting the shape of objects in the image. We tested 22 instruments for signs of the binary offset effect and found evidence of it in 16 of them, including LRIS and DEIMOS on the Keck telescopes, WFC3-UVIS and STIS on HST, MegaCam on CFHT, SNIFS on the UH88 telescope, GMOS on the Gemini telescopes, HSC on Subaru, and FORS on VLT. A large amount of archival data is therefore affected by the binary offset effect, and conventional methods of reducing CCD images do not measure or remove the introduced offsets. As a demonstration of how to correct for the binary offset effect, we have developed a model that can accurately predict and remove the introduced offsets for the SNIFS instrument on the UH88 telescope. Accounting for the binary offset effect is essential for precision low-count astronomical observations with CCDs.

  8. Atwood's machine as a tool to introduce variable mass systems

    NASA Astrophysics Data System (ADS)

    de Sousa, Célia A.

    2012-03-01

    This article discusses an instructional strategy which explores eventual similarities and/or analogies between familiar problems and more sophisticated systems. In this context, the Atwood's machine problem is used to introduce students to more complex problems involving ropes and chains. The methodology proposed helps students to develop the ability needed to apply relevant concepts in situations not previously encountered. The pedagogical advantages are relevant for both secondary and high school students, showing that, through adequate examples, the question of the validity of Newton's second law may even be introduced to introductory level students.
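
    For reference, the fixed-mass starting point and the generic variable-mass relation toward which the analogy leads are sketched below; the second equation (with u the velocity of the gained or lost material relative to the body) is quoted as the standard textbook expression, not as the article's derivation.

```latex
% Ideal Atwood's machine (massless, frictionless pulley and string):
a = \frac{(m_1 - m_2)\,g}{m_1 + m_2}, \qquad T = \frac{2\,m_1 m_2\,g}{m_1 + m_2}.

% Variable-mass (rope/chain) generalization:
m(t)\,\frac{\mathrm{d}v}{\mathrm{d}t} = F_{\mathrm{ext}} + u\,\frac{\mathrm{d}m}{\mathrm{d}t}.
```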

  9. A Variation on the Use of Interactive Anonymous Quizzes in the Chemistry Classroom

    ERIC Educational Resources Information Center

    Wagner, Brian D.

    2009-01-01

    This article describes an interesting variation on the use of interactive anonymous quizzes (IAQs) in the chemistry classroom. In this variation, IAQs are used to introduce new material or topics in a course, as opposed to their traditional use for reviewing previously covered material. Two examples of IAQs used to introduce new topics in a…

  10. Data model, dictionaries, and desiderata for biomolecular simulation data indexing and sharing

    PubMed Central

    2014-01-01

    Background: Few environments have been developed or deployed to widely share biomolecular simulation data or to enable collaborative networks to facilitate data exploration and reuse. As the amount and complexity of data generated by these simulations is dramatically increasing and the methods are being more widely applied, the need for new tools to manage and share this data has become obvious. In this paper we present the results of a process aimed at assessing the needs of the community for data representation standards to guide the implementation of future repositories for biomolecular simulations. Results: We introduce a list of common data elements, inspired by previous work, and updated according to feedback from the community collected through a survey and personal interviews. These data elements integrate the concepts for multiple types of computational methods, including quantum chemistry and molecular dynamics. The identified core data elements were organized into a logical model to guide the design of new databases and application programming interfaces. Finally, a set of dictionaries was implemented to be used via SQL queries or locally via a Java API built upon the Apache Lucene text-search engine. Conclusions: The model and its associated dictionaries provide a simple yet rich representation of the concepts related to biomolecular simulations, which should guide future developments of repositories and more complex terminologies and ontologies. The model still remains extensible through the decomposition of virtual experiments into tasks and parameter sets, and via the use of extended attributes. The benefits of a common logical model for biomolecular simulations were illustrated through various use cases, including data storage, indexing, and presentation. All the models and dictionaries introduced in this paper are available for download at http://ibiomes.chpc.utah.edu/mediawiki/index.php/Downloads. PMID:24484917

  11. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library, designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208 processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Saint Verlat scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the previously described 2D model for simulating the dynamics of parallel fault systems to the Finite-Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single and multi-fault simulation examples are presented.

  12. The life of a meander bend: Connecting shape and dynamics via analysis of a numerical model

    NASA Astrophysics Data System (ADS)

    Schwenk, Jon; Lanzoni, Stefano; Foufoula-Georgiou, Efi

    2015-04-01

    Analysis of bend-scale meandering river dynamics is a problem of theoretical and practical interest. This work introduces a method for extracting and analyzing the history of individual meander bends from inception until cutoff (called "atoms") by tracking backward through time the set of two cutoff nodes in numerical meander migration models. Application of this method to a simplified yet physically based model provides access to previously unavailable bend-scale meander dynamics over long times and at high temporal resolutions. We find that before cutoffs, the intrinsic model dynamics invariably simulate a prototypical cutoff atom shape we dub simple. Once perturbations from cutoffs occur, two other archetypal cutoff planform shapes emerge called long and round that are distinguished by a stretching along their long and perpendicular axes, respectively. Three measures of meander migration—growth rate, average migration rate, and centroid migration rate—are introduced to capture the dynamic lives of individual bends and reveal that similar cutoff atom geometries share similar dynamic histories. Specifically, through the lens of the three shape types, simples are seen to have the highest growth and average migration rates, followed by rounds, and finally longs. Using the maximum average migration rate as a metric describing an atom's dynamic past, we show a strong connection between it and two metrics of cutoff geometry. This result suggests both that early formative dynamics may be inferred from static cutoff planforms and that there exists a critical period early in a meander bend's life when its dynamic trajectory is most sensitive to cutoff perturbations. An example of how these results could be applied to Mississippi River oxbow lakes with unknown historic dynamics is shown. The results characterize the underlying model and provide a framework for comparisons against more complex models and observed dynamics.

  13. Climate driven crop planting date in the ACME Land Model (ALM): Impacts on productivity and yield

    NASA Astrophysics Data System (ADS)

    Drewniak, B.

    2017-12-01

    Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determine the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting date is already changing and may continue, especially if longer growing seasons caused by future climate change drive early (or late) planting decisions. Crop models need an accurate method to predict plant date to allow these models to: 1) capture changes in crop management to adapt to climate change, 2) accurately model the timing of crop phenology, and 3) improve crop simulated influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor for planting date. Climate as a plant date predictor has more advantages than fixed plant dates. For example, crop expansion and other changes in land use (e.g., due to changing temperature conditions), can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature threshold planting trigger and improves the estimated plant date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced to the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations. More importantly, the impact of changing the planting date and introducing tropical crops will be explored. Those impacts include discussions on productivity, yield, and influences on carbon and energy fluxes.
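    To make the idea of a climate-driven planting trigger concrete, the sketch below shows a toy decision rule combining a temperature threshold with a precipitation-based wet-season criterion. The thresholds, array names and calendar handling are hypothetical and are not the ALM parameterization.

    ```python
    import numpy as np

    def predict_planting_day(tair_10day_mean, precip_monthly, t_threshold=283.15,
                             wet_fraction=0.4):
        """Toy planting-date rule: plant on the first day whose trailing 10-day mean
        air temperature (K) exceeds a threshold, provided the current month ranks
        among the wettest months of the climatological year (illustrative only).

        tair_10day_mean : daily array (365,) of trailing 10-day mean temperature [K]
        precip_monthly  : array (12,) of climatological monthly precipitation [mm]
        """
        wettest = np.argsort(precip_monthly)[::-1][: int(12 * wet_fraction)]
        month_of_day = np.repeat(np.arange(12),
                                 [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
        for day in range(365):
            if tair_10day_mean[day] > t_threshold and month_of_day[day] in wettest:
                return day
        return None  # no suitable planting window found in this climatology
    ```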

  14. Improving 3D Genome Reconstructions Using Orthologous and Functional Constraints

    PubMed Central

    Diament, Alon; Tuller, Tamir

    2015-01-01

    The study of the 3D architecture of chromosomes has been advancing rapidly in recent years. While a number of methods for 3D reconstruction of genomic models based on Hi-C data were proposed, most of the analyses in the field have been performed on different 3D representation forms (such as graphs). Here, we reproduce most of the previous results on the 3D genomic organization of the eukaryote Saccharomyces cerevisiae using analysis of 3D reconstructions. We show that many of these results can be reproduced in sparse reconstructions, generated from a small fraction of the experimental data (5% of the data), and study the properties of such models. Finally, we propose for the first time a novel approach for improving the accuracy of 3D reconstructions by introducing additional predicted physical interactions to the model, based on orthologous interactions in an evolutionary-related organism and based on predicted functional interactions between genes. We demonstrate that this approach indeed leads to the reconstruction of improved models. PMID:26000633

  15. The asymmetric reactions of mean and volatility of stock returns to domestic and international information based on a four-regime double-threshold GARCH model

    NASA Astrophysics Data System (ADS)

    Chen, Cathy W. S.; Yang, Ming Jing; Gerlach, Richard; Jim Lo, H.

    2006-07-01

    In this paper, we investigate the asymmetric reactions of mean and volatility of stock returns in five major markets to their own local news and the US information via linear and nonlinear models. We introduce a four-regime Double-Threshold GARCH (DTGARCH) model, which allows asymmetry in both the conditional mean and variance equations simultaneously by employing two threshold variables, to analyze the stock markets’ reactions to different types of information (good/bad news) generated from the domestic markets and the US stock market. By applying the four-regime DTGARCH model, this study finds that the interaction between the information of domestic and US stock markets leads to the asymmetric reactions of stock returns and their variability. In addition, this research also finds that the positive autocorrelation reported in the previous studies of financial markets may in fact be mis-specified, and actually due to the local market's positive response to the US stock market.
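    For concreteness, a generic four-regime double-threshold GARCH(1,1) specification of the kind described above is sketched below; the lag orders, threshold variables and delays used in the paper may differ, so this is illustrative only.

    ```latex
    % Generic four-regime double-threshold GARCH(1,1): regime j is selected by the
    % signs of two threshold variables (e.g., lagged local and US returns), so both
    % the conditional mean and the conditional variance switch with the news type.
    \begin{aligned}
    r_t &= \phi_0^{(j)} + \phi_1^{(j)} r_{t-1} + a_t,
          \qquad a_t = \sqrt{h_t}\,\varepsilon_t,
          \qquad \varepsilon_t \stackrel{\text{i.i.d.}}{\sim} (0,1),\\
    h_t &= \alpha_0^{(j)} + \alpha_1^{(j)}\, a_{t-1}^{2} + \beta_1^{(j)} h_{t-1},\\
    j &\in \{1,2,3,4\} \ \text{selected by the signs of } z_{1,t-d_1} \text{ and } z_{2,t-d_2}.
    \end{aligned}
    ```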

  16. Modeling sound transmission and reflection in the pulmonary system and chest with application to diagnosis of a collapsed lung

    NASA Astrophysics Data System (ADS)

    Royston, Thomas J.; Zhang, Xiangling; Mansy, Hussein A.; Sandler, Richard H.

    2002-05-01

    Experimental studies have shown that a pneumothorax (collapsed lung) substantially alters the propagation of sound introduced at the mouth of an intubated subject and measured at the chest surface. Thus, it is hypothesized that an inexpensive diagnostic procedure could be developed for detection of a pneumothorax based on a simple acoustic test. In the present study, theoretical models of sound transmission through the pulmonary system and chest region are reviewed in the context of their ability to predict acoustic changes caused by a pneumothorax, as well as other pathologic conditions. Such models could aid in parametric design studies to develop acoustic means of diagnosing pneumothorax and other lung pathologies. Extensions of the authors' previously developed simple models are presented that are in more quantitative agreement with experimental results and that simulate both transmission from the bronchial airways to the chest wall and reflection in the bronchial airways. [Research supported by NIH NCRR Grant No. 14250 and NIH NHLBI Grant No. 61108.]

  17. Use of Combined A-Train Observations to Validate GEOS Model Simulated Dust Distributions During NAMMA

    NASA Technical Reports Server (NTRS)

    Nowottnick, E.

    2007-01-01

    During August 2006, the NASA African Multidisciplinary Analyses Mission (NAMMA) field experiment was conducted to characterize the structure of African Easterly Waves and their evolution into tropical storms. Mineral dust aerosols affect tropical storm development, although their exact role remains to be understood. To better understand the role of dust in tropical cyclogenesis, we have implemented a dust source, transport, and optical model in the NASA Goddard Earth Observing System (GEOS) atmospheric general circulation model and data assimilation system. Our dust source scheme is more physically based than previous incarnations of the model, and we introduce improved dust optical and microphysical processes through inclusion of a detailed microphysical scheme. Here we use A-Train observations from MODIS, OMI, and CALIPSO with NAMMA DC-8 flight data to evaluate the simulated dust distributions and microphysical properties. Our goal is to synthesize the multi-spectral observations from the A-Train sensors to arrive at a consistent set of optical properties for the dust aerosols suitable for direct forcing calculations.

  18. GOTHiC, a probabilistic model to resolve complex biases and to identify real interactions in Hi-C data.

    PubMed

    Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M

    2017-01-01

    Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
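    As a minimal illustration of the binomial significance test described above (not the GOTHiC code; the expected-probability formula and names are simplified assumptions), one interaction can be tested as follows:

    ```python
    from scipy.stats import binom

    def interaction_pvalue(n_ij, N, cov_i, cov_j):
        """Binomial test for one Hi-C interaction (illustrative sketch only).

        n_ij         : observed read pairs linking fragments i and j
        N            : total number of read pairs in the experiment
        cov_i, cov_j : relative coverages of fragments i and j (fractions summing to 1)
        """
        p_expected = 2.0 * cov_i * cov_j            # chance a random pair hits (i, j)
        return binom.sf(n_ij - 1, N, p_expected)    # P(X >= n_ij) under the null

    # toy usage: 25 observed pairs where about 4 would be expected by chance
    print(interaction_pvalue(n_ij=25, N=1_000_000, cov_i=1e-3, cov_j=2e-3))
    ```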

  19. Bayesian analysis of caustic-crossing microlensing events

    NASA Astrophysics Data System (ADS)

    Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.

    2010-06-01

    Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens fitting codes by reducing the time spent considering physically implausible models, and help us to discriminate between alternative models based on the physical plausibility of their parameters.

  20. Integrating the automatic and the controlled: Strategies in Semantic Priming in an Attractor Network with Latching Dynamics

    PubMed Central

    Lerner, Itamar; Bentin, Shlomo; Shriki, Oren

    2014-01-01

    Semantic priming has long been recognized to reflect, along with automatic semantic mechanisms, the contribution of controlled strategies. However, previous theories of controlled priming were mostly qualitative, lacking common grounds with modern mathematical models of automatic priming based on neural networks. Recently, we have introduced a novel attractor network model of automatic semantic priming with latching dynamics. Here, we extend this work to show how the same model can also account for important findings regarding controlled processes. Assuming the rate of semantic transitions in the network can be adapted using simple reinforcement learning, we show how basic findings attributed to controlled processes in priming can be achieved, including their dependency on stimulus onset asynchrony and relatedness proportion and their unique effect on associative, category-exemplar, mediated and backward prime-target relations. We discuss how our mechanism relates to the classic expectancy theory and how it can be further extended in future developments of the model. PMID:24890261

  1. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    Real-time, accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of systematically analyzing the source of geomagnetic-field measurement error, built a complete measurement model, into which the previously unconsidered geomagnetic daily variation field was introduced. This paper proposed an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the sense of statistics. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and ability to remove the dependence on a high-precision measurement instrument. PMID:28445508
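    For orientation, a single generic extended Kalman filter measurement-update step is sketched below; the state vector, measurement model and noise covariances used for the magnetometer error parameters in the paper are not reproduced, and the function names are hypothetical.

    ```python
    import numpy as np

    def ekf_update(x, P, z, h, H_jac, R):
        """One generic EKF measurement update (illustrative sketch, not the paper's model).

        x, P  : prior state estimate and covariance
        z     : measurement vector
        h     : measurement function h(x)
        H_jac : function returning the Jacobian of h at x
        R     : measurement noise covariance
        """
        H = H_jac(x)                      # linearize the measurement model at the estimate
        y = z - h(x)                      # innovation
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new
    ```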

  2. GenSSI 2.0: multi-experiment structural identifiability analysis of SBML models.

    PubMed

    Ligon, Thomas S; Fröhlich, Fabian; Chis, Oana T; Banga, Julio R; Balsa-Canto, Eva; Hasenauer, Jan

    2018-04-15

    Mathematical modeling using ordinary differential equations is used in systems biology to improve the understanding of dynamic biological processes. The parameters of ordinary differential equation models are usually estimated from experimental data. To analyze a priori the uniqueness of the solution of the estimation problem, structural identifiability analysis methods have been developed. We introduce GenSSI 2.0, an advancement of the software toolbox GenSSI (Generating Series for testing Structural Identifiability). GenSSI 2.0 is the first toolbox for structural identifiability analysis to implement Systems Biology Markup Language import, state/parameter transformations and multi-experiment structural identifiability analysis. In addition, GenSSI 2.0 supports a range of MATLAB versions and is computationally more efficient than its previous version, enabling the analysis of more complex models. GenSSI 2.0 is an open-source MATLAB toolbox and is available at https://github.com/genssi-developer/GenSSI. Contact: thomas.ligon@physik.uni-muenchen.de or jan.hasenauer@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.

  3. The (Mathematical) Modeling Process in Biosciences

    PubMed Central

    Torres, Nestor V.; Santos, Guido

    2015-01-01

    In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model and optimization and system management derived from the analysis of the mathematical model. All along this work the main features and shortcomings of the process are analyzed and a set of rules that could help in the task of modeling any biological system are presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology. PMID:26734063

  4. General form of a cooperative gradual maximal covering location problem

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh

    2018-07-01

    Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model in similar data sets shows that the proposed model can cover more demands and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and a tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.

  5. Specimen-level phylogenetics in paleontology using the Fossilized Birth-Death model with sampled ancestors.

    PubMed

    Cau, Andrea

    2017-01-01

    Bayesian phylogenetic methods integrating simultaneously morphological and stratigraphic information have been applied increasingly among paleontologists. Most of these studies have used Bayesian methods as an alternative to the widely-used parsimony analysis, to infer macroevolutionary patterns and relationships among species-level or higher taxa. Among recently introduced Bayesian methodologies, the Fossilized Birth-Death (FBD) model allows incorporation of hypotheses on ancestor-descendant relationships in phylogenetic analyses including fossil taxa. Here, the FBD model is used to infer the relationships among an ingroup formed exclusively by fossil individuals, i.e., dipnoan tooth plates from four localities in the Ain el Guettar Formation of Tunisia. Previous analyses of this sample compared the results of phylogenetic analysis using parsimony with stratigraphic methods, inferred a high diversity (five or more genera) in the Ain el Guettar Formation, and interpreted it as an artifact inflated by depositional factors. In the analysis performed here, the uncertainty on the chronostratigraphic relationships among the specimens was included among the prior settings. The results of the analysis confirm the referral of most of the specimens to the taxa Asiatoceratodus, Equinoxiodus, Lavocatodus and Neoceratodus, but reject the referrals to Ceratodus and Ferganoceratodus. The resulting phylogeny constrained the evolution of the Tunisian sample exclusively to the Early Cretaceous, contrasting with the previous scenario inferred by the stratigraphically-calibrated topology resulting from parsimony analysis. The phylogenetic framework also suggests that (1) the sampled localities are laterally equivalent, but (2) three localities are restricted to the youngest part of the section; both results are in agreement with previous stratigraphic analyses of these localities. The FBD model of specimen-level units provides a novel tool for phylogenetic inference among fossils but also for independent tests of stratigraphic scenarios.

  6. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

    Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.

  7. Investigation of advanced phase-shifting projected fringe profilometry techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu

    1999-11-01

    The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurements of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle three important problems, which severely limit the capability and the accuracy of the PSPFP technique, with some new approaches. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of the absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.

  8. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    PubMed

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
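    To illustrate one ingredient mentioned above, the sketch below draws a waiting time of a non-homogeneous Poisson process with a piecewise-constant rate via thinning. It is a generic illustration under simplified assumptions, not the coalescent-based importance sampling scheme of the paper, and the function names are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nhpp_waiting_time(t0, breakpoints, rates):
        """Draw one waiting time of a non-homogeneous Poisson process whose rate is
        piecewise constant (thinning algorithm; illustrative only).

        breakpoints : increasing array of times where the rate changes
        rates       : rate on each interval (len(rates) == len(breakpoints) + 1)
        """
        lam_max = max(rates)
        t = t0
        while True:
            t += rng.exponential(1.0 / lam_max)             # candidate event time
            lam_t = rates[np.searchsorted(breakpoints, t)]  # rate in force at time t
            if rng.uniform() < lam_t / lam_max:             # accept with prob lam(t)/lam_max
                return t - t0

    print(nhpp_waiting_time(0.0, breakpoints=np.array([1.0, 3.0]), rates=[2.0, 0.5, 4.0]))
    ```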

  9. Higher order memories for objects encountered in different spatio-temporal contexts in mice: evidence for episodic memory.

    PubMed

    Dere, Ekrem; Silva, Maria A De Souza; Huston, Joseph P

    2004-01-01

    The ability to build higher order multi-modal memories comprising information about the spatio-temporal context of events has been termed 'episodic memory'. Deficits in episodic memory are apparent in a number of neuropsychiatric diseases. Unfortunately, the development of animal models of episodic memory has made little progress. Towards the goal of such a model we devised an object exploration task for mice, providing evidence that rodents can associate object, spatial and temporal information. In our task the mice learned the temporal sequence by which identical objects were introduced into two different contexts. The 'what' component of an episodic memory was operationalized via physically distinct objects; the 'where' component through physically different contexts, and, most importantly, the 'when' component via the context-specific inverted sequence in which four objects were presented. Our results suggest that mice are able to recollect the inverted temporal sequence in which identical objects were introduced into two distinct environments. During two consecutive test trials mice showed an inverse context-specific exploration pattern regarding identical objects that were previously encountered with even frequencies. It seems that the contexts served as discriminative stimuli signaling which of the two sequences are decisive during the two test trials.

  10. Performance of Goddard Earth Observing System GCM Column Radiation Models under Heterogeneous Cloud Conditions

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Chou, M.-D.; Khairoutdinov, M.; Barker, H. W.; Cahalan, R. F.

    2003-01-01

    We test the performance of the shortwave (SW) and longwave (LW) Column Radiation Models (CORAMs) of Chou and collaborators with heterogeneous cloud fields from a global single-day dataset produced by NCAR's Community Atmospheric Model with a 2-D CRM installed in each gridbox. The original SW version of the CORAM performs quite well compared to reference Independent Column Approximation (ICA) calculations for boundary fluxes, largely due to the success of a combined overlap and cloud scaling parameterization scheme. The absolute magnitude of errors relative to ICA are even smaller for the LW CORAM which applies similar overlap. The vertical distribution of heating and cooling within the atmosphere is also simulated quite well with daily-averaged zonal errors always below 0.3 K/d for SW heating rates and 0.6 K/d for LW cooling rates. The SW CORAM's performance improves by introducing a scheme that accounts for cloud inhomogeneity. These results suggest that previous studies demonstrating the inaccuracy of plane-parallel models may have unfairly focused on worst scenario cases, and that current radiative transfer algorithms of General Circulation Models (GCMs) may be more capable than previously thought in estimating realistic spatial and temporal averages of radiative fluxes, as long as they are provided with correct mean cloud profiles. However, even if the errors of the particular CORAMs are small, they seem to be systematic, and the impact of the biases can be fully assessed only with GCM climate simulations.

  11. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    NASA Astrophysics Data System (ADS)

    Storm, Emma; Weniger, Christoph; Calore, Francesca

    2017-08-01

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
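    As a minimal sketch of penalized Poisson likelihood fitting with L-BFGS-B, the example below fits non-negative template normalizations to counts; SkyFACT's per-pixel adaptive nuisance parameters, entropy-motivated regularizers and covariance handling are not reproduced, and the quadratic penalty here is only a placeholder.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_templates(counts, templates, reg=1.0):
        """Fit non-negative template normalizations to Poisson counts by minimizing
        the negative log-likelihood plus a simple quadratic penalty (illustrative).

        counts    : observed counts per pixel, shape (npix,)
        templates : model templates, shape (ntemplates, npix)
        """
        def objective(theta):
            mu = theta @ templates + 1e-12                  # expected counts per pixel
            nll = np.sum(mu - counts * np.log(mu))          # Poisson negative log-likelihood
            penalty = reg * np.sum((theta - 1.0) ** 2)      # keep normalizations near 1
            return nll + penalty

        theta0 = np.ones(templates.shape[0])
        bounds = [(0.0, None)] * templates.shape[0]
        res = minimize(objective, theta0, method="L-BFGS-B", bounds=bounds)
        return res.x
    ```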

  12. Development and Application of ANN Model for Worker Assignment into Virtual Cells of Large Sized Configurations

    NASA Astrophysics Data System (ADS)

    Murali, R. V.; Puri, A. B.; Fathi, Khalid

    2010-10-01

    This paper presents an extended version of a study already undertaken on the development of an artificial neural networks (ANNs) model for assigning workforce into virtual cells under virtual cellular manufacturing systems (VCMS) environments. Previously, the same authors introduced this concept and applied it to virtual cells of a two-cell configuration, and the results demonstrated that ANNs could be a worthwhile tool for carrying out workforce assignments. In this attempt, three-cell configuration problems are considered for the worker assignment task. Virtual cells are formed under a dual resource constraint (DRC) context in which the number of available workers is less than the total number of machines available. Since worker assignment tasks are quite non-linear and highly dynamic in nature under varying inputs and conditions and, in parallel, ANNs have the ability to model complex relationships between inputs and outputs and find similar patterns effectively, an attempt was made earlier to apply ANNs to the above task. In this paper, the multilayered perceptron with feed forward (MLP-FF) neural network model has been reused for worker assignment tasks of three-cell configurations under the DRC context and its performance at different time periods has been analyzed. The previously proposed worker assignment model has been reconfigured and cell formation solutions available for three-cell configurations in the literature are used in combination to generate datasets for training the ANNs framework. Finally, results of the study have been presented and discussed.

  13. δ18O water isotope in the iLOVECLIM model (version 1.0) - Part 2: Evaluation of model results against observed δ18O in water samples

    NASA Astrophysics Data System (ADS)

    Roche, D. M.; Caley, T.

    2013-09-01

    The H218O stable isotope was previously introduced in the three coupled components of the earth system model iLOVECLIM: atmosphere, ocean and vegetation. The results of a long (5000 yr) pre-industrial equilibrium simulation are presented and evaluated against measurement of H218O abundance in present-day water for the atmospheric and oceanic components. For the atmosphere, it is found that the model reproduces the observed spatial distribution and relationships to climate variables with some merit, though limitations following our approach are highlighted. Indeed, we obtain the main gradients with a robust representation of the Rayleigh distillation but caveats appear in Antarctica and around the Mediterranean region due to model limitation. For the oceanic component, the agreement between the modelled and observed distribution of water δ18O is found to be very good. Mean ocean surface latitudinal gradients are faithfully reproduced as well as the mark of the main intermediate and deep water masses. This opens large prospects for the applications in palaeoclimatic context.

  14. δ18O water isotope in the iLOVECLIM model (version 1.0) - Part 2: Evaluation of model results against observed δ18O in water samples

    NASA Astrophysics Data System (ADS)

    Roche, D. M.; Caley, T.

    2013-03-01

    The H218O stable isotope was previously introduced in the three coupled components of the Earth System Model iLOVECLIM: atmosphere, ocean and vegetation. The results of a long (5000 yr) pre-industrial equilibrium simulation are presented and evaluated against measurement of H218O abundance in present-day water for the atmospheric and oceanic components. For the atmosphere, it is found that the model reproduces the observed spatial distribution and relationships to climate variables with some merit, though limitations following our approach are highlighted. Indeed, we obtain the main gradients with a robust representation of the Rayleigh distillation but caveats appear in Antarctica and around the Mediterranean region due to model limitation. For the oceanic component, the agreement between the modelled and observed distribution of water δ18O is found to be very good. Mean ocean surface latitudinal gradients are faithfully reproduced as well as the mark of the main intermediate and deep water masses. This opens large prospects for the applications in paleoclimatic context.

  15. Some unexamined aspects of analysis of covariance in pretest-posttest studies.

    PubMed

    Ganju, Jitendra

    2004-09-01

    The use of an analysis of covariance (ANCOVA) model in a pretest-posttest setting deserves to be studied separately from its use in other (non-pretest-posttest) settings. For pretest-posttest studies, the following points are made in this article: (a) If the familiar change from baseline model accurately describes the data-generating mechanism for a randomized study then it is impossible for unequal slopes to exist. Conversely, if unequal slopes exist, then it implies that the change from baseline model as a data-generating mechanism is inappropriate. An alternative data-generating model should be identified and the validity of the ANCOVA model should be demonstrated. (b) Under the usual assumptions of equal pretest and posttest within-subject error variances, the ratio of the standard error of a treatment contrast from a change from baseline analysis to that from ANCOVA is less than √2. (c) For an observational study it is possible for unequal slopes to exist even if the change from baseline model describes the data-generating mechanism. (d) Adjusting for the pretest variable in observational studies may actually introduce bias where none previously existed.
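    For reference, a minimal sketch of why the √2 bound in point (b) holds, assuming equal variances σ², a common pretest-posttest correlation ρ, n subjects per arm, and ignoring the cost of estimating the ANCOVA slope:

    ```latex
    % Variances of the treatment contrast under the two analyses:
    \mathrm{Var}_{\text{change}} \;=\; \frac{2}{n}\,\bigl[2\sigma^{2}(1-\rho)\bigr],
    \qquad
    \mathrm{Var}_{\text{ANCOVA}} \;\approx\; \frac{2}{n}\,\sigma^{2}\bigl(1-\rho^{2}\bigr),
    \qquad
    \frac{\mathrm{SE}_{\text{change}}}{\mathrm{SE}_{\text{ANCOVA}}}
      \;=\; \sqrt{\frac{2(1-\rho)}{1-\rho^{2}}}
      \;=\; \sqrt{\frac{2}{1+\rho}} \;\le\; \sqrt{2}.
    ```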

  16. An evolving model of online bipartite networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chu-Xu; Zhang, Zi-Ke; Liu, Chuang

    2013-12-01

    Understanding the structure and evolution of online bipartite networks is a significant task since they play a crucial role in various e-commerce services nowadays. Recently, various attempts have been made to propose different models, resulting in either power-law or exponential degree distributions. However, many empirical results show that the user degree distribution actually follows a shifted power-law distribution, the so-called Mandelbrot’s law, which cannot be fully described by previous models. In this paper, we propose an evolving model, considering two different user behaviors: random and preferential attachment. Extensive empirical results on two real bipartite networks, Delicious and CiteULike, show that the theoretical model can well characterize the structure of real networks for both user and object degree distributions. In addition, we introduce a structural parameter p to demonstrate that the hybrid user behavior leads to the shifted power-law degree distribution, and the extent of the power-law tail increases with p. The proposed model might shed some light on the underlying laws governing the structure of real online bipartite networks.
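    As a toy illustration of how mixing random and preferential attachment produces a shifted power-law tail, the sketch below grows a user-object network in which each new link is preferential with probability p and uniform otherwise. The sizes and parameter values are arbitrary and this is not the paper's exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def grow_bipartite(n_users=10_000, links_per_user=5, n_objects=2_000, p=0.7):
        """Toy user-object network growth mixing random and preferential attachment.
        Returns the object degree sequence (illustrative only)."""
        obj_degree = np.ones(n_objects)                 # start every object with degree 1
        for _ in range(n_users):
            for _ in range(links_per_user):
                if rng.uniform() < p:                   # preferential: proportional to degree
                    probs = obj_degree / obj_degree.sum()
                    target = rng.choice(n_objects, p=probs)
                else:                                   # random: uniform over objects
                    target = rng.integers(n_objects)
                obj_degree[target] += 1
        return obj_degree
    ```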

  17. Using experimental human influenza infections to validate a viral dynamic model and the implications for prediction.

    PubMed

    Chen, S C; You, S H; Liu, C Y; Chio, C P; Liao, C M

    2012-09-01

    The aim of this work was to use experimental infection data of human influenza to assess a simple viral dynamics model in epithelial cells and better understand the underlying complex factors governing the infection process. The developed study model expands on previous reports of a target cell-limited model with delayed virus production. Data from 10 published experimental infection studies of human influenza were used to validate the model. Our results elucidate, mechanistically, the associations between epithelial cells, human immune responses, and viral titres and were supported by the experimental infection data. We report that the maximum total number of free virions following infection is 10^3-fold higher than the initially introduced titre. Our results indicated that the infection rates of unprotected epithelial cells probably play an important role in affecting viral dynamics. By simulating an advanced model of viral dynamics and applying it to experimental infection data of human influenza, we obtained important estimates of the infection rate. This work provides epidemiologically meaningful results, meriting further efforts to understand the causes and consequences of influenza A infection.
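    For context, the standard target-cell-limited model with delayed virus production (an eclipse phase) takes the form below; the symbols are generic and the study's fitted parameter values are not reproduced here.

    ```latex
    \begin{aligned}
    \frac{dT}{dt} &= -\beta T V,        && \text{susceptible target cells}\\
    \frac{dE}{dt} &= \beta T V - k E,   && \text{eclipse-phase cells (infected, not yet producing)}\\
    \frac{dI}{dt} &= k E - \delta I,    && \text{productively infected cells}\\
    \frac{dV}{dt} &= p I - c V,         && \text{free virus: production rate } p,\ \text{clearance rate } c
    \end{aligned}
    ```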

  18. Patterns in the English language: phonological networks, percolation and assembly models

    NASA Astrophysics Data System (ADS)

    Stella, Massimo; Brede, Markus

    2015-05-01

    In this paper we provide a quantitative framework for the study of phonological networks (PNs) for the English language by carrying out principled comparisons to null models, either based on site percolation, randomization techniques, or network growth models. In contrast to previous work, we mainly focus on null models that reproduce lower order characteristics of the empirical data. We find that artificial networks matching connectivity properties of the English PN are exceedingly rare: this leads to the hypothesis that the word repertoire might have been assembled over time by preferentially introducing new words which are small modifications of old words. Our null models are able to explain the ‘power-law-like’ part of the degree distributions and generally retrieve qualitative features of the PN such as high clustering, high assortativity coefficient and small-world characteristics. However, the detailed comparison to expectations from null models also points out significant differences, suggesting the presence of additional constraints in word assembly. Key constraints we identify are the avoidance of large degrees, the avoidance of triadic closure and the avoidance of large non-percolating clusters.

  19. Solving gap metabolites and blocked reactions in genome-scale models: application to the metabolic network of Blattabacterium cuenoti.

    PubMed

    Ponce-de-León, Miguel; Montero, Francisco; Peretó, Juli

    2013-10-31

    Metabolic reconstruction is the computational-based process that aims to elucidate the network of metabolites interconnected through reactions catalyzed by activities assigned to one or more genes. Reconstructed models may contain inconsistencies that appear as gap metabolites and blocked reactions. Although automatic methods for solving this problem have been previously developed, there are many situations where manual curation is still needed. We introduce a general definition of gap metabolite that allows its detection in a straightforward manner. Moreover, a method for the detection of Unconnected Modules, defined as isolated sets of blocked reactions connected through gap metabolites, is proposed. The method has been successfully applied to the curation of iCG238, the genome-scale metabolic model for the bacterium Blattabacterium cuenoti, obligate endosymbiont of cockroaches. We found the proposed approach to be a valuable tool for the curation of genome-scale metabolic models. The outcome of its application to the genome-scale model B. cuenoti iCG238 is a more accurate model version named as B. cuenoti iMP240.

  20. Hierarchical dose response of E. coli O157:H7 from human outbreaks incorporating heterogeneity in exposure.

    PubMed

    Teunis, P F M; Ogden, I D; Strachan, N J C

    2008-06-01

    The infectivity of pathogenic microorganisms is a key factor in the transmission of an infectious disease in a susceptible population. Microbial infectivity is generally estimated from dose-response studies in human volunteers. This can only be done with mildly pathogenic organisms. Here a hierarchical Beta-Poisson dose-response model is developed utilizing data from human outbreaks. On the lowest level, each outbreak is modelled separately, and these are then combined at a second level to produce a group dose-response relation. The distribution of foodborne pathogens often shows strong heterogeneity and this is incorporated by introducing an additional parameter to the dose-response model, accounting for the degree of overdispersion relative to the Poisson distribution. It was found that heterogeneity considerably influences the shape of the dose-response relationship and increases uncertainty in predicted risk. This uncertainty is greater than that previously reported for surrogate and outbreak models using a single level of analysis. Monte Carlo parameter samples (alpha, beta of the Beta-Poisson model) can be readily incorporated in risk assessment models built using tools such as S-PLUS and @Risk.
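    For orientation, the widely used approximate Beta-Poisson dose-response relation is shown below; the outbreak-level hierarchy and the specific form of the overdispersion parameter used in the paper are not reproduced here.

    ```latex
    % Approximate Beta-Poisson dose-response for a mean dose d (single-hit probability
    % Beta(alpha, beta)-distributed, Poisson-distributed ingested dose):
    P_{\mathrm{inf}}(d) \;=\; 1 - \left(1 + \frac{d}{\beta}\right)^{-\alpha}.
    % The heterogeneity described in the abstract enters by replacing the Poisson dose
    % assumption with an overdispersed count distribution; that variant is not shown here.
    ```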

  1. Galerkin Models Enhancements for Flow Control

    NASA Astrophysics Data System (ADS)

    Tadmor, Gilead; Lehmann, Oliver; Noack, Bernd R.; Morzyński, Marek

    Low order Galerkin models were originally introduced as an effective tool for stability analysis of fixed points and, later, of attractors, in nonlinear distributed systems. An evolving interest in their use as low-complexity dynamical models goes well beyond that original intent. It exposes often severe weaknesses of low order Galerkin models as dynamic predictors and has motivated efforts, spanning nearly three decades, to alleviate these shortcomings. Transients across natural and enforced variations in the operating point, unsteady inflow, boundary actuation and both aeroelastic and actuated boundary motion are hallmarks of current and envisioned needs in feedback flow control applications, bringing these shortcomings to even higher prominence. Building on the discussion in our previous chapters, we shall now review changes in the Galerkin paradigm that aim to create a mathematically and physically consistent modeling framework and remove what are otherwise intractable roadblocks. We shall then highlight some guiding design principles that are especially important in the context of these models. We shall continue to use the simple example of wake flow instabilities to illustrate the various issues, ideas and methods that will be discussed in this chapter.

  2. A size-dependent constitutive model of bulk metallic glasses in the supercooled liquid region

    PubMed Central

    Yao, Di; Deng, Lei; Zhang, Mao; Wang, Xinyun; Tang, Na; Li, Jianjun

    2015-01-01

    Size effect is of great importance in micro forming processes. In this paper, micro cylinder compression was conducted to investigate the deformation behavior of bulk metallic glasses (BMGs) in supercooled liquid region with different deformation variables including sample size, temperature and strain rate. It was found that the elastic and plastic behaviors of BMGs have a strong dependence on the sample size. The free volume and defect concentration were introduced to explain the size effect. In order to demonstrate the influence of deformation variables on steady stress, elastic modulus and overshoot phenomenon, four size-dependent factors were proposed to construct a size-dependent constitutive model based on the Maxwell-pulse type model previously presented by the authors according to viscosity theory and free volume model. The proposed constitutive model was then adopted in finite element method simulations, and validated by comparing the micro cylinder compression and micro double cup extrusion experimental data with the numerical results. Furthermore, the model provides a new approach to understanding the size-dependent plastic deformation behavior of BMGs. PMID:25626690

  3. Satellite passive microwave detection of surface water inundation changes over the pan-Arctic from AMSR

    NASA Astrophysics Data System (ADS)

    Du, J.; Kimball, J. S.; Jones, L. A.; Watts, J. D.

    2016-12-01


  4. TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow

    USGS Publications Warehouse

    Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.

    1993-01-01

    A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results suitable interactive graphics is also an essential tool.

  5. New Density Functional Approach for Solid-Liquid-Vapor Transitions in Pure Materials

    NASA Astrophysics Data System (ADS)

    Kocher, Gabriel; Provatas, Nikolas

    2015-04-01

    A new phase field crystal (PFC) type theory is presented, which accounts for the full spectrum of solid-liquid-vapor phase transitions within the framework of a single density order parameter. Its equilibrium properties show the most quantitative features to date in PFC modeling of pure substances, and full consistency with thermodynamics in pressure-volume-temperature space is demonstrated. A method to control either the volume or the pressure of the system is also introduced. Nonequilibrium simulations show that 2- and 3-phase growth of solid, vapor, and liquid can be achieved, while our formalism also allows for a full range of pressure-induced transformations. This model opens up a new window for the study of pressure driven interactions of condensed phases with vapor, an experimentally relevant paradigm previously missing from phase field crystal theories.

  6. 3d printing of 2d N=(0,2) gauge theories

    NASA Astrophysics Data System (ADS)

    Franco, Sebastián; Hasan, Azeem

    2018-05-01

    We introduce 3d printing, a new algorithm for generating 2d N=(0,2) gauge theories on D1-branes probing singular toric Calabi-Yau 4-folds using 4d N=1 gauge theories on D3-branes probing toric Calabi-Yau 3-folds as starting points. Equivalently, this method produces brane brick models starting from brane tilings. 3d printing represents a significant improvement with respect to previously available tools, allowing a straightforward determination of gauge theories for geometries that until now could only be tackled using partial resolution. We investigate the interplay between triality, an IR equivalence between different 2d N=(0,2) gauge theories, and the freedom in 3d printing given an underlying Calabi-Yau 4-fold. Finally, we present the first discussion of the consistency and reduction of brane brick models.

  7. On the recovery of electric currents in the liquid core of the Earth

    NASA Astrophysics Data System (ADS)

    Kuslits, Lukács; Prácser, Ernő; Lemperger, István

    2017-04-01

    Inverse geodynamo modelling has become a standard method to obtain a more accurate image of the processes within the outer core. In this poster, excerpts from the preliminary results of another approach are presented. This approach concerns the possibility of recovering the currents within the liquid core directly, using Main Magnetic Field data. Different systems of charge flow can be approximated with various geometries. Based on previous geodynamo simulations, current coils can furnish a good initial geometry for such an estimation. The presentation introduces our preliminary test results and a study of the reliability of the applied inversion algorithm for different numbers of coils, distributed in a grid symbolising the domain between the inner-core and core-mantle boundaries. We shall also present inverted current structures using Main Field model data.
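
    A toy linear inversion along these lines is sketched below. It is not the authors' algorithm: each coil is replaced by a point magnetic dipole whose moment is proportional to its current, so that the field at the observation points is linear in the currents, and a Tikhonov-regularised least-squares fit recovers them; the geometry, effective coil area, and noise level are invented for illustration.

      import numpy as np

      MU0 = 4e-7 * np.pi

      def dipole_field(r_obs, r_src, m_vec):
          """Magnetic field of a point dipole with moment m_vec at r_src, evaluated at r_obs (SI units)."""
          d = r_obs - r_src
          r = np.linalg.norm(d)
          rhat = d / r
          return MU0 / (4.0 * np.pi * r**3) * (3.0 * np.dot(m_vec, rhat) * rhat - m_vec)

      rng = np.random.default_rng(0)
      n_coils, n_obs = 8, 60
      coil_pos = rng.uniform(-2.0e6, 2.0e6, size=(n_coils, 3))         # coil centres inside the core (m), assumed
      coil_axis = np.tile([0.0, 0.0, 1.0], (n_coils, 1))               # coil axes (unit vectors), assumed
      coil_area = 1.0e13                                               # effective loop area (m^2), assumed

      obs = rng.normal(size=(n_obs, 3))
      obs = 6.371e6 * obs / np.linalg.norm(obs, axis=1, keepdims=True)  # observation points on Earth's surface

      # Design matrix: field (3 components per site) per unit current in each coil
      G = np.zeros((3 * n_obs, n_coils))
      for j in range(n_coils):
          for i in range(n_obs):
              G[3 * i:3 * i + 3, j] = dipole_field(obs[i], coil_pos[j], coil_area * coil_axis[j])

      true_currents = rng.uniform(-1.0e9, 1.0e9, size=n_coils)         # "true" currents (A), invented
      data = G @ true_currents + 1.0e-9 * rng.normal(size=3 * n_obs)   # synthetic field data plus 1 nT noise

      # Tikhonov-regularised least squares:  I = argmin ||G I - d||^2 + lam ||I||^2
      lam = 1.0e-6 * np.trace(G.T @ G) / n_coils
      I_est = np.linalg.solve(G.T @ G + lam * np.eye(n_coils), G.T @ data)
      print("relative recovery error:",
            np.linalg.norm(I_est - true_currents) / np.linalg.norm(true_currents))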

  8. Influence of neutron irradiation on the microstructure of nuclear graphite: An X-ray diffraction study

    NASA Astrophysics Data System (ADS)

    Zhou, Z.; Bouwman, W. G.; Schut, H.; van Staveren, T. O.; Heijna, M. C. R.; Pappas, C.

    2017-04-01

    Neutron irradiation effects on the microstructure of nuclear graphite have been investigated by X-ray diffraction on virgin samples and on samples irradiated at high temperature (750 °C) to low doses (∼1.3 and ∼2.2 dpa). The diffraction patterns were interpreted using a model that takes into account the turbostratic disorder. Besides the lattice constants, the model introduces two distinct coherence lengths, along the c-axis and in the basal plane, that characterise the volumes from which X-rays are scattered coherently. The methodology used in this work makes it possible to quantify the effect of irradiation damage on the microstructure of nuclear graphite as seen by X-ray diffraction. The results show that the changes of the deduced structural parameters are in agreement with previous observations from electron microscopy, but are not directly related to macroscopic changes.
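
    The paper fits a full turbostratic line-profile model; a much simpler, commonly used shortcut applies the Scherrer relation L = K * lambda / (beta * cos(theta)) to the (002) peak for the c-axis coherence length and to the (100) peak for the in-plane length. The sketch below uses that shortcut; the wavelength, shape factors, and example peak data are illustrative values, not the paper's results.

      import math

      WAVELENGTH_NM = 0.15406          # Cu K-alpha, assumed

      def scherrer_length(two_theta_deg, fwhm_deg, shape_factor):
          """Coherence length (nm) from peak position and FWHM (both in degrees 2-theta)."""
          theta = math.radians(two_theta_deg / 2.0)
          beta = math.radians(fwhm_deg)                # FWHM in radians
          return shape_factor * WAVELENGTH_NM / (beta * math.cos(theta))

      # Invented peak positions/widths for a graphite-like pattern
      L_c = scherrer_length(two_theta_deg=26.5, fwhm_deg=0.35, shape_factor=0.89)  # (002) reflection
      L_a = scherrer_length(two_theta_deg=42.4, fwhm_deg=0.60, shape_factor=1.84)  # (100), Warren factor
      print(f"L_c ~ {L_c:.1f} nm, L_a ~ {L_a:.1f} nm")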

  9. Diffusion via space discretization method to study the concentration dependence of self-diffusivity under confinement

    NASA Astrophysics Data System (ADS)

    Sant, Marco; Papadopoulos, George K.; Theodorou, Doros N.

    2010-04-01

    The concentration dependence of self-diffusivity is investigated by means of a novel method, extending our previously developed second-order Markov process model to periodic media. Introducing the concept of minimum-crossing surface, we obtain a unique decomposition of the self-diffusion coefficient into two parameters with specific physical meanings. Two case studies showing a maximum in self-diffusivity as a function of concentration are investigated, along with two cases where such a maximum cannot be present. Subsequently, the method is applied to the large cavity pore network of the ITQ-1 (Mobil tWenty tWo, MWW) zeolite for methane (displaying a maximum in self-diffusivity) and carbon dioxide (no maximum), explaining the diffusivity trend on the basis of the evolution of the model parameters as a function of concentration.

  10. High-throughput amplification of mature microRNAs in uncharacterized animal models using polyadenylated RNA and stem-loop reverse transcription polymerase chain reaction.

    PubMed

    Biggar, Kyle K; Wu, Cheng-Wei; Storey, Kenneth B

    2014-10-01

    This study makes a significant advancement on a microRNA amplification technique previously used for expression analysis and sequencing in animal models without annotated mature microRNA sequences. As research progresses into the post-genomic era of microRNA prediction and analysis, the need for a rapid and cost-effective method for microRNA amplification is critical to facilitate wide-scale analysis of microRNA expression. To facilitate this requirement, we have reoptimized the design of amplification primers and introduced a polyadenylation step to allow amplification of all mature microRNAs from a single RNA sample. Importantly, this method retains the ability to sequence reverse transcription polymerase chain reaction (RT-PCR) products, validating microRNA-specific amplification.

  11. Bifurcation of elastic solids with sliding interfaces

    NASA Astrophysics Data System (ADS)

    Bigoni, D.; Bordignon, N.; Piccolroaz, A.; Stupkiewicz, S.

    2018-01-01

    Lubricated sliding contact between soft solids is an interesting topic in biomechanics and for the design of small-scale engineering devices. As a model of this mechanical set-up, two nonlinear elastic solids are considered, joined through a frictionless and bilateral surface, so that continuity of the normal component of the Cauchy traction holds across the surface, but the tangential component is null. Moreover, the displacement can develop only in such a way that the bodies in contact neither detach nor overlap. Surprisingly, this finite strain problem has not been correctly formulated until now, so this formulation is the objective of the present paper. The incremental equations are shown to be non-trivial and different from previously (and erroneously) employed conditions. In particular, an exclusion condition for bifurcation is derived to show that previous formulations based on frictionless contact or 'spring-type' interfacial conditions are not able to predict bifurcations in tension, while experiments (one of which, designed ad hoc, is reported here) show that these bifurcations are a reality and become possible when the correct sliding interface model is used. The presented results introduce a methodology for the determination of bifurcations and instabilities occurring during lubricated sliding between soft bodies in contact.

  12. A Direct Method to Extract Transient Sub-Gap Density of State (DOS) Based on Dual Gate Pulse Spectroscopy

    NASA Astrophysics Data System (ADS)

    Dai, Mingzhi; Khan, Karim; Zhang, Shengnan; Jiang, Kemin; Zhang, Xingye; Wang, Weiliang; Liang, Lingyan; Cao, Hongtao; Wang, Pengjun; Wang, Peng; Miao, Lijing; Qin, Haiming; Jiang, Jun; Xue, Lixin; Chu, Junhao

    2016-06-01

    The sub-gap density of states (DOS) is a key parameter affecting the electrical characteristics of semiconductor-based transistors in integrated circuits. Previous spectroscopy methodologies for DOS extraction include static methods, temperature-dependent spectroscopy, and photonic spectroscopy. However, they may involve numerous assumptions and calculations, or introduce thermal and optical perturbations into the intrinsic DOS distribution across the bandgap of the material. A direct and simpler method is developed here to extract the DOS distribution of amorphous oxide-based thin-film transistors (TFTs), based on dual gate pulse spectroscopy (GPS), which introduces fewer extrinsic factors, such as temperature, and requires less laborious numerical analysis than conventional methods. From this direct measurement, the sub-gap DOS distribution shows a peak at the band-gap edge, on the order of 10^17-10^21/(cm^3·eV), which is consistent with previous results. The results can be described with a model involving both Gaussian and exponential components. This tool is useful as a diagnostic for the electrical properties of oxide materials, and this study will benefit their modeling, help improve their electrical properties, and thus broaden their applications.
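
    A common parameterization for such a Gaussian-plus-exponential DOS in amorphous oxide TFTs writes g(E) = N_tail * exp(-E/kT_tail) + N_deep * exp(-((E - E_deep)/sigma)^2), with E the depth below the conduction-band edge. The sketch below fits that form to synthetic data; the parameter names, values, and log-space fitting choice are illustrative assumptions, not the paper's extracted DOS.

      import numpy as np
      from scipy.optimize import curve_fit

      def dos_model(e_below_cb, n_tail, kt_tail, n_deep, e_deep, sigma_deep):
          """Sub-gap DOS (cm^-3 eV^-1) vs. depth below the conduction-band edge (eV)."""
          tail = n_tail * np.exp(-e_below_cb / kt_tail)                        # exponential band tail
          deep = n_deep * np.exp(-(((e_below_cb - e_deep) / sigma_deep) ** 2)) # Gaussian deep states
          return tail + deep

      def log_dos(e, log_n_tail, kt_tail, log_n_deep, e_deep, sigma_deep):
          # Fitting in log space keeps the tail and deep-state regions comparably weighted
          return np.log10(dos_model(e, 10**log_n_tail, kt_tail, 10**log_n_deep, e_deep, sigma_deep))

      # Synthetic "measured" DOS over the first electron-volt below the band edge
      energies = np.linspace(0.02, 1.0, 60)
      truth = (1e20, 0.08, 5e17, 0.55, 0.12)
      rng = np.random.default_rng(1)
      measured = dos_model(energies, *truth) * rng.lognormal(0.0, 0.05, energies.size)

      p0 = (19.0, 0.10, 17.0, 0.50, 0.10)          # initial guess (log10 amplitudes)
      popt, _ = curve_fit(log_dos, energies, np.log10(measured), p0=p0, maxfev=20000)
      print("fitted (log10 N_tail, kT_tail, log10 N_deep, E_deep, sigma):", np.round(popt, 3))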

  13. PID-based error signal modeling

    NASA Astrophysics Data System (ADS)

    Yohannes, Tesfay

    1997-10-01

    This paper introduces a PID-based error signal modeling approach. The error modeling is based on the betterment process. The resulting iterative learning algorithm is introduced, and a detailed proof is provided for both linear and nonlinear systems.
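
    A generic PID-type iterative learning ("betterment") update has the form u_{k+1}(t) = u_k(t) + Kp e_k(t) + Ki ∫ e_k dt + Kd de_k/dt, applied trial after trial to the same reference. The sketch below runs such an update on a toy first-order plant; the plant, gains, and reference are illustrative choices, not the paper's formulation or proof.

      import numpy as np

      dt, T = 0.01, 2.0
      t = np.arange(0.0, T, dt)
      ref = np.sin(np.pi * t)                 # reference trajectory to be learned

      def plant(u):
          """Toy first-order plant x' = -2x + u (Euler), zero initial state each trial."""
          x = np.zeros_like(u)
          for k in range(1, u.size):
              x[k] = x[k - 1] + dt * (-2.0 * x[k - 1] + u[k - 1])
          return x

      Kp, Ki, Kd = 0.5, 0.1, 0.9              # illustrative learning gains
      u = np.zeros_like(t)
      for trial in range(25):
          y = plant(u)
          e = ref - y
          u = u + Kp * e + Ki * np.cumsum(e) * dt + Kd * np.gradient(e, dt)   # PID-type learning update

      print("RMS tracking error after learning:",
            float(np.sqrt(np.mean((ref - plant(u)) ** 2))))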

  14. Introducing DAE Systems in Undergraduate and Graduate Chemical Engineering Curriculum

    ERIC Educational Resources Information Center

    Mandela, Ravi Kumar; Sridhar, L. N.; Rengaswamy, Raghunathan

    2010-01-01

    Models play an important role in understanding chemical engineering systems. While differential equation models are taught in standard modeling and control courses, Differential Algebraic Equation (DAE) system models are not usually introduced. These models appear naturally in several chemical engineering problems. In this paper, the introduction…
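
    A minimal classroom-style example of a semi-explicit index-1 DAE is y' = -y + z with the algebraic constraint 0 = y + z - sin(t). In the sketch below the algebraic variable z is obtained at every right-hand-side evaluation by a Newton-type solve and the differential part is integrated with scipy; the specific equations are an illustrative textbook-style choice, not taken from the article.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import fsolve

      def algebraic(z, t, y):
          """Residual of the algebraic constraint g(t, y, z) = 0."""
          return y + z - np.sin(t)

      def rhs(t, y):
          """Differential part f(t, y, z) with z recovered from the constraint."""
          z = fsolve(algebraic, x0=0.0, args=(t, y[0]))[0]
          return [-y[0] + z]

      sol = solve_ivp(rhs, (0.0, 10.0), [1.0], rtol=1e-8, atol=1e-10, dense_output=True)

      # The constraint is linear here, so the exact reduced ODE y' = -2y + sin(t)
      # provides a reference solution to check against.
      ref = solve_ivp(lambda t, y: [-2.0 * y[0] + np.sin(t)], (0.0, 10.0), [1.0],
                      rtol=1e-10, atol=1e-12, dense_output=True)
      t_check = np.linspace(0.0, 10.0, 5)
      print("max deviation from reduced ODE:",
            float(np.abs(sol.sol(t_check)[0] - ref.sol(t_check)[0]).max()))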

  15. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product (here the effective area), conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
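
    The principal component representation described here amounts to writing A(E) ≈ A0(E) + Σ_j c_j v_j(E) for an ensemble of calibration curves and keeping a few components; in the fully Bayesian scheme the coefficients c_j are then sampled jointly with the spectral parameters. The sketch below builds such a representation from a synthetic ensemble; the energy grid, nominal curve, and perturbations are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      n_curves, n_energy = 500, 300
      energy = np.linspace(0.3, 8.0, n_energy)                        # keV, illustrative grid
      nominal = 500.0 * np.exp(-0.5 * ((energy - 1.5) / 1.2) ** 2)    # toy nominal effective area (cm^2)

      # Synthetic calibration ensemble: smooth correlated perturbations of the nominal curve
      wiggles = np.stack([np.sin(k * np.pi * (energy - 0.3) / 7.7) for k in (1, 2, 3)], axis=1)
      coeffs = rng.normal(scale=[0.05, 0.02, 0.01], size=(n_curves, 3))
      ensemble = nominal * (1.0 + coeffs @ wiggles.T)

      # PCA of the ensemble about its mean
      mean_curve = ensemble.mean(axis=0)
      centered = ensemble - mean_curve
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)
      explained = (s ** 2) / np.sum(s ** 2)
      n_keep = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
      components = Vt[:n_keep]                      # v_j(E)
      scales = s[:n_keep] / np.sqrt(n_curves - 1)   # one-sigma scale of each coefficient

      print(f"{n_keep} components capture 99% of the calibration variance")
      # Any plausible effective-area curve can now be drawn (or sampled in an MCMC) as:
      draw = mean_curve + (rng.normal(size=n_keep) * scales) @ components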

  16. Design-for-manufacture of gradient-index optical systems using time-varying boundary condition diffusion

    NASA Astrophysics Data System (ADS)

    Harkrider, Curtis Jason

    2000-08-01

    The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through use of diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data that is used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion. The glass sample is alternated between ion exchange with a molten salt bath and annealing. The time of each diffusion step can be used to exert control on the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility. An axicon, imaging lens, and curved radial lens, all with different index profile requirements, are designed out of a single glass composition.
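
    The multiple-step idea can be illustrated with a 1D finite-difference diffusion in which the surface boundary condition alternates between a fixed bath concentration (exchange step) and zero flux (anneal step). The sketch below uses a constant diffusivity, slab geometry, and invented times; the actual design-for-manufacture model additionally treats melt poisoning through chemical equilibrium and is coupled to lens design software.

      import numpy as np

      D = 1e-14             # diffusivity (m^2/s), assumed constant here
      L, nx = 0.5e-3, 251   # slab half-thickness (m) and grid points
      dx = L / (nx - 1)
      dt = 0.4 * dx**2 / D                 # explicit-scheme stability limit
      c = np.zeros(nx)                     # normalized exchanged-ion concentration

      def run_step(c, hours, exchange, c_bath=1.0):
          """Advance the profile; Dirichlet surface value if exchanging, else zero flux."""
          c = c.copy()
          for _ in range(int(hours * 3600 / dt)):
              lap = np.empty_like(c)
              lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
              lap[0] = 2 * (c[1] - c[0])           # symmetric (zero-flux) mid-plane
              lap[-1] = 2 * (c[-2] - c[-1])        # zero-flux surface (anneal)
              c += D * dt / dx**2 * lap
              if exchange:
                  c[-1] = c_bath                   # surface pinned to bath concentration
          return c

      # Illustrative multi-step schedule: exchange, anneal, exchange (times in hours)
      for hours, exchange in [(8, True), (16, False), (4, True)]:
          c = run_step(c, hours, exchange)
      print("surface and mid-plane concentration:", float(c[-1]), float(c[0]))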

  17. Optimization of the freeze-drying cycle: adaptation of the pressure rise analysis model to non-instantaneous isolation valves.

    PubMed

    Chouvenc, P; Vessot, S; Andrieu, J; Vacus, P

    2005-01-01

    The principal aim of this study is to extend the previously presented pressure rise analysis (PRA) model, used for monitoring the product temperature and the resistance to mass transfer of the dried layer during primary drying, to a pilot freeze-dryer equipped with a non-instantaneous isolation valve. This method, derived from the original MTM method previously published, consists of rapidly interrupting (within a few seconds) the water vapour flow from the sublimation chamber to the condenser and analysing the resulting dynamics of the total chamber pressure increase. The valve effect on the pressure rise profile observed during the isolation valve closing period was corrected by introducing into the initial PRA model a valve characteristic function factor, which turned out to be independent of the operating conditions. This new extended PRA model was validated by implementing successively the two types of valves and by analysing the pressure rise kinetics data with the corresponding PRA models under the same operating conditions. The coherence and consistency of the identified parameter values (sublimation front temperature, dried layer mass transfer resistance) allowed validation of this extended PRA model with a non-instantaneous isolation valve. These results confirm that the PRA method, with or without an instantaneous isolation valve, is appropriate for on-line monitoring of product characteristics during freeze-drying. The advantages of PRA are that the method is rapid, non-invasive, and global. Consequently, PRA might become a powerful and promising tool not only for the control of pilot freeze-dryers but also for industrial freeze-dryers equipped with external condensers.

  18. Score As You Lift (SAYL): A Statistical Relational Learning Approach to Uplift Modeling.

    PubMed

    Nassif, Houssam; Kuusisto, Finn; Burnside, Elizabeth S; Page, David; Shavlik, Jude; Costa, Vítor Santos

    We introduce Score As You Lift (SAYL), a novel Statistical Relational Learning (SRL) algorithm, and apply it to an important task in the diagnosis of breast cancer. SAYL combines SRL with the marketing concept of uplift modeling, uses the area under the uplift curve to direct clause construction and final theory evaluation, integrates rule learning and probability assignment, and conditions the addition of each new theory rule to existing ones. Breast cancer, the most common type of cancer among women, is categorized into two subtypes: an earlier in situ stage where cancer cells are still confined, and a subsequent invasive stage. Currently older women with in situ cancer are treated to prevent cancer progression, regardless of the fact that treatment may generate undesirable side-effects, and the woman may die of other causes. Younger women tend to have more aggressive cancers, while older women tend to have more indolent tumors. Therefore older women whose in situ tumors show significant dissimilarity with in situ cancer in younger women are less likely to progress, and can thus be considered for watchful waiting. Motivated by this important problem, this work makes two main contributions. First, we present the first multi-relational uplift modeling system, and introduce, implement and evaluate a novel method to guide search in an SRL framework. Second, we compare our algorithm to previous approaches, and demonstrate that the system can indeed obtain differential rules of interest to an expert on real data, while significantly improving the data uplift.
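
    One common construction of the uplift curve (the quantity whose area SAYL uses to direct search) ranks cases by model score and, for each targeted fraction, takes the difference in response rates between treatment and control within the top-ranked cases, scaled by the number targeted. The sketch below implements that standard variant on synthetic data; the helper name area_under_uplift and all data are illustrative, not the SAYL code.

      import numpy as np

      def area_under_uplift(scores, treated, outcome, n_points=101):
          """Area under the uplift curve over targeted fractions 0..1 (trapezoid rule)."""
          order = np.argsort(-scores)                 # highest-scored cases first
          treated = np.asarray(treated, bool)[order]
          outcome = np.asarray(outcome, float)[order]
          n = scores.size
          fracs = np.linspace(0.0, 1.0, n_points)
          uplift = np.zeros(n_points)
          for i, f in enumerate(fracs[1:], start=1):
              k = max(1, int(round(f * n)))
              t, c = treated[:k], ~treated[:k]
              rate_t = outcome[:k][t].mean() if t.any() else 0.0
              rate_c = outcome[:k][c].mean() if c.any() else 0.0
              uplift[i] = (rate_t - rate_c) * k       # incremental responders if the top k are targeted
          return np.trapz(uplift, fracs), fracs, uplift

      # Synthetic example: a score that is genuinely informative about uplift
      rng = np.random.default_rng(3)
      n = 5000
      score = rng.uniform(size=n)
      treated = rng.uniform(size=n) < 0.5
      outcome = rng.uniform(size=n) < (0.10 + treated * 0.15 * score)   # effect grows with the score
      auu, _, _ = area_under_uplift(score, treated, outcome.astype(float))
      print("area under the uplift curve:", round(auu, 1))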

  19. TRM4: Range performance model for electro-optical imaging systems

    NASA Astrophysics Data System (ADS)

    Keßler, Stefan; Gal, Raanan; Wittenstein, Wolfgang

    2017-05-01

    TRM4 is a commonly used model for assessing device and range performance of electro-optical imagers. The latest version, TRM4.v2, was released by Fraunhofer IOSB, Germany, in June 2016. While its predecessor, TRM3, was developed for thermal imagers, assuming blackbody targets and backgrounds, TRM4 extends the TRM approach to assess three imager categories: imagers that exploit emitted radiation (TRM4 category Thermal), reflected radiation (TRM4 category Visible/NIR/SWIR), and both emitted and reflected radiation (TRM4 category General). Performance assessment in TRM3 and TRM4 is based on the perception of standard four-bar test patterns, whether distorted by under-sampling or not. Spatial and sampling characteristics are taken into account by the Average Modulation at Optimum Phase (AMOP), which replaces the system MTF used in previous models. The Minimum Temperature Difference Perceived (MTDP) figure of merit was introduced in TRM3 for assessing the range performance of thermal imagers. In TRM4, this concept is generalized to the MDSP (Minimum Difference Signal Perceived), which can be applied to all imager categories. In this paper, we outline and discuss the TRM approach and pinpoint differences between TRM4 and TRM3. In addition, an overview of the TRM4 software and its functionality is given. Features newly introduced in TRM4, such as atmospheric turbulence, irradiation sources, and libraries, are addressed. We conclude with an outlook on future work and the new module for intensified CCD cameras that is currently under development.

  20. Differential porosimetry and permeametry for random porous media.

    PubMed

    Hilfer, R; Lemmer, A

    2015-07-01

    Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850 μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
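
    A simplified digital analogue of the differential-porosimetry-plus-Fourier idea is to compute slab-wise porosities phi(z) along one axis of a binary pore/solid image and Fourier-analyse their fluctuations to look for periodic correlations. The sketch below does this on a synthetic voxel array; the paper's method instead works on a continuum model with floating point precision, and the voxel size and injected periodicity here are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      nx, ny, nz = 64, 64, 1024
      voxel_um = 8.5                                   # voxel edge length (micrometres), assumed

      # Synthetic binary medium: ~15% porosity with a weak periodic modulation along z
      z = np.arange(nz)
      p_pore = 0.15 * (1.0 + 0.05 * np.sin(2 * np.pi * z * voxel_um / 850.0))   # 850 um period
      pore = rng.uniform(size=(nx, ny, nz)) < p_pore[None, None, :]

      # Differential (slab-wise) porosity profile along z
      phi_z = pore.mean(axis=(0, 1))

      # Fourier analysis of the porosity fluctuations
      fluct = phi_z - phi_z.mean()
      power = np.abs(np.fft.rfft(fluct)) ** 2
      freqs = np.fft.rfftfreq(nz, d=voxel_um)          # cycles per micrometre
      peak = np.argmax(power[1:]) + 1                  # skip the zero-frequency bin
      print(f"dominant period ~ {1.0 / freqs[peak]:.0f} micrometres")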
