Sample records for model-based and model-free analysis

  1. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
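
    The "model-free" route referred to above rests on numerical deconvolution of the tissue curve with a local arterial input function. Below is a minimal sketch of one common implementation, truncated-SVD deconvolution; the synthetic curves, time step, and truncation threshold are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch of model-free perfusion quantification by numerical
    # deconvolution (truncated SVD). All signals and the truncation threshold
    # are illustrative assumptions.
    import numpy as np

    def svd_deconvolve(aif, tissue, dt, sv_threshold=0.2):
        """Recover the flow-scaled residue function R(t) from
        tissue(t) = dt * conv(aif, R)(t) using truncated SVD."""
        n = len(aif)
        # Lower-triangular Toeplitz convolution matrix built from the AIF.
        A = dt * np.array([[aif[i - j] if i >= j else 0.0
                            for j in range(n)] for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        # Zero out small singular values to regularise the inversion.
        s_inv = np.zeros_like(s)
        keep = s > sv_threshold * s.max()
        s_inv[keep] = 1.0 / s[keep]
        return Vt.T @ (s_inv * (U.T @ tissue))   # CBF is often taken as max of this

    # Synthetic example: gamma-variate AIF convolved with an exponential residue.
    dt = 0.3
    t = np.arange(0, 30, dt)
    aif = (t ** 2) * np.exp(-t / 1.5)
    true_r = 0.01 * np.exp(-t / 10.0)            # flow-scaled residue function
    tissue = dt * np.convolve(aif, true_r)[: len(t)]
    r_est = svd_deconvolve(aif, tissue, dt)
    print("estimated flow-scaled residue peak:", r_est.max())
    ```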

  2. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  3. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    PubMed

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in a training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models generally achieve better prediction accuracy than Free-Wilson analysis. Moreover, the predictions of the R-group signature models are also comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient for R-group signatures. For most of the studied data sets, these contributions show a significant correlation with those from a corresponding Free-Wilson analysis. These results suggest that the R-group contributions can be used to interpret bioactivity data and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.

  4. Comparing model-based adaptive LMS filters and a model-free hysteresis loop analysis method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao

    2017-02-01

    The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and over multiple events of different size. However, the adaptive LMS model-based method indicated a greater spread of lesser damage over time and story when the baseline model is not well defined. Finally, the two algorithms are tested on a simpler structure with hysteretic behaviour typical of steel structures to quantify the impact of model mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.

  5. Rapid acquisition and model-based analysis of cell-free transcription–translation reactions from nonmodel bacteria

    PubMed Central

    Wienecke, Sarah; Ishwarbhai, Alka; Tsipa, Argyro; Aw, Rochelle; Kylilis, Nicolas; Bell, David J.; McClymont, David W.; Jensen, Kirsten; Biedendieck, Rebekka

    2018-01-01

    Native cell-free transcription–translation systems offer a rapid route to characterize the regulatory elements (promoters, transcription factors) for gene expression from nonmodel microbial hosts, which can be difficult to assess through traditional in vivo approaches. One such host, Bacillus megaterium, is a giant Gram-positive bacterium with potential biotechnology applications, although many of its regulatory elements remain uncharacterized. Here, we have developed a rapid automated platform for measuring and modeling in vitro cell-free reactions and have applied this to B. megaterium to quantify a range of ribosome binding site variants and previously uncharacterized endogenous constitutive and inducible promoters. To provide quantitative models for cell-free systems, we have also applied a Bayesian approach to infer ordinary differential equation model parameters by simultaneously using time-course data from multiple experimental conditions. Using this modeling framework, we were able to infer previously unknown transcription factor binding affinities and quantify the sharing of cell-free transcription–translation resources (energy, ribosomes, RNA polymerases, nucleotides, and amino acids) using a promoter competition experiment. This provides insight into resource-limiting factors in batch-mode cell-free synthesis. Our combined automated and modeling platform allows for the rapid acquisition and model-based analysis of cell-free transcription–translation data from uncharacterized microbial cell hosts, as well as resource competition within cell-free systems, which potentially can be applied to a range of cell-free synthetic biology and biotechnology applications. PMID:29666238
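
    As a rough illustration of the kind of ODE model and likelihood such Bayesian parameter inference operates on, the sketch below simulates a two-state transcription–translation model and scores it against time courses from several conditions at once. The model structure, rate names, and noise level are assumptions for illustration, not the authors' published model.

    ```python
    # Minimal sketch of an ODE model plus a multi-condition Gaussian likelihood,
    # of the sort a Bayesian inference scheme would sample over. Illustrative only.
    import numpy as np
    from scipy.integrate import odeint

    def txtl_ode(y, t, k_tx, k_tl, d_m):
        m, p = y                       # mRNA and reporter protein
        return [k_tx - d_m * m,        # transcription minus mRNA decay
                k_tl * m]              # translation (protein assumed stable)

    def log_likelihood(params, t, observed_by_condition, sigma=0.05):
        """Sum a Gaussian log-likelihood over several experimental conditions,
        here differing only in an assumed relative promoter strength."""
        k_tx, k_tl, d_m = params
        ll = 0.0
        for strength, data in observed_by_condition.items():
            sim = odeint(txtl_ode, [0.0, 0.0], t, args=(strength * k_tx, k_tl, d_m))
            ll += -0.5 * np.sum(((data - sim[:, 1]) / sigma) ** 2)
        return ll

    # Synthetic usage: two promoter variants sharing the same kinetic parameters.
    t = np.linspace(0, 240, 25)                      # minutes
    true = (0.02, 0.5, 0.05)
    obs = {s: odeint(txtl_ode, [0, 0], t, args=(s * true[0], true[1], true[2]))[:, 1]
           + np.random.normal(0, 0.05, t.size) for s in (0.5, 1.0)}
    print(log_likelihood(true, t, obs))
    ```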

  6. Control algorithms and applications of the wavefront sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than the conventional AO system in architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO were divided into two categories: model-free and model-based control algorithms. WFSless AO systems based on model-free control algorithms commonly treat the performance metric as a function of the control parameters and then use a search algorithm to improve that metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. Based on the brief description of these typical control algorithms, hybrid methods combining a model-free control algorithm with a model-based control algorithm were summarized. Additionally, the characteristics of the various control algorithms were compared and analyzed. We also discussed the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and imaging of extended objects.
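
    To make the model-free branch concrete, the sketch below implements one widely used metric-optimisation loop, stochastic parallel gradient descent (SPGD), against a toy sharpness metric. The metric, gains, and perturbation size are illustrative assumptions and are not taken from the paper.

    ```python
    # Minimal sketch of a model-free, metric-optimisation WFSless control loop
    # (SPGD). The quadratic "sharpness" metric and all gains are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    true_aberration = rng.normal(0, 1, 12)           # unknown modal coefficients

    def metric(dm_command):
        # Stand-in for a measured far-field sharpness metric: best when the
        # deformable-mirror command cancels the aberration.
        residual = true_aberration + dm_command
        return np.exp(-np.sum(residual ** 2))

    def spgd(n_iter=2000, gain=0.3, perturb=0.05):
        u = np.zeros_like(true_aberration)            # DM control vector
        for _ in range(n_iter):
            du = perturb * rng.choice([-1.0, 1.0], size=u.size)
            dJ = metric(u + du) - metric(u - du)      # two-sided metric probe
            u += gain * dJ * du                       # stochastic gradient step
        return u

    u = spgd()
    print("final metric:", metric(u))                 # approaches 1 as residual -> 0
    ```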

  7. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
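
    For readers unfamiliar with the distinction, the sketch below contrasts a model-free temporal-difference (Q-learning) update with a model-based value computation over an explicit world model; the toy MDP and parameters are illustrative assumptions, not the task used in the study.

    ```python
    # Minimal sketch: model-free cached values vs. model-based planning values.
    import numpy as np

    n_states, n_actions = 3, 2
    alpha, gamma = 0.1, 0.95

    # Model-free: cached action values updated from experienced transitions only.
    Q = np.zeros((n_states, n_actions))
    def model_free_update(s, a, r, s_next):
        td_error = r + gamma * Q[s_next].max() - Q[s, a]
        Q[s, a] += alpha * td_error

    # Model-based: values recomputed on demand from a learned transition/reward
    # model (here, repeated sweeps of value iteration over known T and R).
    def model_based_values(T, R, n_sweeps=50):
        V = np.zeros(n_states)
        for _ in range(n_sweeps):
            V = np.max(R + gamma * T @ V, axis=1)    # T: (states, actions, states)
        return V

    # A single experienced transition updates Q immediately, whereas the
    # model-based values change only if the world model itself changes.
    model_free_update(s=0, a=1, r=1.0, s_next=2)
    T = np.ones((n_states, n_actions, n_states)) / n_states   # toy world model
    R = np.zeros((n_states, n_actions)); R[2, :] = 1.0
    print(Q)
    print(model_based_values(T, R))
    ```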

  8. Effect of Cross-Linking on Free Volume Properties of PEG Based Thiol-Ene Networks

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, Ramesh; Vasagar, Vivek; Nazarenko, Sergei

    According to the Fox and Loshaek theory, in elastomeric networks, free volume decreases linearly with increasing cross-link density. The aim of this study is to determine whether poly(ethylene glycol) (PEG) based multicomponent thiol-ene elastomeric networks demonstrate this model behavior. Networks with a broad cross-link density range were prepared by changing the ratio of the trithiol crosslinker to PEG dithiol and then UV cured with PEG diene while maintaining 1:1 thiol:ene stoichiometry. Pressure-volume-temperature (PVT) data for the networks were generated from high-pressure dilatometry experiments and fitted using the Simha-Somcynsky equation-of-state analysis to obtain the fractional free volume of the networks. Using Positron Annihilation Lifetime Spectroscopy (PALS) analysis, the average free volume hole size of the networks was also quantified. The fractional free volume and the average free volume hole size showed a linear change with cross-link density, confirming that the Fox and Loshaek theory can be applied to this multicomponent system. Gas diffusivities of the networks showed a good correlation with free volume. A free volume based model was developed to describe the gas diffusivity trends as a function of cross-link density.

  9. Analysis of Physical and Numerical Factors for Prediction of UV Radiation from High Altitude Two-Phase Plumes

    DTIC Science & Technology

    2008-05-30

    varies from continuum inside the nozzle, to transitional in the near field, to free molecular in the far field of the plume. The scales of interest vary...unity based on the rocket length. This results in the formation of a viscous shock layer characterized by a bimodal molecular velocity distribution. The...transfer model. Previous analyses [21] have shown that the heat transfer model implemented in CFD++ is reproduced closely by the free molecular model

  10. Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.

    PubMed

    Kool, Wouter; Gershman, Samuel J; Cushman, Fiery A

    2017-09-01

    Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.

  11. The Mixed Instrumental Controller: Using Value of Information to Combine Habitual Choice and Mental Simulation

    PubMed Central

    Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian

    2013-01-01

    Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-based and model-free methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus – ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation. PMID:23459512

  12. The mixed instrumental controller: using value of information to combine habitual choice and mental simulation.

    PubMed

    Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian

    2013-01-01

    Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-based and model-free methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available "cached" value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated "Value of Information" exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus - ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation.
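
    The sketch below illustrates the arbitration idea in miniature: a value-of-information quantity that grows when cached action values are close and uncertain is weighed against a fixed simulation cost. The Gaussian-belief formulation and the numbers are assumptions for illustration, not the authors' exact equations.

    ```python
    # Minimal, hypothetical sketch of value-of-information arbitration between
    # cached (model-free) choice and mental simulation (model-based) choice.
    import numpy as np
    from scipy.stats import norm

    def value_of_information(mu, sigma):
        """Expected gain from resolving which of the top two actions is truly
        better, given Gaussian beliefs over their cached values."""
        best, other = np.argsort(mu)[::-1][:2]
        d = mu[best] - mu[other]
        s = np.sqrt(sigma[best] ** 2 + sigma[other] ** 2)
        # Expected improvement of the runner-up over the current favourite.
        return s * norm.pdf(d / s) - d * norm.cdf(-d / s)

    mu = np.array([0.52, 0.50, 0.10])      # cached (model-free) action values
    sigma = np.array([0.2, 0.2, 0.05])     # uncertainty of those values
    simulation_cost = 0.02                 # assumed effort + delay cost

    if value_of_information(mu, sigma) > simulation_cost:
        print("simulate outcomes (model-based) before choosing")
    else:
        print("choose immediately from cached values (model-free)")
    ```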

  13. Fine tuning breath-hold-based cerebrovascular reactivity analysis models.

    PubMed

    van Niftrik, Christiaan Hendrik Bas; Piccirelli, Marco; Bozinov, Oliver; Pangalu, Athina; Valavanis, Antonios; Regli, Luca; Fierstra, Jorn

    2016-02-01

    We elaborate on existing analysis methods for breath-hold (BH)-derived cerebrovascular reactivity (CVR) measurements and describe novel insights and models toward more exact CVR interpretation. Five blood-oxygen-level-dependent (BOLD) fMRI datasets of neurovascular patients with unilateral hemispheric hemodynamic impairment were used to test various BH CVR analysis methods. Temporal lag (phase), percent BOLD signal change (CVR), and explained variance (coherence) maps were calculated using three different sine models and two novel "Optimal Signal" model-free methods based on the unaffected hemisphere and the sagittal sinus fMRI signal time series, respectively. All models showed significant differences in CVR and coherence between the affected (hemodynamically impaired) and unaffected hemispheres. Voxel-wise phase determination significantly increases CVR (0.60 ± 0.18 vs. 0.82 ± 0.27; P < 0.05). Incorporating different durations of breath hold and resting period in one sine model (two-task) increased coherence in the unaffected hemisphere and eliminated the negative phase commonly obtained with one-task frequency models. The novel model-free "optimal signal" methods both explained the BOLD MR data similarly to the two-task sine model. Our CVR analysis demonstrates improved CVR and coherence after implementation of voxel-wise phase and frequency adjustment. The novel "optimal signal" methods provide a robust and feasible alternative to the sine models, as both are model-free and independent of compliance. Here, the sagittal sinus model may be advantageous, as it is independent of hemispheric CVR impairment.
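
    The sketch below illustrates the basic one-task sine-model step that the analyses above refine: regress a voxel's BOLD time series on sine and cosine terms at the breath-hold frequency, then read CVR from the amplitude and the temporal lag from the phase. The TR, task period, and synthetic signal are illustrative assumptions.

    ```python
    # Minimal sketch of a one-task sine-model fit for breath-hold CVR.
    import numpy as np

    TR, period = 2.0, 60.0                       # seconds; one BH + rest cycle
    t = np.arange(0, 360, TR)
    omega = 2 * np.pi / period

    # Synthetic voxel: 1.5% signal change, 8 s lag, around a baseline of 1000.
    bold = 1000 * (1 + 0.015 * np.sin(omega * (t - 8.0))) \
           + np.random.normal(0, 2, t.size)

    # Design matrix: intercept, sine, cosine at the task frequency.
    X = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)

    baseline = beta[0]
    amplitude = np.hypot(beta[1], beta[2])
    cvr_percent = 100 * amplitude / baseline              # percent BOLD change
    lag_seconds = -np.arctan2(beta[2], beta[1]) / omega   # temporal lag (phase)
    print(cvr_percent, lag_seconds)
    ```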

  14. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of a three-dimensional multiple-actuated wing. The free-play nonlinearities in the control surfaces are modeled theoretically by using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input and single-output controller and a multi-input and multi-output controller are designed based on unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.

  15. Tertiary structure-based analysis of microRNA–target interactions

    PubMed Central

    Gan, Hin Hark; Gunsalus, Kristin C.

    2013-01-01

    Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance of their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009

  16. Free Fall Misconceptions: Results of a Graph Based Pre-Test of Sophomore Civil Engineering Students

    ERIC Educational Resources Information Center

    Montecinos, Alicia M.

    2014-01-01

    A partially unusual behaviour was found among 14 sophomore students of civil engineering who took a pre-test for a free fall laboratory session, in the context of a general mechanics course. An analysis examining the consistency between the mathematical and physical models was made. In all cases, the students presented evidence favoring a correct free…

  17. Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation

    PubMed Central

    Dayan, Peter; Berridge, Kent C.

    2014-01-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659

  18. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    PubMed

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  19. Stability analysis of free piston Stirling engines

    NASA Astrophysics Data System (ADS)

    Bégot, Sylvie; Layes, Guillaume; Lanzetta, François; Nika, Philippe

    2013-03-01

    This paper presents a stability analysis of a free piston Stirling engine. The model and the detailed calculation of pressure losses are presented. Stability of the machine is studied by observing the eigenvalues of the model matrix. Model validation based on comparison with NASA experimental results is described. The influence of operational and construction parameters on performance and stability is presented. The results show that most parameters that are beneficial for machine power seem to induce irregular mechanical characteristics with load, suggesting that self-sustained oscillations could be difficult to maintain and control.

  20. Wave models for turbulent free shear flows

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Morris, P. J.

    1991-01-01

    New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.

  1. A study of photon propagation in free-space based on hybrid radiosity-radiance theorem.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Liang, Jimin; Wang, Lin; Yang, Da'an; Garofalakis, Anikitos; Ripoll, Jorge; Tian, Jie

    2009-08-31

    Noncontact optical imaging has attracted increasing attention in recent years due to its significant advantages in detection sensitivity, spatial resolution, image quality and system simplicity compared with contact measurement. However, photon transport simulation in free space is still an extremely challenging topic owing to the complexity of the optical system. For this purpose, this paper proposes an analytical model for photon propagation in free space based on the hybrid radiosity-radiance theorem (HRRT). It combines Lambert's cosine law and the radiance theorem to handle the influence of the complicated lens and to simplify the photon transport process in the optical system. The performance of the proposed model is evaluated and validated with numerical simulations and physical experiments. Qualitative comparison results of flux distribution at the detector are presented. In particular, error analysis demonstrates the feasibility and potential of the proposed model for simulating photon propagation in free space.

  2. An equation-free approach to agent-based computation: Bifurcation analysis and control of stationary states

    NASA Astrophysics Data System (ADS)

    Siettos, C. I.; Gear, C. W.; Kevrekidis, I. G.

    2012-08-01

    We show how the equation-free approach can be exploited to enable agent-based simulators to perform system-level computations such as bifurcation, stability analysis and controller design. We illustrate these tasks through an event-driven agent-based model describing the dynamic behaviour of many interacting investors in the presence of mimesis. Using short bursts of appropriately initialized runs of the detailed, agent-based simulator, we construct the coarse-grained bifurcation diagram of the (expected) density of agents and investigate the stability of its multiple solution branches. When the mimetic coupling between agents becomes strong enough, the stable stationary state loses its stability at a coarse turning point bifurcation. We also demonstrate how the framework can be used to design a wash-out dynamic controller that stabilizes open-loop unstable stationary states even under model uncertainty.

  3. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.

    PubMed

    Desai, Trunil S; Srivastava, Shireesh

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.

  4. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses

    PubMed Central

    Desai, Trunil S.

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package. PMID:29736347
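
    The Monte-Carlo uncertainty estimation mentioned in both records can be pictured as follows: perturb the measured data within its noise, re-estimate the flux each time, and report the spread. The toy linear flux model below is an illustrative assumption and does not reproduce FluxPyt's elementary-metabolite-unit machinery.

    ```python
    # Minimal sketch of Monte-Carlo flux uncertainty estimation. The toy linear
    # "model" linking one flux to measurements is an assumption for illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    def fit_flux(measurements, sensitivity):
        # Toy estimator: least-squares fit of a single flux v from m = S * v.
        return np.linalg.lstsq(sensitivity[:, None], measurements, rcond=None)[0][0]

    sensitivity = np.array([0.8, 1.2, 0.5, 1.0])    # assumed model sensitivities
    measured = np.array([1.61, 2.44, 0.98, 2.02])   # assumed labeling-derived data
    noise_sd = 0.05

    estimates = []
    for _ in range(2000):
        perturbed = measured + rng.normal(0, noise_sd, measured.size)
        estimates.append(fit_flux(perturbed, sensitivity))

    print("flux =", np.mean(estimates), "+/-", np.std(estimates))
    ```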

  5. Performance analysis on free-piston Stirling cryocooler based on an idealized mathematical model

    NASA Astrophysics Data System (ADS)

    Guo, Y. X.; Chao, Y. J.; Gan, Z. H.; Li, S. Z.; Wang, B.

    2017-12-01

    Free-piston Stirling cryocoolers have extensive applications owing to their simplicity in structure and reduced mass. However, the elimination of the motor and the crankshaft has made their thermodynamic characteristics different from those of Stirling cryocoolers with a displacer driving mechanism. Therefore, an idealized mathematical model has been established, and with this model, an attempt has been made to analyse the thermodynamic characteristics and the performance of the free-piston Stirling cryocooler. To verify this mathematical model, a comparison has been made between the model and a numerical model. This study reveals that due to the displacer damping force necessary for the production of cooling capacity, the free-piston Stirling cryocooler is inherently less efficient than a Stirling cryocooler with a displacer driving mechanism. Viscous flow resistance and incomplete heat transfer in the regenerator are the two major causes of the discrepancy between the results of the idealized mathematical model and the numerical model.

  6. Tube Bulge Process: Theoretical Analysis and Finite Element Simulations

    NASA Astrophysics Data System (ADS)

    Velasco, Raphael; Boudeau, Nathalie

    2007-05-01

    This paper is focused on the determination of mechanical characteristics of tubular materials using the tube bulge process. A comparative study is made between two different models: a theoretical model and finite element analysis. The theoretical model is completely developed, based first on a geometrical analysis of the tube profile during bulging, which is assumed to deform in arcs of circles. Strain and stress analysis complete the theoretical model, which allows the evaluation of tube thickness and stress state at any point of the free bulge region. Free bulging of a 304L stainless steel is simulated using Ls-Dyna 970. To validate the FE simulation approach, a comparison between the theoretical and finite element models is carried out on several parameters such as: thickness variation at the free bulge region pole with bulge height, tube thickness variation with the z axial coordinate, and von Mises stress variation with plastic strain. Finally, the influence of deviations in geometrical parameters on the flow stress curve is examined using the analytical model: deviations in the tube outer diameter, its initial thickness and the bulge height measurement are taken into account to obtain the resulting error in plastic strain and von Mises stress.

  7. To Control False Positives in Gene-Gene Interaction Analysis: Two Novel Conditional Entropy-Based Approaches

    PubMed Central

    Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng

    2013-01-01

    Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that cannot be detected by current single-point association analysis. Recently, several model-free methods (e.g. the commonly used information-based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to address this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, can maintain a consistently correct false positive rate. In scenarios of a common disease, our proposed metrics achieved better or comparable control of false positive error, compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984
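
    A conditional entropy of the phenotype given the genotype is the basic quantity such metrics build on; the sketch below computes it from a contingency table. The table is an illustrative assumption, and the paper's actual statistics condition on main effects in a more elaborate way.

    ```python
    # Minimal sketch of a conditional-entropy calculation, H(Y | X), from a
    # genotype-by-phenotype contingency table. The counts are assumptions.
    import numpy as np

    def conditional_entropy(counts):
        """H(Y | X) in bits from a contingency table with X in rows, Y in columns."""
        p_xy = counts / counts.sum()
        p_x = p_xy.sum(axis=1, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            h = -np.nansum(p_xy * np.log2(p_xy / p_x))
        return h

    # Rows: genotype classes at a locus pair; columns: case / control counts.
    table = np.array([[30.0, 70.0],
                      [55.0, 45.0],
                      [20.0, 80.0]])
    print("H(phenotype | genotype) =", conditional_entropy(table), "bits")
    ```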

  8. Free-space optical channel simulator for weak-turbulence conditions.

    PubMed

    Bykhovsky, Dima

    2015-11-01

    Free-space optical (FSO) communication may be severely influenced by the inevitable turbulence effect that results in channel gain fluctuations and fading. The objective of this paper is to provide a simple and effective simulator of the weak-turbulence FSO channel that emulates the influence of the temporal covariance effect. Specifically, the proposed model is based on lognormal distributed samples with a corresponding correlation time. The simulator is based on the solution of the first-order stochastic differential equation (SDE). The results of the provided SDE analysis reveal its efficacy for turbulent channel modeling.
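
    A minimal sketch of this simulator idea is given below: integrate a first-order (Ornstein-Uhlenbeck-type) SDE to obtain a Gaussian process with the desired correlation time, then exponentiate it so the channel-gain samples are lognormal. The scintillation variance and correlation time are illustrative assumptions.

    ```python
    # Minimal sketch of a temporally correlated lognormal channel-gain generator.
    import numpy as np

    def weak_turbulence_gain(n_samples, dt, tau_c, sigma_x2, rng=None):
        """Correlated lognormal gain h = exp(2X), with X an OU process of
        correlation time tau_c and stationary variance sigma_x2."""
        rng = rng or np.random.default_rng()
        x = np.empty(n_samples)
        x[0] = rng.normal(0.0, np.sqrt(sigma_x2))
        a = np.exp(-dt / tau_c)                          # one-step autocorrelation
        noise_sd = np.sqrt(sigma_x2 * (1.0 - a ** 2))    # keeps variance stationary
        for k in range(1, n_samples):
            x[k] = a * x[k - 1] + rng.normal(0.0, noise_sd)
        return np.exp(2.0 * x)                           # lognormal gain samples

    h = weak_turbulence_gain(n_samples=50_000, dt=1e-4, tau_c=1e-2, sigma_x2=0.05)
    print("mean gain:", h.mean(), "scintillation index:", h.var() / h.mean() ** 2)
    ```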

  9. Postbuckling analysis of shear deformable composite flat panels taking into account geometrical imperfections

    NASA Technical Reports Server (NTRS)

    Librescu, L.; Stein, M.

    1990-01-01

    The effects of initial geometrical imperfections on the postbuckling response of flat laminated composite panels to uniaxial and biaxial compressive loading are investigated analytically. The derivation of the mathematical model on the basis of first-order transverse shear deformation theory is outlined, and numerical results for perfect and imperfect, single-layer and three-layer square plates with free-free, clamped-clamped, or free-clamped edges are presented in graphs and briefly characterized. The present approach is shown to be more accurate than analyses based on the classical Kirchhoff plate model.

  10. Multiscale geometric modeling of macromolecules I: Cartesian representation

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2014-01-01

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the polarized curvature, for the prediction of protein binding sites.

  11. RuleMonkey: software for stochastic simulation of rule-based models

    PubMed Central

    2010-01-01

    Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
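
    The stochastic simulation step that network-free simulators are built on can be illustrated with an ordinary direct-method (Gillespie-style) loop, shown below for a simple reversible binding reaction. This toy example enumerates its two reactions explicitly and does not attempt the rule-based, network-free bookkeeping that RuleMonkey implements; rate constants and counts are assumptions.

    ```python
    # Minimal sketch of direct-method stochastic simulation for A + B <-> C.
    import numpy as np

    rng = np.random.default_rng(2)

    def gillespie(x0, t_end, kf=0.001, kr=0.1):
        x = np.array(x0, dtype=float)            # counts of [A, B, C]
        t, trajectory = 0.0, [(0.0, x.copy())]
        while t < t_end:
            a = np.array([kf * x[0] * x[1], kr * x[2]])   # reaction propensities
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)                # time to next event
            if rng.random() * a0 < a[0]:
                x += [-1, -1, 1]                          # binding fires
            else:
                x += [1, 1, -1]                           # unbinding fires
            trajectory.append((t, x.copy()))
        return trajectory

    traj = gillespie([100, 80, 0], t_end=50.0)
    print("final counts [A, B, C]:", traj[-1][1])
    ```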

  12. Instrumentation and telemetry systems for free-flight drop model testing

    NASA Technical Reports Server (NTRS)

    Hyde, Charles R.; Massie, Jeffrey J.

    1993-01-01

    This paper presents instrumentation and telemetry system techniques used in free-flight research drop model testing at the NASA Langley Research Center. The free-flight drop model test technique is used to conduct flight dynamics research of high performance aircraft using dynamically scaled models. The free-flight drop model flight testing supplements research using computer analysis and wind tunnel testing. The drop models are scaled to approximately 20 percent of the size of the actual aircraft. This paper presents an introduction to the Free-Flight Drop Model Program which is followed by a description of the current instrumentation and telemetry systems used at the NASA Langley Research Center, Plum Tree Test Site. The paper describes three telemetry downlinks used to acquire the data, video, and radar tracking information from the model. Also described are two telemetry uplinks, one used to fly the model employing a ground-based flight control computer and a second to activate commands for visual tracking and parachute recovery of the model. The paper concludes with a discussion of free-flight drop model instrumentation and telemetry system development currently in progress for future drop model projects at the NASA Langley Research Center.

  13. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
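
    The histogram-of-oriented-gradients descriptor at the core of the approach is readily computed with scikit-image, as sketched below on a synthetic two-phase image; the image and HOG parameters are illustrative assumptions.

    ```python
    # Minimal sketch of extracting a HOG descriptor from a synthetic "micrograph".
    import numpy as np
    from skimage.feature import hog

    # Stand-in for a 2D micrograph: a grid of bright rectangular "precipitates".
    image = np.zeros((128, 128))
    image[16:48, 16:48] = 1.0
    image[72:104, 64:112] = 1.0            # a more elongated particle

    features = hog(image,
                   orientations=9,          # angular bins of gradient direction
                   pixels_per_cell=(16, 16),
                   cells_per_block=(2, 2),
                   block_norm="L2-Hys")

    # The descriptor summarises local edge orientations without ever segmenting
    # the phases, so shape statistics can be read from its orientation histogram.
    print("HOG feature vector length:", features.shape[0])
    ```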

  14. Model-based influences on humans’ choices and striatal prediction errors

    PubMed Central

    Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.

    2011-01-01

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563

  15. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    PubMed

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.

  16. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

    PubMed Central

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-01-01

    The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806

  17. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    NASA Technical Reports Server (NTRS)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well recognized and well characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was done as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these boundary conditions.

  18. Model free approach to kinetic analysis of real-time hyperpolarized 13C magnetic resonance spectroscopy data.

    PubMed

    Hill, Deborah K; Orton, Matthew R; Mariotti, Erika; Boult, Jessica K R; Panek, Rafal; Jafar, Maysam; Parkes, Harold G; Jamin, Yann; Miniotis, Maria Falck; Al-Saffar, Nada M S; Beloueche-Babari, Mounia; Robinson, Simon P; Leach, Martin O; Chung, Yuen-Li; Eykyn, Thomas R

    2013-01-01

    Real-time detection of the rates of metabolic flux, or exchange rates of endogenous enzymatic reactions, is now feasible in biological systems using Dynamic Nuclear Polarization Magnetic Resonance. Derivation of reaction rate kinetics from this technique typically requires multi-compartmental modeling of dynamic data, and results are therefore model-dependent and prone to misinterpretation. We present a model-free formalism based on the ratio of total areas under the curve (AUC) of the injected and product metabolite, for example pyruvate and lactate. A theoretical framework to support this novel analysis approach is described, and demonstrates that the AUC ratio is proportional to the forward rate constant k. We show that the model-free approach strongly correlates with k for whole cell in vitro experiments across a range of cancer cell lines, and detects response in cells treated with the pan-class I PI3K inhibitor GDC-0941 with comparable or greater sensitivity. The same result is seen in vivo with tumor xenograft-bearing mice, in control tumors and following drug treatment with dichloroacetate. An important finding is that the area under the curve is independent of both the input function and any other metabolic pathways arising from the injected metabolite. This model-free approach provides a robust and clinically relevant alternative to kinetic model-based rate measurements in the clinical translation of hyperpolarized 13C metabolic imaging in humans, where measurement of the input function can be problematic.

  19. Model Free Approach to Kinetic Analysis of Real-Time Hyperpolarized 13C Magnetic Resonance Spectroscopy Data

    PubMed Central

    Mariotti, Erika; Boult, Jessica K. R.; Panek, Rafal; Jafar, Maysam; Parkes, Harold G.; Jamin, Yann; Miniotis, Maria Falck; Al-Saffar, Nada M. S.; Beloueche-Babari, Mounia; Robinson, Simon P.; Leach, Martin O.; Chung, Yuen-Li; Eykyn, Thomas R.

    2013-01-01

    Real-time detection of the rates of metabolic flux, or exchange rates of endogenous enzymatic reactions, is now feasible in biological systems using Dynamic Nuclear Polarization Magnetic Resonance. Derivation of reaction rate kinetics from this technique typically requires multi-compartmental modeling of dynamic data, and results are therefore model-dependent and prone to misinterpretation. We present a model-free formulism based on the ratio of total areas under the curve (AUC) of the injected and product metabolite, for example pyruvate and lactate. A theoretical framework to support this novel analysis approach is described, and demonstrates that the AUC ratio is proportional to the forward rate constant k. We show that the model-free approach strongly correlates with k for whole cell in vitro experiments across a range of cancer cell lines, and detects response in cells treated with the pan-class I PI3K inhibitor GDC-0941 with comparable or greater sensitivity. The same result is seen in vivo with tumor xenograft-bearing mice, in control tumors and following drug treatment with dichloroacetate. An important finding is that the area under the curve is independent of both the input function and of any other metabolic pathways arising from the injected metabolite. This model-free approach provides a robust and clinically relevant alternative to kinetic model-based rate measurements in the clinical translation of hyperpolarized 13C metabolic imaging in humans, where measurement of the input function can be problematic. PMID:24023724

  20. Development of an accelerometer-based multivariate model to predict free-living energy expenditure in a large military cohort.

    PubMed

    Horner, Fleur; Bilzon, James L; Rayson, Mark; Blacker, Sam; Richmond, Victoria; Carter, James; Wright, Anthony; Nevill, Alan

    2013-01-01

    This study developed a multivariate model to predict free-living energy expenditure (EE) in independent military cohorts. Two hundred and eighty-eight individuals (20.6 ± 3.9 years, 67.9 ± 12.0 kg, 1.71 ± 0.10 m) from 10 cohorts wore accelerometers during observation periods of 7 or 10 days. Accelerometer counts (PAC) were recorded at 1-minute epochs. Total energy expenditure (TEE) and physical activity energy expenditure (PAEE) were derived using the doubly labelled water technique. Data were reduced to n = 155 based on wear-time. Associations between PAC and EE were assessed using allometric modelling. Models were derived using multiple log-linear regression analysis and gender differences assessed using analysis of covariance. In all models PAC, height and body mass were related to TEE (P < 0.01). For models predicting TEE (r² = 0.65, SE = 462 kcal·d⁻¹ (13.0%)), PAC explained 4% of the variance. For models predicting PAEE (r² = 0.41, SE = 490 kcal·d⁻¹ (32.0%)), PAC accounted for 6% of the variance. Accelerometry increases the accuracy of EE estimation in military populations. However, the unique nature of military life means accurate prediction of individual free-living EE is highly dependent on anthropometric measurements.
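
    The following Python fragment is a minimal, hypothetical sketch of an allometric, log-linear fit of the kind described above; the data, coefficients and model terms are invented for illustration and do not reproduce the study's actual model or estimates.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 155
      mass = rng.normal(68.0, 12.0, n)        # body mass, kg
      height = rng.normal(1.71, 0.10, n)      # height, m
      pac = rng.lognormal(12.0, 0.3, n)       # accelerometer counts per day (synthetic)

      # Synthetic "true" TEE generated from an assumed allometric form plus noise (kcal/day).
      tee = 900.0 * mass**0.5 * height**0.4 * (pac / 1e5)**0.05 * rng.lognormal(0.0, 0.1, n)

      # Log-linear (allometric) fit: ln(TEE) = b0 + b1*ln(mass) + b2*ln(height) + b3*ln(PAC).
      X = np.column_stack([np.ones(n), np.log(mass), np.log(height), np.log(pac)])
      beta, *_ = np.linalg.lstsq(X, np.log(tee), rcond=None)
      resid = np.log(tee) - X @ beta
      r2 = 1.0 - np.sum(resid**2) / np.sum((np.log(tee) - np.log(tee).mean())**2)
      print("fitted exponents (mass, height, PAC):", np.round(beta[1:], 2), " R2 (log scale):", round(r2, 2))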

  1. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
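
    The bmem package itself is written in R; the Python sketch below only illustrates the general Monte Carlo logic of bootstrap-based power estimation for a simple mediation model (X -> M -> Y) under assumed effect sizes, and is not the package's implementation.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(n, a, b, c):
          # Simple mediation model: M = a*X + e1, Y = c*X + b*M + e2.
          x = rng.normal(size=n)
          m = a * x + rng.normal(size=n)
          y = c * x + b * m + rng.normal(size=n)
          return x, m, y

      def ab_estimate(x, m, y):
          # Path a: slope of M on X; path b: coefficient of M when Y is regressed on X and M.
          a_hat = np.polyfit(x, m, 1)[0]
          X = np.column_stack([np.ones_like(x), x, m])
          b_hat = np.linalg.lstsq(X, y, rcond=None)[0][2]
          return a_hat * b_hat

      def power(n, a=0.3, b=0.3, c=0.1, n_rep=100, n_boot=300, alpha=0.05):
          # Small replication counts keep the sketch fast; a real power study would use far more.
          hits = 0
          for _ in range(n_rep):
              x, m, y = simulate(n, a, b, c)
              boots = []
              for _ in range(n_boot):
                  idx = rng.integers(0, n, n)            # resample cases with replacement
                  boots.append(ab_estimate(x[idx], m[idx], y[idx]))
              lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
              hits += (lo > 0) or (hi < 0)               # percentile bootstrap CI excludes zero
          return hits / n_rep

      print("estimated power to detect a*b at n = 100:", power(100))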

  2. Probabilistic grammatical model for helix‐helix contact site classification

    PubMed Central

    2013-01-01

    Background Hidden Markov Models power many state‐of‐the‐art tools in the field of protein bioinformatics. While excelling in their tasks, these methods of protein analysis do not convey directly information on medium‐ and long‐range residue‐residue interactions. This requires an expressive power of at least context‐free grammars. However, application of more powerful grammar formalisms to protein analysis has been surprisingly limited. Results In this work, we present a probabilistic grammatical framework for problem‐specific protein languages and apply it to classification of transmembrane helix‐helix pairs configurations. The core of the model consists of a probabilistic context‐free grammar, automatically inferred by a genetic algorithm from only a generic set of expert‐based rules and positive training samples. The model was applied to produce sequence based descriptors of four classes of transmembrane helix‐helix contact site configurations. The highest performance of the classifiers reached AUCROC of 0.70. The analysis of grammar parse trees revealed the ability of representing structural features of helix‐helix contact sites. Conclusions We demonstrated that our probabilistic context‐free framework for analysis of protein sequences outperforms the state of the art in the task of helix‐helix contact site classification. However, this is achieved without necessarily requiring modeling long range dependencies between interacting residues. A significant feature of our approach is that grammar rules and parse trees are human‐readable. Thus they could provide biologically meaningful information for molecular biologists. PMID:24350601

  3. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
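
    The model-free learner referred to above can be illustrated with a standard delta-rule (Q-learning style) update, in which a reward prediction error drives value updating; the sketch below is a generic textbook example with assumed parameters, not the task or analysis used in the study.

      import numpy as np

      rng = np.random.default_rng(2)

      n_trials = 200
      reward_prob = [0.8, 0.2]        # hypothetical payoff probabilities of two actions
      alpha, beta = 0.1, 3.0          # learning rate and softmax inverse temperature
      q = np.zeros(2)                 # model-free action values

      for t in range(n_trials):
          p = np.exp(beta * q) / np.exp(beta * q).sum()    # softmax choice rule
          action = rng.choice(2, p=p)
          reward = float(rng.random() < reward_prob[action])
          rpe = reward - q[action]                         # reward prediction error
          q[action] += alpha * rpe                         # delta-rule (model-free) update

      print("learned action values:", np.round(q, 2))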

  4. Model-based influences on humans' choices and striatal prediction errors.

    PubMed

    Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J

    2011-03-24

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait.

    PubMed

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-12-31

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits.

  6. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait

    PubMed Central

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-01-01

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits. PMID:14975142

  7. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    DTIC Science & Technology

    2015-03-16

    Search-result snippets for this report indicate that a global sensitivity analysis, using the variance-based method of Sobol, was performed to estimate which parameters controlled the performance of the reduced-order coagulation model, with each reported sensitivity value accompanied by the maximum uncertainty in that value estimated by the Sobol method [Sobol, I. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates].
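
    The report's coagulation model is not reproduced here; as a generic illustration of the variance-based Sobol approach mentioned in the snippets, the sketch below estimates first-order Sobol indices for a toy three-parameter function using a plain pick-and-freeze Monte Carlo estimator.

      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          # Toy model standing in for a reduced-order model: y = x1 + 2*x2^2 + 0.5*x1*x3.
          return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

      n, d = 20000, 3
      A = rng.uniform(-1.0, 1.0, (n, d))      # two independent input sample matrices
      B = rng.uniform(-1.0, 1.0, (n, d))
      yA, yB = model(A), model(B)
      var_y = np.var(np.concatenate([yA, yB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                 # "pick-and-freeze": swap in column i from B
          yABi = model(ABi)
          s_i = np.mean(yB * (yABi - yA)) / var_y   # first-order Sobol index estimator
          print(f"S_{i + 1} ~ {s_i:.2f}")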

  8. Future Issues and Approaches to Health Monitoring and Failure Prevention for Oil-Free Gas Turbines

    NASA Technical Reports Server (NTRS)

    DellaCorte, Christopher

    2004-01-01

    Recent technology advances in foil air bearings, high-temperature solid lubricants and computer-based modeling have enabled the development of small Oil-Free gas turbines. These turbomachines are currently commercialized as small (<100 kW) microturbine generators and larger machines are being developed. Based upon these successes and the high potential payoffs offered by Oil-Free systems, NASA, industry, and other government entities are anticipating Oil-Free gas turbine propulsion systems to proliferate future markets. Since an Oil-Free engine has no oil system, traditional approaches to health monitoring and diagnostics, such as chip detection, oil analysis, and possibly vibration signature analyses (e.g., ball pass frequency) will be unavailable. As such, new approaches will need to be considered. These could include shaft orbit analyses, foil bearing temperature measurements, embedded wear sensors and start-up/coast down speed analysis. In addition, novel, as yet undeveloped techniques may emerge based upon concurrent developments in MEMS technology. This paper introduces Oil-Free technology, reviews the current state of the art and potential for future turbomachinery applications and discusses possible approaches to health monitoring, diagnostics and failure prevention.

  9. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  10. Analytical and experimental investigation of a 1/8-scale dynamic model of the shuttle orbiter. Volume 3B: Supporting data

    NASA Technical Reports Server (NTRS)

    Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.

    1974-01-01

    The NASA Structural Analysis System (NASTRAN) Model 1 finite element idealization, input data, and detailed analytical results are presented. The data presented include: substructuring analysis for normal modes, plots of member data, plots of symmetric free-free modes, plots of antisymmetric free-free modes, analysis of the wing, analysis of the cargo doors, analysis of the payload, and analysis of the orbiter.

  11. Regression-based model of skin diffuse reflectance for skin color analysis

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model includes the values of spectral reflectance in the visible spectra for Japanese women. The modified Lambert Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, in the above range.

  12. Analysis of Functional Coupling: Mitochondrial Creatine Kinase and Adenine Nucleotide Translocase

    PubMed Central

    Vendelin, Marko; Lemba, Maris; Saks, Valdur A.

    2004-01-01

    The mechanism of functional coupling between mitochondrial creatine kinase (MiCK) and adenine nucleotide translocase (ANT) in isolated heart mitochondria is analyzed. Two alternative mechanisms are studied: 1), dynamic compartmentation of ATP and ADP, which assumes the differences in concentrations of the substrates between intermembrane space and surrounding solution due to some diffusion restriction and 2), direct transfer of the substrates between MiCK and ANT. The mathematical models based on these possible mechanisms were composed and simulation results were compared with the available experimental data. The first model, based on a dynamic compartmentation mechanism, was not sufficient to reproduce the measured values of apparent dissociation constants of MiCK reaction coupled to oxidative phosphorylation. The second model, which assumes the direct transfer of substrates between MiCK and ANT, is shown to be in good agreement with experiments—i.e., the second model reproduced the measured constants and the estimated ADP flux, entering mitochondria after the MiCK reaction. This model is thermodynamically consistent, utilizing the free energy profiles of reactions. The analysis revealed the minimal changes in the free energy profile of the MiCK-ANT interaction required to reproduce the experimental data. A possible free energy profile of the coupled MiCK-ANT system is presented. PMID:15240503

  13. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate finite sample performance of the proposed method under various settings. Applications to a glioma tumor data set and a breast cancer gene expression survival data set are shown to illustrate the new methodology in real data analysis.

  14. Theory for polymer analysis using nanopore-based single-molecule mass spectrometry

    PubMed Central

    Reiner, Joseph E.; Kasianowicz, John J.; Nablo, Brian J.; Robertson, Joseph W. F.

    2010-01-01

    Nanometer-scale pores have demonstrated potential for the electrical detection, quantification, and characterization of molecules for biomedical applications and the chemical analysis of polymers. Despite extensive research in the nanopore sensing field, there is a paucity of theoretical models that incorporate the interactions between chemicals (i.e., solute, solvent, analyte, and nanopore). Here, we develop a model that simultaneously describes both the current blockade depth and residence times caused by individual poly(ethylene glycol) (PEG) molecules in a single α-hemolysin ion channel. Modeling polymer-cation binding leads to a description of two significant effects: a reduction in the mobile cation concentration inside the pore and an increase in the affinity between the polymer and the pore. The model was used to estimate the free energy of formation for K+-PEG inside the nanopore (≈-49.7 meV) and the free energy of PEG partitioning into the nanopore (≈0.76 meV per ethylene glycol monomer). The results suggest that rational, physical models for the analysis of analyte-nanopore interactions will develop the full potential of nanopore-based sensing for chemical and biological applications. PMID:20566890

  15. Noisy scale-free networks

    NASA Astrophysics Data System (ADS)

    Scholz, Jan; Dejori, Mathäus; Stetter, Martin; Greiner, Martin

    2005-05-01

    The impact of observational noise on the analysis of scale-free networks is studied. Various noise sources are modeled as random link removal, random link exchange and random link addition. Emphasis is on the resulting modifications for the node-degree distribution and for a functional ranking based on betweenness centrality. The implications for estimated gene-expressed networks for childhood acute lymphoblastic leukemia are discussed.
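
    A small illustration of this kind of perturbation analysis, using networkx on a synthetic Barabási-Albert graph rather than the gene-expression networks studied in the paper: random link removal is applied and its effect on mean degree and on a betweenness-based ranking is inspected.

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(4)

      G = nx.barabasi_albert_graph(500, 3, seed=4)       # scale-free "true" network

      # Observational noise modeled as random link removal (10% of edges).
      noisy = G.copy()
      edges = list(noisy.edges())
      drop = rng.choice(len(edges), size=len(edges) // 10, replace=False)
      noisy.remove_edges_from([edges[i] for i in drop])

      # Compare the functional ranking based on betweenness centrality.
      bc_true = nx.betweenness_centrality(G)
      bc_noisy = nx.betweenness_centrality(noisy)
      top_true = sorted(bc_true, key=bc_true.get, reverse=True)[:20]
      top_noisy = sorted(bc_noisy, key=bc_noisy.get, reverse=True)[:20]

      print("mean degree before/after:",
            np.mean([d for _, d in G.degree()]), np.mean([d for _, d in noisy.degree()]))
      print("overlap of top-20 betweenness ranking:", len(set(top_true) & set(top_noisy)) / 20)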

  16. Introduction, comparison, and validation of Meta‐Essentials: A free and simple tool for meta‐analysis

    PubMed Central

    van Rhee, Henk; Hak, Tony

    2017-01-01

    We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932
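
    Meta-Essentials itself is an Excel-based workbook; the Python sketch below, with made-up effect sizes, only illustrates the underlying computation mentioned above: a DerSimonian-Laird estimate of the between-study variance followed by a Knapp-Hartung adjusted confidence interval for the pooled effect.

      import numpy as np
      from scipy import stats

      # Hypothetical per-study effect sizes (e.g., Hedges' g) and their variances.
      y = np.array([0.30, 0.10, 0.45, 0.25, 0.60])
      v = np.array([0.020, 0.030, 0.015, 0.025, 0.040])
      k = len(y)

      # DerSimonian-Laird estimate of the between-study variance tau^2.
      w = 1.0 / v
      ybar = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - ybar) ** 2)
      C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (Q - (k - 1)) / C)

      # Random-effects pooled estimate with Knapp-Hartung adjusted standard error (t with k-1 df).
      w_star = 1.0 / (v + tau2)
      mu = np.sum(w_star * y) / np.sum(w_star)
      se_kh = np.sqrt(np.sum(w_star * (y - mu) ** 2) / ((k - 1) * np.sum(w_star)))
      t_crit = stats.t.ppf(0.975, df=k - 1)
      print(f"pooled effect = {mu:.3f}, 95% CI = [{mu - t_crit * se_kh:.3f}, {mu + t_crit * se_kh:.3f}]")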

  17. Actin-based propulsion of a microswimmer.

    PubMed

    Leshansky, A M

    2006-07-01

    A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.

  18. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world

    PubMed Central

    McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey

    2012-01-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030

  19. One-year test-retest reliability of intrinsic connectivity network fMRI in older adults

    PubMed Central

    Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B; Kramer, Joel H.; Seeley, William W.

    2014-01-01

    “Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free, independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remains limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491

  20. Diffusion of Super-Gaussian Profiles

    ERIC Educational Resources Information Center

    Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.

    2007-01-01

    The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…

  1. Optimal control strategy for a novel computer virus propagation model on scale-free networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chunming; Huang, Haitao

    2016-06-01

    This paper aims to study the combined impact of system reinstallation and network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when its spreading threshold is less than one; conversely, it is proved that the viral equilibrium is permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on the spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.
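
    The SLBOS model's own spreading threshold is not reproduced here; as a generic illustration of how thresholds depend on scale-free topology, the sketch below computes the classical degree-based mean-field threshold for an SIS-type process, lambda_c = <k>/<k^2>, on a synthetic Barabási-Albert network.

      import networkx as nx
      import numpy as np

      G = nx.barabasi_albert_graph(2000, 3, seed=5)       # scale-free contact network
      deg = np.array([d for _, d in G.degree()], dtype=float)

      # Degree-based mean-field threshold for an SIS-type process: lambda_c = <k> / <k^2>.
      k_mean = deg.mean()
      k2_mean = (deg ** 2).mean()
      lam_c = k_mean / k2_mean
      print(f"<k> = {k_mean:.1f}, <k^2> = {k2_mean:.1f}, threshold lambda_c ~ {lam_c:.4f}")

      # Heavy-tailed degree distributions drive <k^2> up and the threshold down,
      # which is why spreading processes on scale-free networks are hard to suppress.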

  2. Multibody dynamic analysis using a rotation-free shell element with corotational frame

    NASA Astrophysics Data System (ADS)

    Shi, Jiabei; Liu, Zhuyong; Hong, Jiazhen

    2018-03-01

    Rotation-free shell formulation is a simple and effective method to model a shell with large deformation. Moreover, it can be compatible with the existing theories of the finite element method. However, a rotation-free shell is seldom employed in multibody systems. Using a derivative of rigid body motion, an efficient nonlinear shell model is proposed based on the rotation-free shell element and corotational frame. The bending and membrane strains of the shell have been simplified by isolating deformational displacements from the detailed description of rigid body motion. The consistent stiffness matrix can be obtained easily in this form of shell model. To model the multibody system consisting of the presented shells, joint kinematic constraints including translational and rotational constraints are deduced in the context of the geometrically nonlinear rotation-free element. A simple node-to-surface contact discretization and penalty method are adopted for contacts between shells. A series of analyses for multibody system dynamics are presented to validate the proposed formulation. Furthermore, the deployment of a large-scale solar array is presented to verify the comprehensive performance of the nonlinear shell model.

  3. A Complex Network Approach to Distributional Semantic Models

    PubMed Central

    Utsumi, Akira

    2015-01-01

    A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models. PMID:26295940

  4. Risk Factors for Addiction and Their Association with Model-Based Behavioral Control.

    PubMed

    Reiter, Andrea M F; Deserno, Lorenz; Wilbertz, Tilmann; Heinze, Hans-Jochen; Schlagenhauf, Florian

    2016-01-01

    Addiction shows familial aggregation and previous endophenotype research suggests that healthy relatives of addicted individuals share altered behavioral and cognitive characteristics with individuals suffering from addiction. In this study we asked whether impairments in behavioral control proposed for addiction, namely a shift from goal-directed, model-based toward habitual, model-free control, extends toward an unaffected sample (n = 20) of adult children of alcohol-dependent fathers as compared to a sample without any personal or family history of alcohol addiction (n = 17). Using a sequential decision-making task designed to investigate model-free and model-based control combined with a computational modeling analysis, we did not find any evidence for altered behavioral control in individuals with a positive family history of alcohol addiction. Independent of family history of alcohol dependence, we however observed that the interaction of two different risk factors of addiction, namely impulsivity and cognitive capacities, predicts the balance of model-free and model-based behavioral control. Post-hoc tests showed a positive association of model-based behavior with cognitive capacity in the lower, but not in the higher impulsive group of the original sample. In an independent sample of particularly high- vs. low-impulsive individuals, we confirmed the interaction effect of cognitive capacities and high vs. low impulsivity on model-based control. In the confirmation sample, a positive association of omega with cognitive capacity was observed in highly impulsive individuals, but not in low impulsive individuals. Due to the moderate sample size of the study, further investigation of the association of risk factors for addiction with model-based behavior in larger sample sizes is warranted.

  5. Analysis of non-linear aeroelastic response of a supersonic thick fin with plunging, pitching and flapping free-plays

    NASA Astrophysics Data System (ADS)

    Firouz-Abadi, R. D.; Alavi, S. M.; Salarieh, H.

    2013-07-01

    The flutter of a 3-D rigid fin with double-wedge section and free-play in flapping, plunging and pitching degrees-of-freedom operating in supersonic and hypersonic flight speed regimes has been considered. The aerodynamic model is obtained by local usage of the piston theory behind the shock and expansion analysis, and the structural model is obtained based on the Lagrange equation of motion. Such a model presents a fast, accurate algorithm for studying the aeroelastic behavior of the thick supersonic fin in the time domain. The dynamic behavior of the fin is considered over a large number of parameters that characterize the aeroelastic system. Results show that the free-play in the pitching, plunging and flapping degrees-of-freedom has significant effects on the oscillation exhibited by the aeroelastic system in the supersonic/hypersonic flight speed regimes. The simulations also show that the aeroelastic system behavior is greatly affected by some parameters, such as the Mach number, thickness, angle of attack, hinge position and sweep angle.

  6. Thermodynamic Characterization of Hydration Sites from Integral Equation-Derived Free Energy Densities: Application to Protein Binding Sites and Ligand Series.

    PubMed

    Güssregen, Stefan; Matter, Hans; Hessler, Gerhard; Lionta, Evanthia; Heil, Jochen; Kast, Stefan M

    2017-07-24

    Water molecules play an essential role for mediating interactions between ligands and protein binding sites. Displacement of specific water molecules can favorably modulate the free energy of binding of protein-ligand complexes. Here, the nature of water interactions in protein binding sites is investigated by 3D RISM (three-dimensional reference interaction site model) integral equation theory to understand and exploit local thermodynamic features of water molecules by ranking their possible displacement in structure-based design. Unlike molecular dynamics-based approaches, 3D RISM theory allows for fast and noise-free calculations using the same detailed level of solute-solvent interaction description. Here we correlate molecular water entities instead of mere site density maxima with local contributions to the solvation free energy using novel algorithms. Distinct water molecules and hydration sites are investigated in multiple protein-ligand X-ray structures, namely streptavidin, factor Xa, and factor VIIa, based on 3D RISM-derived free energy density fields. Our approach allows the semiquantitative assessment of whether a given structural water molecule can potentially be targeted for replacement in structure-based design. Finally, PLS-based regression models from free energy density fields used within a 3D-QSAR approach (CARMa - comparative analysis of 3D RISM Maps) are shown to be able to extract relevant information for the interpretation of structure-activity relationship (SAR) trends, as demonstrated for a series of serine protease inhibitors.

  7. Stabilized High-order Galerkin Methods Based on a Parameter-free Dynamic SGS Model for LES

    DTIC Science & Technology

    2015-01-01

    Search-result snippets for this report indicate that the subgrid-scale stresses obtained via the parameter-free dynamic SGS (Dyn-SGS) model are residual-based, so the effect of the artificial diffusion is minimal in regions where the solution is smooth; the snippets also reference a section defining the LES equations and the Dyn-SGS model and note that a more proper definition for LES should be adopted in future work.

  8. Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.

    PubMed

    Davidich, Maria; Köster, Gerta

    2013-01-01

    Building a reliable predictive model of pedestrian motion is very challenging: Ideally, such models should be based on observations made in both controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: For instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are: the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.

  9. Can model-free reinforcement learning explain deontological moral judgments?

    PubMed

    Ayars, Alisabeth

    2016-05-01

    Dual-systems frameworks propose that moral judgments are derived from both an immediate emotional response, and controlled/rational cognition. Recently Cushman (2013) proposed a new dual-system theory based on model-free and model-based reinforcement learning. Model-free learning attaches values to actions based on their history of reward and punishment, and explains some deontological, non-utilitarian judgments. Model-based learning involves the construction of a causal model of the world and allows for far-sighted planning; this form of learning fits well with utilitarian considerations that seek to maximize certain kinds of outcomes. I present three concerns regarding the use of model-free reinforcement learning to explain deontological moral judgment. First, many actions that humans find aversive from model-free learning are not judged to be morally wrong. Moral judgment must require something in addition to model-free learning. Second, there is a dearth of evidence for central predictions of the reinforcement account-e.g., that people with different reinforcement histories will, all else equal, make different moral judgments. Finally, to account for the effect of intention within the framework requires certain assumptions which lack support. These challenges are reasonable foci for future empirical/theoretical work on the model-free/model-based framework. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Template-Based Modeling of Protein-RNA Interactions.

    PubMed

    Zheng, Jinfang; Kundrotas, Petras J; Vakser, Ilya A; Liu, Shiyong

    2016-09-01

    Protein-RNA complexes formed by specific recognition between RNA and RNA-binding proteins play an important role in biological processes. More than a thousand such proteins in humans have been curated, and many novel RNA-binding proteins remain to be discovered. Due to limitations of experimental approaches, computational techniques are needed for characterization of protein-RNA interactions. Although much progress has been made, adequate methodologies reliably providing atomic resolution structural details are still lacking. Although protein-RNA free docking approaches have proved useful, in general, template-based approaches provide higher-quality predictions. Templates are key to building a high-quality model. Sequence/structure relationships were studied based on a representative set of binary protein-RNA complexes from the PDB. Several approaches were tested for pairwise target/template alignment. The analysis revealed a transition point between random and correct binding modes. The results showed that structural alignment is better than sequence alignment in identifying good templates, suitable for generating protein-RNA complexes close to the native structure, and outperforms free docking, successfully predicting complexes where free docking fails, including cases of significant conformational change upon binding. A template-based protein-RNA interaction modeling protocol, PRIME, was developed and benchmarked on a representative set of complexes.

  11. How Accumulated Real Life Stress Experience and Cognitive Speed Interact on Decision-Making Processes

    PubMed Central

    Friedel, Eva; Sebold, Miriam; Kuitunen-Paul, Sören; Nebe, Stephan; Veer, Ilya M.; Zimmermann, Ulrich S.; Schlagenhauf, Florian; Smolka, Michael N.; Rapp, Michael; Walter, Henrik; Heinz, Andreas

    2017-01-01

    Rationale: Advances in neurocomputational modeling suggest that valuation systems for goal-directed (deliberative) on one side, and habitual (automatic) decision-making on the other side may rely on distinct computational strategies for reinforcement learning, namely model-free vs. model-based learning. As a key theoretical difference, the model-based system strongly demands cognitive functions to plan actions prospectively based on an internal cognitive model of the environment, whereas valuation in the model-free system relies on rather simple learning rules from operant conditioning to retrospectively associate actions with their outcomes and is thus cognitively less demanding. Acute stress reactivity is known to impair model-based but not model-free choice behavior, with higher working memory capacity protecting the model-based system from acute stress. However, it is not clear which impact accumulated real life stress has on model-free and model-based decision systems and how this influence interacts with cognitive abilities. Methods: We used a sequential decision-making task distinguishing relative contributions of both learning strategies to choice behavior, the Social Readjustment Rating Scale questionnaire to assess accumulated real life stress, and the Digit Symbol Substitution Test to test cognitive speed in 95 healthy subjects. Results: Individuals reporting high stress exposure who had low cognitive speed showed reduced model-based but increased model-free behavioral control. In contrast, subjects exposed to accumulated real life stress with high cognitive speed displayed increased model-based performance but reduced model-free control. Conclusion: These findings suggest that accumulated real life stress exposure can enhance reliance on cognitive speed for model-based computations, which may ultimately protect the model-based system from the detrimental influences of accumulated real life stress. The combination of accumulated real life stress exposure and slower information processing capacities, however, might favor model-free strategies. Thus, the valence and preference of either system strongly depends on stressful experiences and individual cognitive capacities. PMID:28642696

  12. How Accumulated Real Life Stress Experience and Cognitive Speed Interact on Decision-Making Processes.

    PubMed

    Friedel, Eva; Sebold, Miriam; Kuitunen-Paul, Sören; Nebe, Stephan; Veer, Ilya M; Zimmermann, Ulrich S; Schlagenhauf, Florian; Smolka, Michael N; Rapp, Michael; Walter, Henrik; Heinz, Andreas

    2017-01-01

    Rationale: Advances in neurocomputational modeling suggest that valuation systems for goal-directed (deliberative) on one side, and habitual (automatic) decision-making on the other side may rely on distinct computational strategies for reinforcement learning, namely model-free vs. model-based learning. As a key theoretical difference, the model-based system strongly demands cognitive functions to plan actions prospectively based on an internal cognitive model of the environment, whereas valuation in the model-free system relies on rather simple learning rules from operant conditioning to retrospectively associate actions with their outcomes and is thus cognitively less demanding. Acute stress reactivity is known to impair model-based but not model-free choice behavior, with higher working memory capacity protecting the model-based system from acute stress. However, it is not clear which impact accumulated real life stress has on model-free and model-based decision systems and how this influence interacts with cognitive abilities. Methods: We used a sequential decision-making task distinguishing relative contributions of both learning strategies to choice behavior, the Social Readjustment Rating Scale questionnaire to assess accumulated real life stress, and the Digit Symbol Substitution Test to test cognitive speed in 95 healthy subjects. Results: Individuals reporting high stress exposure who had low cognitive speed showed reduced model-based but increased model-free behavioral control. In contrast, subjects exposed to accumulated real life stress with high cognitive speed displayed increased model-based performance but reduced model-free control. Conclusion: These findings suggest that accumulated real life stress exposure can enhance reliance on cognitive speed for model-based computations, which may ultimately protect the model-based system from the detrimental influences of accumulated real life stress. The combination of accumulated real life stress exposure and slower information processing capacities, however, might favor model-free strategies. Thus, the valence and preference of either system strongly depends on stressful experiences and individual cognitive capacities.

  13. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
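
    The sketch below is not BioNetGen or NFsim code; it is a minimal direct-method Gillespie simulation of a toy two-reaction network, included only to illustrate the network-based stochastic simulation that the abstract contrasts with network-free, particle-based methods.

      import numpy as np

      rng = np.random.default_rng(6)

      # Toy reaction network: A -> B (rate k1 * A), B -> A (rate k2 * B).
      k1, k2 = 0.1, 0.05
      state = np.array([100, 0])          # copy numbers of A and B
      stoich = np.array([[-1, 1],         # effect of reaction 1 on (A, B)
                         [1, -1]])        # effect of reaction 2 on (A, B)

      t, t_end, times, traj = 0.0, 100.0, [0.0], [state.copy()]
      while t < t_end:
          props = np.array([k1 * state[0], k2 * state[1]])   # reaction propensities
          total = props.sum()
          if total == 0:
              break
          t += rng.exponential(1.0 / total)                  # time to next reaction event
          r = rng.choice(2, p=props / total)                 # which reaction fires
          state = state + stoich[r]
          times.append(t)
          traj.append(state.copy())

      print("final state (A, B):", traj[-1], "after", len(times) - 1, "events")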

  14. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world.

    PubMed

    McDannald, Michael A; Takahashi, Yuji K; Lopatina, Nina; Pietras, Brad W; Jones, Josh L; Schoenbaum, Geoffrey

    2012-04-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  15. Characterizing structural transitions using localized free energy landscape analysis.

    PubMed

    Banavali, Nilesh K; Mackerell, Alexander D

    2009-01-01

    Structural changes in molecules are frequently observed during biological processes like replication, transcription and translation. These structural changes can usually be traced to specific distortions in the backbones of the macromolecules involved. Quantitative energetic characterization of such distortions can greatly advance the atomic-level understanding of the dynamic character of these biological processes. Molecular dynamics simulations combined with a variation of the Weighted Histogram Analysis Method for potential of mean force determination are applied to characterize localized structural changes for the test case of cytosine (underlined) base flipping in a GTCAGCGCATGG DNA duplex. Free energy landscapes for backbone torsion and sugar pucker degrees of freedom in the DNA are used to understand their behavior in response to the base flipping perturbation. By simplifying the base flipping structural change into a two-state model, a free energy difference of up to 14 kcal/mol can be attributed to the flipped state relative to the stacked Watson-Crick base paired state. This two-state classification allows precise evaluation of the effect of base flipping on local backbone degrees of freedom. The calculated free energy landscapes of individual backbone and sugar degrees of freedom expectedly show the greatest change in the vicinity of the flipping base itself, but specific delocalized effects can be discerned up to four nucleotide positions away in both 5' and 3' directions. Free energy landscape analysis thus provides a quantitative method to pinpoint the determinants of structural change on the atomic scale and also delineate the extent of propagation of the perturbation along the molecule. In addition to nucleic acids, this methodology is anticipated to be useful for studying conformational changes in all macromolecules, including carbohydrates, lipids, and proteins.
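
    A tiny worked example of the two-state bookkeeping described above, with illustrative numbers rather than the paper's results: the Boltzmann relation dG = -kB*T*ln(p_flipped/p_stacked) converts between a free-energy difference and the corresponding equilibrium populations.

      import math

      # Boltzmann constant in kcal/(mol*K) and temperature in K.
      kB, T = 0.0019872041, 300.0

      # A free-energy penalty of ~14 kcal/mol for the flipped state (as in the abstract)
      # corresponds to a vanishingly small equilibrium flipped-state population.
      dG = 14.0                                     # kcal/mol, flipped relative to stacked
      p_ratio = math.exp(-dG / (kB * T))            # p_flipped / p_stacked
      print(f"p_flipped / p_stacked ~ {p_ratio:.2e}")

      # Conversely, observed populations give back the free-energy difference.
      dG_back = -kB * T * math.log(p_ratio)
      print(f"recovered dG = {dG_back:.1f} kcal/mol")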

  16. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.

    PubMed

    Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf

    2016-02-01

    Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, the model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction errors) with model-based reward-related input. Using the concept of a reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turned to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based expectations regarding reward into the value signal is further supported by the efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS). (c) 2016 APA, all rights reserved.

  17. Information theory applications for biological sequence analysis.

    PubMed

    Vinga, Susana

    2014-05-01

    Information theory (IT) addresses the analysis of communication systems and has been widely applied in molecular biology. In particular, alignment-free sequence analysis and comparison greatly benefited from concepts derived from IT, such as entropy and mutual information. This review covers several aspects of IT applications, ranging from genome global analysis and comparison, including block-entropy estimation and resolution-free metrics based on iterative maps, to local analysis, comprising the classification of motifs, prediction of transcription factor binding sites and sequence characterization based on linguistic complexity and entropic profiles. IT has also been applied to high-level correlations that combine DNA, RNA or protein features with sequence-independent properties, such as gene mapping and phenotype analysis, and has also provided models based on communication systems theory to describe information transmission channels at the cell level and also during evolutionary processes. While not exhaustive, this review attempts to categorize existing methods and to indicate their relation with broader transversal topics such as genomic signatures, data compression and complexity, time series analysis and phylogenetic classification, providing a resource for future developments in this promising area.
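
    A small, self-contained sketch in the spirit of the alignment-free measures reviewed above: Shannon block entropy of k-mers is computed for two short, invented sequences; it illustrates the concept only and is not any specific method from the review.

      import math
      from collections import Counter

      def block_entropy(seq, k):
          # Shannon entropy (bits) of the k-mer distribution of a sequence.
          kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
          counts = Counter(kmers)
          n = len(kmers)
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      seq1 = "ATATATATATATATATATAT"            # low-complexity sequence
      seq2 = "ATGCGTACCTGAGTCAATCG"            # more heterogeneous sequence
      for k in (1, 2, 3):
          print(f"k={k}: H(seq1)={block_entropy(seq1, k):.2f} bits, "
                f"H(seq2)={block_entropy(seq2, k):.2f} bits")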

  18. Training-free compressed sensing for wireless neural recording using analysis model and group weighted ℓ1-minimization

    NASA Astrophysics Data System (ADS)

    Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan

    2017-06-01

    Objective. Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis ℓ1-minimization (GWALM), is proposed for wireless neural recording. Approach. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis ℓ1-minimization. Main results. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Significance. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large-scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long-term wireless neural recording.

  19. Training-free compressed sensing for wireless neural recording using analysis model and group weighted ℓ1-minimization.

    PubMed

    Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan

    2017-06-01

    Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis ℓ1-minimization (GWALM), is proposed for wireless neural recording. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis ℓ1-minimization. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large-scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long-term wireless neural recording.

  20. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    PubMed

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
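
    The random-effects machinery that this abstract refers to is straightforward to write out. The sketch below, with made-up effect sizes and variances, computes the DerSimonian-Laird estimate of between-study variance and a Knapp-Hartung adjusted confidence interval, the combination the tool is described as using.

      import numpy as np
      from scipy import stats

      y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])    # per-study effect sizes (made up)
      v = np.array([0.02, 0.03, 0.015, 0.025, 0.04])  # per-study sampling variances (made up)
      k = len(y)

      w = 1.0 / v
      mu_fe = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - mu_fe) ** 2)
      C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (Q - (k - 1)) / C)               # DerSimonian-Laird tau^2

      w_star = 1.0 / (v + tau2)
      mu_re = np.sum(w_star * y) / np.sum(w_star)      # random-effects pooled estimate

      # Knapp-Hartung: data-scaled standard error with a t(k-1) reference distribution.
      q = np.sum(w_star * (y - mu_re) ** 2) / (k - 1)
      se_kh = np.sqrt(q / np.sum(w_star))
      t_crit = stats.t.ppf(0.975, df=k - 1)
      print(f"pooled effect = {mu_re:.3f}, 95% CI = "
            f"[{mu_re - t_crit * se_kh:.3f}, {mu_re + t_crit * se_kh:.3f}]")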

  1. Role of delay and screening in controlling AIDS

    NASA Astrophysics Data System (ADS)

    Chauhan, Sudipa; Bhatia, Sumit Kaur; Gupta, Surbhi

    2016-06-01

    We propose a non-linear HIV/AIDS model to analyse the spread and control of HIV/AIDS. The population is divided into three classes: susceptible, infective, and AIDS patients. The model is developed under the assumptions of vertical transmission and a time delay in the infective class; the time delay represents the sexual maturity period of infected newborns. We study the dynamics of the model and obtain the reproduction number. To control the epidemic, we then study the model with an aware infective class added, i.e., people who are made aware of their medical status by way of screening. To make the model more realistic, we consider the situation where the aware infective class also interacts with other people. The model is analysed qualitatively by the stability theory of ODEs. Stability of both the disease-free and the endemic equilibrium is studied in terms of the reproduction number. It is proved that if (R0)1, R1 ≤ 1 the disease-free equilibrium point is locally asymptotically stable, and if (R0)1, R1 > 1 the disease-free equilibrium is unstable. The stability analysis of the endemic equilibrium point shows that for (R0)1 > 1 the endemic equilibrium point is stable, and its global stability is also established. Finally, it is shown numerically that the delay in sexual maturity of infected individuals results in fewer AIDS patients.
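
    The threshold behaviour described here can be illustrated with a deliberately simplified, delay-free susceptible-infective-AIDS compartment sketch. It omits the vertical transmission, time delay, and aware-infective class of the actual paper, and all parameter values are arbitrary placeholders.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy S-I-A model: for this simplification, R0 = beta / (mu + delta).
      beta, mu, delta, alpha_a = 0.4, 0.02, 0.1, 0.05   # contact, natural death, progression, AIDS death

      def rhs(t, y):
          S, I, A = y
          N = S + I
          dS = mu * (S + I) - beta * S * I / N - mu * S   # births balance natural deaths
          dI = beta * S * I / N - (mu + delta) * I
          dA = delta * I - (mu + alpha_a) * A
          return [dS, dI, dA]

      R0 = beta / (mu + delta)
      sol = solve_ivp(rhs, (0, 400), [0.99, 0.01, 0.0], dense_output=True)
      print(f"R0 = {R0:.2f}; infective fraction at t = 400: {sol.y[1, -1]:.4f}")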

  2. The vibration characteristics of a coupled helicopter rotor-fuselage by a finite element analysis

    NASA Technical Reports Server (NTRS)

    Rutkowski, M. J.

    1983-01-01

    The dynamic coupling between the rotor system and the fuselage of a simplified helicopter model in hover was analytically investigated. Mass, aerodynamic damping, and elastic and centrifugal stiffness matrices are presented for the analytical model; the model is based on a beam finite element, with polynomial mass and stiffness distributions for both the rotor and fuselage representations. For this analytical model, only symmetric fuselage and collective blade degrees of freedom are treated. Real and complex eigen-analyses are carried out to obtain coupled rotor-fuselage natural modes and frequencies as a function of rotor speed. Vibration response results are obtained for the coupled system subjected to a radially uniform, harmonic blade loading. The coupled response results are compared with response results from an uncoupled analysis in which hub loads for an isolated rotor system subjected to the same sinusoidal blade loading as the coupled system are applied to a free-free fuselage.

  3. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment, and this continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few of these have been applied in LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Background correction simulation experiments indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) after background correction, compared with polynomial fitting, Lorentz fitting, and the model-free method. All of these background correction methods yield larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
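
    A minimal version of the baseline-estimation idea, run on a synthetic spectrum rather than real LIBS data, might look like the following sketch: anchor points are taken as windowed minima, a cubic spline through them serves as the continuous background, and a rough signal-to-background ratio is compared before and after subtraction. The anchor-selection rule and all numbers are illustrative assumptions, not the authors' procedure.

      import numpy as np
      from scipy.interpolate import CubicSpline

      x = np.linspace(400, 700, 1500)                                  # wavelength, nm
      background = 200 * np.exp(-((x - 550) / 180) ** 2)               # slowly varying baseline
      peaks = 80 * np.exp(-((x - 510.6) / 0.4) ** 2) + 120 * np.exp(-((x - 521.8) / 0.4) ** 2)
      spectrum = background + peaks + np.random.default_rng(1).normal(0, 2, x.size)

      # Anchor points: the minimum of the spectrum inside consecutive windows.
      win = 100
      idx = np.array([i + np.argmin(spectrum[i:i + win])
                      for i in range(0, x.size - win + 1, win)])
      baseline = CubicSpline(x[idx], spectrum[idx])(x)                 # spline background estimate

      corrected = spectrum - baseline
      line = np.argmin(np.abs(x - 521.8))                              # index of an emission line
      bg_after = np.median(np.abs(corrected[(x > 600) & (x < 650)]))   # residual background level
      print(f"SBR before ~ {spectrum[line] / baseline[line]:.1f}, "
            f"after ~ {corrected[line] / bg_after:.1f}")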

  4. Evaluation of Drogue Parachute Damping Effects Utilizing the Apollo Legacy Parachute Model

    NASA Technical Reports Server (NTRS)

    Currin, Kelly M.; Gamble, Joe D.; Matz, Daniel A.; Bretz, David R.

    2011-01-01

    Drogue parachute damping is required to dampen the Orion Multi Purpose Crew Vehicle (MPCV) crew module (CM) oscillations prior to deployment of the main parachutes. During the Apollo program, drogue parachute damping was modeled on the premise that the drogue parachute force vector aligns with the resultant velocity of the parachute attach point on the CM. Equivalent Cm(sub q) and Cm(sub alpha) equations for drogue parachute damping resulting from the Apollo legacy parachute damping model premise have recently been developed. The MPCV computer simulations ANTARES and Osiris have implemented high fidelity two-body parachute damping models. However, high-fidelity model-based damping motion predictions do not match the damping observed during wind tunnel and full-scale free-flight oscillatory motion. This paper will present the methodology for comparing and contrasting the Apollo legacy parachute damping model with full-scale free-flight oscillatory motion. The analysis shows an agreement between the Apollo legacy parachute damping model and full-scale free-flight oscillatory motion.

  5. Quantitative analysis of the effect of supersaturation on in vivo drug absorption.

    PubMed

    Takano, Ryusuke; Takata, Noriyuki; Saito, Ryoichi; Furumoto, Kentaro; Higo, Shoichi; Hayashi, Yoshiki; Machida, Minoru; Aso, Yoshinori; Yamashita, Shinji

    2010-10-04

    The purpose of this study is to clarify the effects of intestinal drug supersaturation on solubility-limited nonlinear absorption. Oral absorption of a novel farnesyltransferase inhibitor (FTI-2600) from its crystalline free base and its HCl salt was determined in dogs. To clarify the contribution of supersaturation to improving drug absorption, the in vivo intraluminal concentration of FTI-2600 after oral administration was estimated from the pharmacokinetics data using a physiologically based model. Dissolution and precipitation characteristics of FTI-2600 in biorelevant media were investigated in vitro using a miniscale dissolution test and powder X-ray diffraction analysis. In the in vitro study, the HCl salt immediately dissolved but precipitated rapidly. The metastable amorphous free base precipitant, which did not convert into the stable crystalline free base in the simulated intestinal fluids for several hours, generated a 5-fold increase in dissolved concentration compared to the equilibrium solubility of the crystalline free base. By computer simulation, the intraluminal drug concentration after administration of the free base was estimated to reach the saturated solubility, indicating solubility-limited absorption. On the other hand, administration of the HCl salt resulted in an increased intraluminal concentration, and the plasma concentration was 400% greater than that after administration of the free base. This in vivo/in vitro correlation of the increased drug concentrations in the small intestine provides clear evidence that not only the increase in the dissolution rate, but also the supersaturation phenomenon, improved the solubility-limited absorption of FTI-2600. These results indicate that formulation technologies that can induce supersaturation may be of great assistance to the successful development of poorly water-soluble drugs.

  6. Gravity Modeling for Variable Fidelity Environments

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2006-01-01

    Aerospace simulations can model worlds, such as the Earth, with differing levels of fidelity. The simulation may represent the world as a plane, a sphere, an ellipsoid, or a high-order closed surface. The world may or may not rotate. The user may select lower fidelity models based on computational limits, a need for simplified analysis, or comparison to other data. However, the user will also wish to retain a close semblance of behavior to the real world. The effects of gravity on objects are an important component of modeling real-world behavior. Engineers generally equate the term gravity with the observed free-fall acceleration. However, free-fall acceleration is not the same for all observers. To observers on the surface of a rotating world, free-fall acceleration is the sum of gravitational attraction and the centrifugal acceleration due to the world's rotation. On the other hand, free-fall acceleration equals gravitational attraction to an observer in inertial space. Surface-observed simulations (e.g. aircraft), which use non-rotating world models, may choose to model observed free-fall acceleration as the gravity term; such a model actually combines gravitational attraction with centrifugal acceleration due to the Earth's rotation. However, this modeling choice invites confusion as one evolves the simulation to higher fidelity world models or adds inertial observers. Care must be taken to model gravity in concert with the world model to avoid denigrating the fidelity of modeling observed free fall. The paper will go into greater depth on gravity modeling and the physical disparities and synergies that arise when coupling specific gravity models with world models.

  7. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning

    PubMed Central

    Konovalov, Arkady; Krajbich, Ian

    2016-01-01

    Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383

  8. Development of a Novel Rabies Simulation Model for Application in a Non-endemic Environment

    PubMed Central

    Dürr, Salome; Ward, Michael P.

    2015-01-01

    Domestic dog rabies is an endemic disease in large parts of the developing world and also epidemic in previously free regions. For example, it continues to spread in eastern Indonesia and currently threatens adjacent rabies-free regions with high densities of free-roaming dogs, including remote northern Australia. Mathematical and simulation disease models are useful tools to provide insights into the most effective control strategies and to inform policy decisions. Existing rabies models typically focus on long-term control programs in endemic countries. However, simulation models describing the dog rabies incursion scenario in regions where rabies is still exotic are lacking. We here describe such a stochastic, spatially explicit rabies simulation model that is based on individual dog information collected in two remote regions in northern Australia. Illustrative simulations produced plausible results with epidemic characteristics expected for rabies outbreaks in disease-free regions (mean R0 1.7, epidemic peak 97 days post-incursion, vaccination as the most effective response strategy). Systematic sensitivity analysis identified that model outcomes were most sensitive to seven of the 30 model parameters tested. This model is suitable for exploring rabies spread and control before an incursion in populations of largely free-roaming dogs that live close together with their owners. It can be used for ad-hoc contingency or response planning prior to and shortly after incursion of dog rabies in previously free regions. One challenge that remains is model parameterisation, particularly how dogs’ roaming, contact, and biting behaviours change following a rabies incursion in a previously rabies-free population. PMID:26114762

  9. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  10. A Petri Net Approach Based Elementary Siphons Supervisor for Flexible Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Abdul-Hussin, Mowafak Hassan

    2015-05-01

    This paper presents an approach to constructing a class of S3PR nets for the modeling, simulation, and control of processes occurring in flexible manufacturing systems (FMS), based on the elementary siphons of a Petri net. Siphons are central to the analysis and control of deadlocks in FMS, and controlling them is a significant objective. Petri net models support efficient structural analysis and utilization of FMSs, where different control policies can be implemented to achieve deadlock prevention. We present an effective deadlock-free policy for a special class of Petri nets called S3PR. Petri net structural analysis and reachability graph analysis, together with simulation, are used for the analysis and control of the nets. Petri nets have been successfully applied as one of the most powerful tools for modelling FMS; using structural analysis, we show that liveness of such systems can be attributed to the absence of under-marked siphons.

  11. Surrogate marker analysis in cancer clinical trials through time-to-event mediation techniques.

    PubMed

    Vandenberghe, Sjouke; Duchateau, Luc; Slaets, Leen; Bogaerts, Jan; Vansteelandt, Stijn

    2017-01-01

    The meta-analytic approach is the gold standard for validation of surrogate markers, but has the drawback of requiring data from several trials. We refine modern mediation analysis techniques for time-to-event endpoints and apply them to investigate whether pathological complete response can be used as a surrogate marker for disease-free survival in the EORTC 10994/BIG 1-00 randomised phase 3 trial in which locally advanced breast cancer patients were randomised to either taxane- or anthracycline-based neoadjuvant chemotherapy. In the mediation analysis, the treatment effect is decomposed into an indirect effect via pathological complete response and the remaining direct effect. It shows that only 4.2% of the treatment effect on disease-free survival after five years is mediated by the treatment effect on pathological complete response. There is thus no evidence from our analysis that pathological complete response is a valuable surrogate marker to evaluate the effect of taxane- versus anthracycline-based chemotherapies on progression-free survival of locally advanced breast cancer patients. The proposed analysis strategy is broadly applicable to mediation analyses of time-to-event endpoints, is easy to apply and outperforms existing strategies in terms of precision as well as robustness against model misspecification.

  12. Variable selection for distribution-free models for longitudinal zero-inflated count responses.

    PubMed

    Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M

    2016-07-20

    Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
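
    For reference, the SCAD penalty itself and its univariate thresholding rule, the building block that SCAD-based selection methods apply coordinate-wise, can be written compactly. The sketch below uses the conventional a = 3.7 and is illustrative only, not the authors' estimating-equation implementation.

      import numpy as np

      def scad_penalty(beta, lam, a=3.7):
          """SCAD penalty value for coefficient(s) beta with tuning constants lam, a."""
          b = np.abs(beta)
          return np.where(
              b <= lam,
              lam * b,
              np.where(
                  b <= a * lam,
                  (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
                  lam ** 2 * (a + 1) / 2,
              ),
          )

      def scad_threshold(z, lam, a=3.7):
          """One-dimensional SCAD thresholding of an unpenalized estimate z."""
          az = abs(z)
          if az <= 2 * lam:
              return np.sign(z) * max(az - lam, 0.0)     # soft-thresholding region
          if az <= a * lam:
              return ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
          return z                                        # no shrinkage for large |z|

      print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
      print(scad_threshold(1.2, lam=0.5))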

  13. Protein pharmacophore selection using hydration-site analysis

    PubMed Central

    Hu, Bingjie; Lill, Markus A.

    2012-01-01

    Virtual screening using pharmacophore models is an efficient method to identify potential lead compounds for target proteins. Pharmacophore models based on protein structures are advantageous because a priori knowledge of active ligands is not required and the models are not biased by the chemical space of previously identified actives. However, in order to capture most potential interactions between all potentially binding ligands and the protein, the size of the pharmacophore model, i.e. number of pharmacophore elements, is typically quite large and therefore reduces the efficiency of pharmacophore based screening. We have developed a new method to select important pharmacophore elements using hydration-site information. The basic premise is that ligand functional groups that replace water molecules in the apo protein contribute strongly to the overall binding affinity of the ligand, due to the additional free energy gained from releasing the water molecule into the bulk solvent. We computed the free energy of water released from the binding site for each hydration site using thermodynamic analysis of molecular dynamics (MD) simulations. Pharmacophores which are co-localized with hydration sites with estimated favorable contributions to the free energy of binding are selected to generate a reduced pharmacophore model. We constructed reduced pharmacophore models for three protein systems and demonstrated good enrichment quality combined with high efficiency. The reduction in pharmacophore model size reduces the required screening time by a factor of 200–500 compared to using all protein pharmacophore elements. We also describe a training process using a small set of known actives to reliably select the optimal set of criteria for pharmacophore selection for each protein system. PMID:22397751

  14. Estimation of the Viscosities of Liquid Sn-Based Binary Lead-Free Solder Alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Li, Jinquan

    2018-01-01

    The viscosity of a binary Sn-based lead-free solder alloy was calculated by combining a predictive model with the Miedema model. A viscosity factor was proposed, and the relationship between viscosity and surface tension was analyzed as well. The investigation shows that the viscosity of Sn-based lead-free solders predicted with this model is in excellent agreement with reported values. The viscosity factor is determined by three physical parameters: atomic volume, electronic density, and electro-negativity. In addition, the apparent correlation between the surface tension and viscosity of the binary Sn-based Pb-free solder was obtained from the predictive model.

  15. A projection-based model reduction strategy for the wave and vibration analysis of rotating periodic structures

    NASA Astrophysics Data System (ADS)

    Beli, D.; Mencik, J.-M.; Silva, P. B.; Arruda, J. R. F.

    2018-05-01

    The wave finite element method has proved to be an efficient and accurate numerical tool to perform the free and forced vibration analysis of linear reciprocal periodic structures, i.e. those conforming to symmetrical wave fields. In this paper, its use is extended to the analysis of rotating periodic structures, which, due to the gyroscopic effect, exhibit asymmetric wave propagation. A projection-based strategy which uses reduced symplectic wave basis is employed, which provides a well-conditioned eigenproblem for computing waves in rotating periodic structures. The proposed formulation is applied to the free and forced response analysis of homogeneous, multi-layered and phononic ring structures. In all test cases, the following features are highlighted: well-conditioned dispersion diagrams, good accuracy, and low computational time. The proposed strategy is particularly convenient in the simulation of rotating structures when parametric analysis for several rotational speeds is usually required, e.g. for calculating Campbell diagrams. This provides an efficient and flexible framework for the analysis of rotordynamic problems.

  16. Cost Effectiveness of On-site versus Off-site Depression Collaborative Care in Rural Federally Qualified Health Centers

    PubMed Central

    Pyne, Jeffrey M.; Fortney, John C.; Mouden, Sip; Lu, Liya; Hudson, Teresa J; Mittal, Dinesh

    2018-01-01

    Objective Collaborative care for depression is effective and cost-effective in primary care settings. However, there is minimal evidence to inform the choice of on-site versus off-site models. This study examined the cost-effectiveness of on-site practice-based collaborative care (PBCC) versus off-site telemedicine-based collaborative care (TBCC) for depression in Federally Qualified Health Centers (FQHCs). Methods Multi-site randomized pragmatic comparative cost-effectiveness trial. 19,285 patients were screened for depression, 14.8% (n=2,863) screened positive (PHQ9 ≥10) and 364 were enrolled. Telephone interview data were collected at baseline, 6-, 12-, and 18-months. Base case analysis used Arkansas FQHC healthcare costs and secondary analysis used national cost estimates. Effectiveness measures were depression-free days and quality-adjusted life years (QALYs) derived from depression-free days, Medical Outcomes Study SF-12, and Quality of Well Being scale (QWB). Nonparametric bootstrap with replacement methods were used to generate an empirical joint distribution of incremental costs and QALYs and acceptability curves. Results Mean base case FQHC incremental cost-effectiveness ratio (ICER) using depression-free days was $10.78/depression-free day. Mean base case ICERs using QALYs ranged from $14,754/QALY (depression-free day QALY) to $37,261/QALY (QWB QALY). Mean secondary national ICER using depression-free days was $8.43/depression-free day and using QALYs ranged from $11,532/QALY (depression-free day QALY) to $29,234/QALY (QWB QALY). Conclusions These results support the cost-effectiveness of the TBCC intervention in medically underserved primary care settings. Results can inform the decision about whether to insource (make) or outsource (buy) depression care management in the FQHC setting within the current context of Patient-Centered Medical Home, value-based purchasing, and potential bundled payments for depression care. The www.clinicaltrials.gov # for this study is NCT00439452. PMID:25686811

  17. Linkage and related analyses of Barrett's esophagus and its associated adenocarcinomas.

    PubMed

    Sun, Xiangqing; Elston, Robert; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia I; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford; Barnholtz-Sloan, Jill S; Chandar, Apoorva; Brock, Wendy; Chak, Amitabh

    2016-07-01

    Familial aggregation and segregation analysis studies have provided evidence of a genetic basis for esophageal adenocarcinoma (EAC) and its premalignant precursor, Barrett's esophagus (BE). We aim to demonstrate the utility of linkage analysis to identify the genomic regions that might contain the genetic variants that predispose individuals to this complex trait (BE and EAC). We genotyped 144 individuals in 42 multiplex pedigrees chosen from 1000 singly ascertained BE/EAC pedigrees, and performed both model-based and model-free linkage analyses, using S.A.G.E. and other software. Segregation models were fitted, from the data on both the 42 pedigrees and the 1000 pedigrees, to determine parameters for performing model-based linkage analysis. Model-based and model-free linkage analyses were conducted in two sets of pedigrees: the 42 pedigrees and a subset of 18 pedigrees with female affected members that are expected to be more genetically homogeneous. Genome-wide associations were also tested in these families. Linkage analyses on the 42 pedigrees identified several regions consistently suggestive of linkage by different linkage analysis methods on chromosomes 2q31, 12q23, and 4p14. A linkage on 15q26 is the only consistent linkage region identified in the 18 female-affected pedigrees, in which the linkage signal is higher than in the 42 pedigrees. Other tentative linkage signals are also reported. Our linkage study of BE/EAC pedigrees identified linkage regions on chromosomes 2, 4, 12, and 15, with some reported associations located within our linkage peaks. Our linkage results can help prioritize association tests to delineate the genetic determinants underlying susceptibility to BE and EAC.

  18. Quantum-chemical study on the bioactive conformation of epothilones.

    PubMed

    Jiménez, Verónica A

    2010-12-27

    Herein, I report a DFT study on the bioactive conformation of epothilone A based on the analysis of 92 stable conformations of free and bound epothilone to a reduced model of tubulin receptor. The equilibrium structures and relative energies were studied using B3LYP and X3LYP functionals and the 6-31G(d) standard basis set, which was considered appropriate for the size of the systems under study. Calculated relative energies of free and bound epothilones led me to propose a new model for the bioactive conformation of epothilone A, which accounts for several structure-activity data.

  19. Predicting RNA folding thermodynamics with a reduced chain representation model

    PubMed Central

    CAO, SONG; CHEN, SHI-JIE

    2005-01-01

    Based on the virtual bond representation for the nucleotide backbone, we develop a reduced conformational model for RNA. We use the experimentally measured atomic coordinates to model the helices and use the self-avoiding walks in a diamond lattice to model the loop conformations. The atomic coordinates of the helices and the lattice representation for the loops are matched at the loop–helix junction, where steric viability is accounted for. Unlike the previous simplified lattice-based models, the present virtual bond model can account for the atomic details of realistic three-dimensional RNA structures. Based on the model, we develop a statistical mechanical theory for RNA folding energy landscapes and folding thermodynamics. Tests against experiments show that the theory can give much more improved predictions for the native structures, the thermal denaturation curves, and the equilibrium folding/unfolding pathways than the previous models. The application of the model to the P5abc region of Tetrahymena group I ribozyme reveals the misfolded intermediates as well as the native-like intermediates in the equilibrium folding process. Moreover, based on the free energy landscape analysis for each and every loop mutation, the model predicts five lethal mutations that can completely alter the free energy landscape and the folding stability of the molecule. PMID:16251382

  20. Structural analysis of gluten-free doughs by fractional rheological model

    NASA Astrophysics Data System (ADS)

    Orczykowska, Magdalena; Dziubiński, Marek; Owczarz, Piotr

    2015-02-01

    This study examines the effects of various components of the tested gluten-free doughs, such as corn starch, amaranth flour, pea protein isolate, and cellulose in the form of plantain fibers, on the rheological properties of such doughs. The rheological properties of gluten-free doughs were assessed by using the rheological fractional standard linear solid model (FSLSM). Parameter analysis of the Maxwell-Wiechert fractional derivative rheological model allows us to state that gluten-free doughs exhibit behavior typical of viscoelastic quasi-solid bodies. We obtained the contribution of each component used in the preparation of the gluten-free doughs to either a hard-gel or soft-gel structure. A detailed analysis of the mechanical structure of the gluten-free doughs was performed by applying the FSLSM, which explains quite precisely the effects of individual ingredients of the dough on its rheological properties.

  1. PSEA-Quant: a protein set enrichment analysis on label-free and label-based protein quantification data.

    PubMed

    Lavallée-Adam, Mathieu; Rauniyar, Navin; McClatchy, Daniel B; Yates, John R

    2014-12-05

    The majority of large-scale proteomics quantification methods yield long lists of quantified proteins that are often difficult to interpret and poorly reproduced. Computational approaches are required to analyze such intricate quantitative proteomics data sets. We propose a statistical approach to computationally identify protein sets (e.g., Gene Ontology (GO) terms) that are significantly enriched with abundant proteins with reproducible quantification measurements across a set of replicates. To this end, we developed PSEA-Quant, a protein set enrichment analysis algorithm for label-free and label-based protein quantification data sets. It offers an alternative approach to classic GO analyses, models protein annotation biases, and allows the analysis of samples originating from a single condition, unlike analogous approaches such as GSEA and PSEA. We demonstrate that PSEA-Quant produces results complementary to GO analyses. We also show that PSEA-Quant provides valuable information about the biological processes involved in cystic fibrosis using label-free protein quantification of a cell line expressing a CFTR mutant. Finally, PSEA-Quant highlights the differences in the mechanisms taking place in the human, rat, and mouse brain frontal cortices based on tandem mass tag quantification. Our approach, which is available online, will thus improve the analysis of proteomics quantification data sets by providing meaningful biological insights.

  2. PSEA-Quant: A Protein Set Enrichment Analysis on Label-Free and Label-Based Protein Quantification Data

    PubMed Central

    2015-01-01

    The majority of large-scale proteomics quantification methods yield long lists of quantified proteins that are often difficult to interpret and poorly reproduced. Computational approaches are required to analyze such intricate quantitative proteomics data sets. We propose a statistical approach to computationally identify protein sets (e.g., Gene Ontology (GO) terms) that are significantly enriched with abundant proteins with reproducible quantification measurements across a set of replicates. To this end, we developed PSEA-Quant, a protein set enrichment analysis algorithm for label-free and label-based protein quantification data sets. It offers an alternative approach to classic GO analyses, models protein annotation biases, and allows the analysis of samples originating from a single condition, unlike analogous approaches such as GSEA and PSEA. We demonstrate that PSEA-Quant produces results complementary to GO analyses. We also show that PSEA-Quant provides valuable information about the biological processes involved in cystic fibrosis using label-free protein quantification of a cell line expressing a CFTR mutant. Finally, PSEA-Quant highlights the differences in the mechanisms taking place in the human, rat, and mouse brain frontal cortices based on tandem mass tag quantification. Our approach, which is available online, will thus improve the analysis of proteomics quantification data sets by providing meaningful biological insights. PMID:25177766

  3. A fluid-mechanic-based model for the sedimentation of flocculated suspensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chhabra, R.P.; Prasad, D.

    1991-02-01

    Due to the wide occurrence of the suspensions of fine particles in mineral and chemical processing industries, considerable interest has been shown in modeling the hydrodynamic behavior of such systems. A fluid-mechanic-based analysis is presented for the settling behavior of flocculated suspensions. Flocs have been modeled as composite spheres consisting of a solid core embedded in a shell of homogeneous and isotropic porous medium. Theoretical estimates of the rates of sedimentation for flocculated suspensions are obtained by solving the equations of continuity and of motion. The interparticle interactions are incorporated into the analysis by employing the Happel free surface cell model. The results reported embrace wide ranges of conditions of floc size and concentration.

  4. Analysis of Free Modeling Predictions by RBO Aleph in CASP11

    PubMed Central

    Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver

    2015-01-01

    The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue–residue contact prediction by EPC-map and contact–guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: Improvements in components of the method do not necessarily lead to improvements of the entire method. This points to the fact that these components interact in ways that are poorly understood. This problem, if indeed true, represents a significant obstacle to community-wide progress. PMID:26492194

  5. Area variations in multiple morbidity using a life table methodology.

    PubMed

    Congdon, Peter

    Analysis of healthy life expectancy is typically based on a binary distinction between health and ill-health. By contrast, this paper considers spatial modelling of disease free life expectancy taking account of the number of chronic conditions. Thus the analysis is based on population sub-groups with no disease, those with one disease only, and those with two or more diseases (multiple morbidity). Data on health status is accordingly modelled using a multinomial likelihood. The analysis uses data for 258 small areas in north London, and shows wide differences in the disease burden related to multiple morbidity. Strong associations between area socioeconomic deprivation and multiple morbidity are demonstrated, as well as strong spatial clustering.

  6. Role of Desolvation in Thermodynamics and Kinetics of Ligand Binding to a Kinase

    PubMed Central

    2015-01-01

    Computer simulations are used to determine the free energy landscape for the binding of the anticancer drug Dasatinib to its src kinase receptor and show that before settling into a free energy basin the ligand must surmount a free energy barrier. An analysis based on using both the ligand-pocket separation and the pocket-water occupancy as reaction coordinates shows that the free energy barrier is a result of the free energy cost for almost complete desolvation of the binding pocket. The simulations further show that the barrier is not a result of the reorganization free energy of the binding pocket. Although a continuum solvent model gives the location of free energy minima, it is not able to reproduce the intermediate free energy barrier. Finally, it is shown that a kinetic model for the on rate constant in which the ligand diffuses up to a doorway state and then surmounts the desolvation free energy barrier is consistent with published microsecond time-scale simulations of the ligand binding kinetics for this system [Shaw, D. E. et al. J. Am. Chem. Soc. 2011, 133, 9181−9183]. PMID:25516727

  7. A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy

    NASA Astrophysics Data System (ADS)

    Wada, Ken; Hyodo, Toshio

    2013-06-01

    Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model in the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime to pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with a modification involving the intrinsic Ps annihilation rate) applicable to very small pores. The combined model, i.e., the modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics-based extended Tao-Eldrup model, which deals with Ps trapped in, and in thermal equilibrium with, a rectangular pore.
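
    For context, the classic small-pore Tao-Eldrup relation that the authors modify can be evaluated directly. The sketch below uses the commonly quoted electron-layer thickness of 0.166 nm and a spin-averaged annihilation rate of 2 ns^-1; it is the standard spherical-well formula only, not the authors' combined model.

      import numpy as np

      DELTA_R = 0.166          # nm, empirical electron-layer thickness
      LAMBDA_SPIN_AVG = 2.0    # ns^-1, spin-averaged annihilation rate in the layer

      def tao_eldrup_lifetime(R_nm):
          """ortho-Ps pick-off lifetime (ns) for a spherical pore of radius R_nm (nm)."""
          R0 = R_nm + DELTA_R
          rate = LAMBDA_SPIN_AVG * (1 - R_nm / R0
                                    + np.sin(2 * np.pi * R_nm / R0) / (2 * np.pi))
          return 1.0 / rate

      for R in (0.3, 0.5, 1.0):
          print(f"R = {R:.1f} nm -> tau ~ {tao_eldrup_lifetime(R):.2f} ns")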

  8. Event-based analysis of free-living behaviour.

    PubMed

    Granat, Malcolm H

    2012-11-01

    The quantification of free-living physical activities is important in understanding how physical activity and sedentary behaviour impact on health and also on how interventions might modify free-living behaviour to enhance health. Quantification, and the terminology used, has in many ways been determined by the choice of measurement technique. The inter-related issues around measurement devices and terminology used are explored. This paper proposes a terminology and a systematic approach for the analysis of free-living activity information using event-based activity data. The event-based approach uses a flexible hierarchical classification of events and, dependent on the research question, analysis can then be undertaken on a selection of these events. The quantification of free-living behaviour is therefore the result of the analysis on the patterns of these chosen events. The application of this approach is illustrated with results from a range of published studies by our group showing how event-based analysis provides a flexible yet robust method of addressing the research question(s) and provides a deeper insight into free-living behaviour. It is proposed that it is through event-based analysis we can more clearly understand how behaviour is related to health and also how we can produce more relevant outcome measures.

  9. Singularity-free dynamic equations of spacecraft-manipulator systems

    NASA Astrophysics Data System (ADS)

    From, Pål J.; Ytterstad Pettersen, Kristin; Gravdahl, Jan T.

    2011-12-01

    In this paper we derive the singularity-free dynamic equations of spacecraft-manipulator systems using a minimal representation. Spacecraft are normally modeled using Euler angles, which leads to singularities, or Euler parameters, which is not a minimal representation and thus not suited for Lagrange's equations. We circumvent these issues by introducing quasi-coordinates which allows us to derive the dynamics using minimal and globally valid non-Euclidean configuration coordinates. This is a great advantage as the configuration space of a spacecraft is non-Euclidean. We thus obtain a computationally efficient and singularity-free formulation of the dynamic equations with the same complexity as the conventional Lagrangian approach. The closed form formulation makes the proposed approach well suited for system analysis and model-based control. This paper focuses on the dynamic properties of free-floating and free-flying spacecraft-manipulator systems and we show how to calculate the inertia and Coriolis matrices in such a way that this can be implemented for simulation and control purposes without extensive knowledge of the mathematical background. This paper represents the first detailed study of modeling of spacecraft-manipulator systems with a focus on a singularity free formulation using the proposed framework.

  10. Template-Based Modeling of Protein-RNA Interactions

    PubMed Central

    Zheng, Jinfang; Kundrotas, Petras J.; Vakser, Ilya A.

    2016-01-01

    Protein-RNA complexes formed by specific recognition between RNA and RNA-binding proteins play an important role in biological processes. More than a thousand such proteins in humans have been curated, and many novel RNA-binding proteins remain to be discovered. Due to limitations of experimental approaches, computational techniques are needed for characterization of protein-RNA interactions. Although much progress has been made, adequate methodologies reliably providing atomic-resolution structural details are still lacking. Although protein-RNA free docking approaches have proved useful, in general, the template-based approaches provide higher-quality predictions. Templates are key to building a high-quality model. Sequence/structure relationships were studied based on a representative set of binary protein-RNA complexes from the PDB. Several approaches were tested for pairwise target/template alignment. The analysis revealed a transition point between random and correct binding modes. The results showed that structural alignment is better than sequence alignment in identifying good templates, suitable for generating protein-RNA complexes close to the native structure, and outperforms free docking, successfully predicting complexes where free docking fails, including cases of significant conformational change upon binding. A template-based protein-RNA interaction modeling protocol, PRIME, was developed and benchmarked on a representative set of complexes. PMID:27662342

  11. Modeling and analysis on ring-type piezoelectric transformers.

    PubMed

    Ho, Shine-Tzong

    2007-11-01

    This paper presents an electromechanical model for a ring-type piezoelectric transformer (PT). To establish this model, vibration characteristics of the piezoelectric ring with free boundary conditions are analyzed in advance. Based on the vibration analysis of the piezoelectric ring, the operating frequency and vibration mode of the PT are chosen. Then, electromechanical equations of motion for the PT are derived based on Hamilton's principle, which can be used to simulate the coupled electromechanical system for the transformer. Quantities such as the voltage step-up ratio, input impedance, output impedance, input power, output power, and efficiency are calculated from these equations. The optimal load resistance and the maximum efficiency for the PT are also presented. Experiments were conducted to verify the theoretical analysis, and good agreement was obtained.

  12. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
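
    A toy version of the change-point idea, limited to a single change in the mean of Gaussian noise and using a BIC-style penalty as a stand-in for the paper's frequentist information criterion, is sketched below; all data are simulated placeholders.

      import numpy as np

      rng = np.random.default_rng(2)
      x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.2, 1.0, 200)])
      n = x.size

      def neg_log_lik(seg):
          # Gaussian NLL up to additive constants that cancel across segmentations
          return 0.5 * seg.size * np.log(np.var(seg) + 1e-12)

      full = neg_log_lik(x)
      splits = np.arange(20, n - 20)                     # candidate change points
      costs = np.array([neg_log_lik(x[:k]) + neg_log_lik(x[k:]) for k in splits])
      k_best = splits[np.argmin(costs)]
      gain = full - costs.min()                          # likelihood gain from splitting
      penalty = 0.5 * np.log(n)                          # BIC-style cost per extra parameter
      print(f"best split at {k_best}, accept change point: {gain > 2 * penalty}")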

  13. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
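
    The fixed-length spectrum representation is simple to demonstrate. The sketch below, using arbitrary toy sequences rather than the benchmark data, builds k-mer count spectra and compares two sequences with cosine similarity; the choice of k = 4 is an illustrative assumption.

      from collections import Counter
      from math import sqrt

      def spectrum(seq, k=4):
          """k-mer count spectrum of a DNA string."""
          return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

      def cosine(a, b):
          """Cosine similarity between two sparse count spectra."""
          dot = sum(a[kmer] * b[kmer] for kmer in a.keys() & b.keys())
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      s1 = "ACGTACGTTGACCGTACGATTACGGATC"
      s2 = "ACGTACGATGACCGTACGATTACGGATG"
      print(f"cosine similarity of 4-mer spectra: {cosine(spectrum(s1), spectrum(s2)):.3f}")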

  14. Body composition analysis: Cellular level modeling of body component ratios.

    PubMed

    Wang, Z; Heymsfield, S B; Pi-Sunyer, F X; Gallagher, D; Pierson, R N

    2008-01-01

    During the past two decades, a major outgrowth of efforts by our research group at St. Luke's-Roosevelt Hospital is the development of body composition models that include cellular level models, models based on body component ratios, total body potassium models, multi-component models, and resting energy expenditure-body composition models. This review summarizes these models with emphasis on component ratios that we believe are fundamental to understanding human body composition during growth and development and in response to disease and treatments. In-vivo measurements reveal that in healthy adults some component ratios show minimal variability and are relatively 'stable', for example total body water/fat-free mass and fat-free mass density. These ratios can be effectively applied for developing body composition methods. In contrast, other ratios, such as total body potassium/fat-free mass, are highly variable in vivo and therefore are less useful for developing body composition models. In order to understand the mechanisms governing the variability of these component ratios, we have developed eight cellular level ratio models and from them we derived simplified models that share as a major determining factor the ratio of extracellular to intracellular water ratio (E/I). The E/I value varies widely among adults. Model analysis reveals that the magnitude and variability of each body component ratio can be predicted by correlating the cellular level model with the E/I value. Our approach thus provides new insights into and improved understanding of body composition ratios in adults.

  15. Analysis of a Physics Teacher's Pedagogical "Micro-Actions" That Support 17-Year-Olds' Learning of Free Body Diagrams via a Modelling Approach

    ERIC Educational Resources Information Center

    Tay, Su Lynn; Yeo, Jennifer

    2018-01-01

    Great teaching is characterised by the specific actions a teacher takes in the classroom to bring about learning. In the context of model-based teaching (MBT), teachers' difficulty in working with students' models that are not scientifically consistent is troubling. To address this problem, the aim of this study is to identify the pedagogical…

  16. Generation of an Atlas of the Proximal Femur and Its Application to Trabecular Bone Analysis

    PubMed Central

    Carballido-Gamio, Julio; Folkesson, Jenny; Karampinos, Dimitrios C.; Baum, Thomas; Link, Thomas M.; Majumdar, Sharmila; Krug, Roland

    2013-01-01

    Automatic placement of anatomically corresponding volumes of interest and comparison of parameters against a standard of reference are essential components in studies of trabecular bone. Only recently, in vivo MR images of the proximal femur, an important fracture site, could be acquired with high-spatial resolution. The purpose of this MRI trabecular bone study was two-fold: (1) to generate an atlas of the proximal femur to automatically place anatomically corresponding volumes of interest in a population study and (2) to demonstrate how mean models of geodesic topological analysis parameters can be generated to be used as potential standard of reference. Ten females were used to generate the atlas and geodesic topological analysis models, and 10 females were used to demonstrate the atlas-based trabecular bone analysis. All alignments were based on three-dimensional (3D) multiresolution affine transformations followed by 3D multiresolution free-form deformations. Mean distances less than 1 mm between aligned femora, and sharp edges in the atlas and in fused gray-level images of registered femora indicated that the anatomical variability was well accommodated and explained by the free-form deformations. PMID:21432904

  17. Genealogical and evolutionary inference with the human Y chromosome.

    PubMed

    Stumpf, M P; Goldstein, D B

    2001-03-02

    Population genetics has emerged as a powerful tool for unraveling human history. In addition to the study of mitochondrial and autosomal DNA, attention has recently focused on Y-chromosome variation. Ambiguities and inaccuracies in data analysis, however, pose an important obstacle to further development of the field. Here we review the methods available for genealogical inference using Y-chromosome data. Approaches can be divided into those that do and those that do not use an explicit population model in genealogical inference. We describe the strengths and weaknesses of these model-based and model-free approaches, as well as difficulties associated with the mutation process that affect both methods. In the case of genealogical inference using microsatellite loci, we use coalescent simulations to show that relatively simple generalizations of the mutation process can greatly increase the accuracy of genealogical inference. Because model-free and model-based approaches have different biases and limitations, we conclude that there is considerable benefit in the continued use of both types of approaches.

  18. RECURSIVE PROTEIN MODELING: A DIVIDE AND CONQUER STRATEGY FOR PROTEIN STRUCTURE PREDICTION AND ITS CASE STUDY IN CASP9

    PubMed Central

    CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN

    2013-01-01

    After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379

  19. Spatially distributed modal signals of free shallow membrane shell structronic system

    NASA Astrophysics Data System (ADS)

    Yue, H. H.; Deng, Z. Q.; Tzou, H. S.

    2008-11-01

    Based on smart material and structronics technology, distributed sensing and control of shell structures have developed rapidly over the last 20 years. This emerging technology has been utilized in aerospace, telecommunication, micro-electromechanical systems and other engineering applications. However, the distributed monitoring technique and its resulting global spatially distributed sensing signals for shallow paraboloidal membrane shells are not clearly understood. In this paper, the modeling of a free flexible paraboloidal shell with spatially distributed sensors, the micro-sensing signal characteristics, and the location of distributed piezoelectric sensor patches are investigated based on a new set of assumed mode shape functions. Parametric analysis indicates that the signal generation depends on modal membrane strains in the meridional and circumferential directions, of which the latter is more significant than the former, because all bending strains vanish in membrane shells. This study provides a modeling and analysis technique for distributed sensors laminated on lightweight paraboloidal flexible structures and identifies critical components and regions that generate significant signals.

  20. Numerical analysis of the transportation characteristics of a self-running sliding stage based on near-field acoustic levitation.

    PubMed

    Feng, Kai; Liu, Yuanyuan; Cheng, Miaomiao

    2015-12-01

    Owing to its distinct non-contact and oil-free characteristics, a self-running sliding stage based on near-field acoustic levitation can be used in environments that demand clean rooms and zero noise. This paper presents a numerical analysis of the lifting and transportation capacity of a non-contact transportation system. Two simplified structural models, namely free vibration and forced vibration models, are proposed for the study of the displacement amplitude distribution of two cases using the finite element method. After coupling the stage displacement into the film thickness, the Reynolds equation is solved by the finite difference method to obtain the lifting and thrusting forces. The effects of amplitude, frequency, and standing wave ratio (SWR) on the dynamic performance of the sliding stage are investigated parametrically. Numerical results show good agreement with published experimental values. The predictions also reveal that greater transportation capacity of the self-running sliding stage is generally achieved at lower SWR and higher amplitude.

  1. Dynamic Histogram Analysis To Determine Free Energies and Rates from Biased Simulations.

    PubMed

    Stelzl, Lukas S; Kells, Adam; Rosta, Edina; Hummer, Gerhard

    2017-12-12

    We present an algorithm to calculate free energies and rates from molecular simulations on biased potential energy surfaces. As input, it uses the accumulated times spent in each state or bin of a histogram and counts of transitions between them. Optimal unbiased equilibrium free energies for each of the states/bins are then obtained by maximizing the likelihood of a master equation (i.e., first-order kinetic rate model). The resulting free energies also determine the optimal rate coefficients for transitions between the states or bins on the biased potentials. Unbiased rates can be estimated, e.g., by imposing a linear free energy condition in the likelihood maximization. The resulting "dynamic histogram analysis method extended to detailed balance" (DHAMed) builds on the DHAM method. It is also closely related to the transition-based reweighting analysis method (TRAM) and the discrete TRAM (dTRAM). However, in the continuous-time formulation of DHAMed, the detailed balance constraints are more easily accounted for, resulting in compact expressions amenable to efficient numerical treatment. DHAMed produces accurate free energies in cases where the common weighted-histogram analysis method (WHAM) for umbrella sampling fails because of slow dynamics within the windows. Even in the limit of completely uncorrelated data, where WHAM is optimal in the maximum-likelihood sense, DHAMed results are nearly indistinguishable. We illustrate DHAMed with applications to ion channel conduction, RNA duplex formation, α-helix folding, and rate calculations from accelerated molecular dynamics. DHAMed can also be used to construct Markov state models from biased or replica-exchange molecular dynamics simulations. By using binless WHAM formulated as a numerical minimization problem, the bias factors for the individual states can be determined efficiently in a preprocessing step and, if needed, optimized globally afterward.
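
    The likelihood maximization described above operates on accumulated bin times and transition counts. As a hedged sketch of the general form such an objective takes (this is the generic log-likelihood of a continuous-time, first-order kinetic model, not the specific DHAMed objective with its detailed-balance constraints):

```latex
% Generic log-likelihood of a master-equation (first-order rate) model, given
% transition counts N_{ij} (bin j -> bin i) and accumulated times T_j spent in bin j.
\ln \mathcal{L}\big(\{k_{ij}\}\big) \;=\; \sum_{i \neq j} \Big( N_{ij}\,\ln k_{ij} \;-\; k_{ij}\,T_j \Big),
\qquad
\hat{k}_{ij} \;=\; \frac{N_{ij}}{T_j}\quad\text{(unconstrained maximum).}
```

    In DHAMed, additional constraints (detailed balance across the biased simulations) tie the rate coefficients to the unbiased free energies, so the unconstrained estimate above is only the starting point.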

  2. Text line extraction in free style document

    NASA Astrophysics Data System (ADS)

    Shen, Xiaolu; Liu, Changsong; Ding, Xiaoqing; Zou, Yanming

    2009-01-01

    This paper addresses text line extraction in free-style documents, such as business cards, envelopes, and posters. In free-style documents, global properties such as character size and line direction can hardly be inferred, which exposes a serious limitation of traditional layout analysis. 'Line' is the most prominent and highest-level structure in our bottom-up method. First, we apply a novel intensity function based on gradient information to locate text areas, where gradients within a window have large magnitudes and varied directions, and split such areas into text pieces. We then build a probability model of lines consisting of text pieces via statistics on training data. For an input image, we group text pieces into lines using a simulated annealing algorithm with a cost function based on the probability model.

  3. Replica and extreme-value analysis of the Jarzynski free-energy estimator

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo; Ritort, Felix

    2008-03-01

    We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes well Monte Carlo data for the expectation value of the estimator, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
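
    For reference, the estimator analyzed here follows from the Jarzynski equality; with beta = 1/k_BT, and with work expressed in units of k_BT in the scaling variable, a standard way to write it is:

```latex
% Jarzynski equality, its N-measurement estimator, and the scaling variable used above
% (work in the definition of x is taken in units of k_B T, following the abstract).
\Delta F \;=\; -\,\beta^{-1}\ln\big\langle e^{-\beta W}\big\rangle,
\qquad
\widehat{\Delta F}_N \;=\; -\,\beta^{-1}\ln\!\Big(\tfrac{1}{N}\sum_{i=1}^{N} e^{-\beta W_i}\Big),
\qquad
x \;=\; \frac{\ln N}{\overline{W}_{\mathrm{dis}}}.
```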

  4. Complex modal analysis of transverse free vibrations for axially moving nanobeams based on the nonlocal strain gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Shen, Huoming; Zhang, Bo; Liu, Juan; Zhang, Yingrong

    2018-07-01

    We investigate the transverse free vibration behaviour of axially moving nanobeams based on the nonlocal strain gradient theory. Considering the geometrical nonlinearity, which takes the form of von Kármán strains, the coupled plane motion equations and related boundary conditions of a new size-dependent beam model of Euler-Bernoulli type are developed using the generalized Hamilton principle. Using the simply supported axially moving nanobeams as an example, the complex modal analysis method is adopted to solve the governing equation; then, the effect of the order of modal truncation on the natural frequencies is discussed. Subsequently, the roles of the nonlocal parameter, material characteristic parameter, axial speed, stiffness and axial support rigidity parameter on the free vibration are comprehensively addressed. The material characteristic parameter induces the stiffness hardening of nanobeams, while the nonlocal parameter induces stiffness softening. In addition, the roles of small-scale parameters on the flutter critical velocity and stability are explained.

  5. Pressure distribution under flexible polishing tools. I - Conventional aspheric optics

    NASA Astrophysics Data System (ADS)

    Mehta, Pravin K.; Hufnagel, Robert E.

    1990-10-01

    The paper presents a mathematical model, based on Kirchhoff's thin flat plate theory, developed to determine the polishing pressure distribution for a flexible polishing tool. A two-layered tool in which bending and compressive stiffnesses are equal is developed and formulated as a plate on a linearly elastic foundation. An equivalent eigenvalue problem and solution for a free-free plate are created from the plate formulation. For aspheric, anamorphic optical surfaces, the tool misfit is derived; it is defined as the result of movement from the initial perfect fit on the optic to any other position. The Polisher Design (POD) software for circular tools on aspheric optics is introduced. NASTRAN-based finite element analysis results are compared with the POD software, showing high correlation. By employing existing free-free eigenvalues and eigenfunctions, the work may be extended to rectangular polishing tools as well.

  6. Predicting employees' well-being using work-family conflict and job strain models.

    PubMed

    Karimi, Leila; Karimi, Hamidreza; Nouri, Aboulghassem

    2011-04-01

    The present study examined the effects of two models of work–family conflict (WFC) and job-strain on the job-related and context-free well-being of employees. The participants of the study consisted of Iranian employees from a variety of organizations. The effects of three dimensions of the job-strain model and six forms of WFC on affective well-being were assessed. The results of hierarchical multiple regression analysis revealed that the number of working hours, strain-based work interfering with family life (WIF) along with job characteristic variables (i.e. supervisory support, job demands and job control) all make a significant contribution to the prediction of job-related well-being. On the other hand, strain-based WIF and family interfering with work (FIW) significantly predicted context-free well-being. Implications are drawn and recommendations made regarding future research and interventions in the workplace.

  7. Social power and opinion formation in complex networks

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2013-02-01

    In this paper we investigate the effects of social power on the evolution of opinions in model networks as well as in a number of real social networks. A continuous opinion formation model is considered and the analysis is performed through numerical simulation. Social power is given to a proportion of agents selected either randomly or based on their degrees. As artificial network structures, we consider scale-free networks constructed through preferential attachment and Watts-Strogatz networks. Numerical simulations show that scale-free networks with degree-based social power on the hub nodes have an optimal case in which the largest number of nodes reaches a consensus. However, giving power to a random selection of nodes did not improve the consensus properties. Introducing social power in Watts-Strogatz networks could not significantly change the consensus profile.
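
    A minimal simulation sketch of the kind of experiment described: opinions evolve on a Barabasi-Albert (preferential attachment) network, and a fraction of nodes selected by degree or at random carries extra social power. The averaging update, the confidence bound, and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
import networkx as nx

def simulate_opinions(n=500, m=3, powered_frac=0.1, power=5.0,
                      by_degree=True, steps=200, eps=0.3, seed=0):
    """Continuous opinion formation on a Barabasi-Albert network.

    At each step a random node averages its opinion with the (power-)weighted
    opinions of neighbours whose opinions lie within a confidence bound eps.
    Illustrative dynamics only, not the model used in the cited paper.
    """
    rng = np.random.default_rng(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    opinions = rng.uniform(0.0, 1.0, size=n)

    # Assign social power either to the highest-degree nodes or at random.
    k = int(powered_frac * n)
    if by_degree:
        powered = [v for v, _ in sorted(g.degree, key=lambda t: -t[1])[:k]]
    else:
        powered = rng.choice(n, size=k, replace=False)
    weight = np.ones(n)
    weight[list(powered)] = power

    for _ in range(steps * n):
        i = rng.integers(n)
        nbrs = [j for j in g[i] if abs(opinions[j] - opinions[i]) < eps]
        if not nbrs:
            continue
        opinions[i] = 0.5 * opinions[i] + 0.5 * np.average(opinions[nbrs],
                                                           weights=weight[nbrs])
    return opinions

if __name__ == "__main__":
    final = simulate_opinions(by_degree=True)
    # Rough consensus proxy: fraction of nodes within 0.05 of the median opinion.
    print((np.abs(final - np.median(final)) < 0.05).mean())
```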

  8. Free-free and fixed base modal survey tests of the Space Station Common Module Prototype

    NASA Technical Reports Server (NTRS)

    Driskill, T. C.; Anderson, J. B.; Coleman, A. D.

    1992-01-01

    This paper describes the testing aspects and the problems encountered during the free-free and fixed base modal surveys completed on the original Space Station Common Module Prototype (CMP). The CMP is a 40-ft long by 14.5-ft diameter 'waffle-grid' cylinder built by the Boeing Company and housed at the Marshall Space Flight Center (MSFC) near Huntsville, AL. The CMP modal survey tests were conducted at MSFC by the Dynamics Test Branch. The free-free modal survey tests (June '90 to Sept. '90) included interface verification tests (IFVT), often referred to as impedance measurements, mass-additive testing and linearity studies. The fixed base modal survey tests (Feb. '91 to April '91), including linearity studies, were conducted in a fixture designed to constrain the CMP in 7 total degrees-of-freedom at five trunnion interfaces (two primary, two secondary, and the keel). The fixture also incorporated an airbag off-load system designed to alleviate the non-linear effects of friction in the primary and secondary trunnion interfaces. Numerous test configurations were performed with the objective of providing a modal data base for evaluating the various testing methodologies to verify dynamic finite element models used for input to coupled load analysis.

  9. Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.

    PubMed

    Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen

    2018-05-01

    In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the proposed method relaxes the requirement for the system dynamics or an identifier. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. The explicit updating rules for these three neural networks are provided based on the data generated during online learning along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.

  10. On the Problem of Filtration to an Imperfect Gallery in a Pressureless Bed

    NASA Astrophysics Data System (ADS)

    Bereslavskii, É. N.; Dudina, L. M.

    2018-01-01

    The problem of plane steady-state filtration in a pressureless bed to an imperfect gallery in the presence of evaporation from the free surface of the flow is considered. To study this type of flow, a mixed boundary-value problem of the theory of analytical functions is formulated and solved with application of the Polubarinova-Kochina method. Based on the suggested model, an algorithm for computing the discharge of the gallery and the ordinate of free surface emergence to the impermeable screen is developed. A detailed hydrodynamic analysis of the influence of all physical parameters of the model on the desired filtration characteristics is given.

  11. Differential geometry based solvation model. III. Quantum formulation

    PubMed Central

    Chen, Zhan; Wei, Guo-Wei

    2011-01-01

    Solvation is of fundamental importance to biomolecular systems. Implicit solvent models, particularly those based on the Poisson-Boltzmann equation for electrostatic analysis, are established approaches for solvation analysis. However, ad hoc solvent-solute interfaces are commonly used in the implicit solvent theory. Recently, we have introduced differential geometry based solvation models which allow the solvent-solute interface to be determined by the variation of a total free energy functional. Atomic fixed partial charges (point charges) are used in our earlier models, which depend on existing molecular mechanics force field software packages for partial charge assignment. As most force field models are parameterized for a certain class of molecules or materials, the use of partial charges limits the accuracy and applicability of our earlier models. Moreover, fixed partial charges do not account for the charge rearrangement during the solvation process. The present work proposes a differential geometry based multiscale solvation model which makes use of the electron density computed directly from the quantum mechanical principle. To this end, we construct a new multiscale total energy functional which consists of not only polar and nonpolar solvation contributions, but also the electronic kinetic and potential energies. By using the Euler-Lagrange variation, we derive a system of three coupled governing equations, i.e., the generalized Poisson-Boltzmann equation for the electrostatic potential, the generalized Laplace-Beltrami equation for the solvent-solute boundary, and the Kohn-Sham equations for the electronic structure. We develop an iterative procedure to solve the three coupled equations and to minimize the solvation free energy. The present multiscale model is numerically validated for its stability, consistency and accuracy, and is applied to a few sets of molecules, including a case which is difficult for existing solvation models. Comparison is made to many other classic and quantum models. By using experimental data, we show that the present quantum formulation of our differential geometry based multiscale solvation model improves the prediction of our earlier models, and outperforms some explicit solvation models. PMID:22112067

  12. Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad

    2017-04-01

    Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
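
    A hedged sketch of the configuration reported as optimal (Principal Component Analysis for feature reduction, SMOTE for class imbalance, and a Random Forest classifier), using scikit-learn and imbalanced-learn. The placeholder data and all hyperparameters are illustrative, not those of the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # applies SMOTE only within training folds

# Placeholder data: rows = patients, columns = radiomic features.
rng = np.random.default_rng(0)
X = rng.normal(size=(112, 300))
y = (rng.uniform(size=112) < 0.25).astype(int)  # imbalanced endpoint, e.g. recurrence

pipe = Pipeline(steps=[
    ("pca", PCA(n_components=20)),            # reduce feature redundancy
    ("smote", SMOTE(random_state=0)),         # oversample the minority class
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```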

  13. Dynamical analysis of a fractional SIR model with birth and death on heterogeneous complex networks

    NASA Astrophysics Data System (ADS)

    Huo, Jingjing; Zhao, Hongyong

    2016-04-01

    In this paper, a fractional SIR model with birth and death rates on heterogeneous complex networks is proposed. Firstly, we obtain a threshold value R0 based on the existence of endemic equilibrium point E∗, which completely determines the dynamics of the model. Secondly, by using Lyapunov function and Kirchhoff's matrix tree theorem, the globally asymptotical stability of the disease-free equilibrium point E0 and the endemic equilibrium point E∗ of the model are investigated. That is, when R0 < 1, the disease-free equilibrium point E0 is globally asymptotically stable and the disease always dies out; when R0 > 1, the disease-free equilibrium point E0 becomes unstable and in the meantime there exists a unique endemic equilibrium point E∗, which is globally asymptotically stable and the disease is uniformly persistent. Finally, the effects of various immunization schemes are studied and compared. Numerical simulations are given to demonstrate the main results.

  14. Trajectory-Based Loads for the Ares I-X Test Flight Vehicle

    NASA Technical Reports Server (NTRS)

    Vause, Roland F.; Starr, Brett R.

    2011-01-01

    In trajectory-based loads, the structural engineer treats each point on the trajectory as a load case. Distributed aero, inertial, and propulsion forces are developed for the structural model which are equivalent to the integrated values of the trajectory model. Free-body diagrams are then used to solve for the internal forces, or loads, that keep the applied aero, inertial, and propulsion forces in dynamic equilibrium. There are several advantages to using trajectory-based loads. First, consistency is maintained between the integrated equilibrium equations of the trajectory analysis and the distributed equilibrium equations of the structural analysis. Second, the structural loads equations are tied to the uncertainty model for the trajectory systems analysis model. Atmosphere, aero, propulsion, mass property, and controls uncertainty models all feed into the dispersions that are generated for the trajectory systems analysis model. Changes in any of these input models will affect structural loads response. The trajectory systems model manages these inputs as well as the output from the structural model over thousands of dispersed cases. Large structural models with hundreds of thousands of degrees of freedom would execute too slowly to be an efficient part of several thousand system analyses. Trajectory-based loads provide a means for the structures discipline to be included in the integrated systems analysis. Successful applications of trajectory-based loads methods for the Ares I-X vehicle are covered in this paper. Preliminary design loads were based on 2000 trajectories using Monte Carlo dispersions. Range safety loads were tied to 8423 malfunction turn trajectories. In addition, active control system loads were based on 2000 preflight trajectories using Monte Carlo dispersions.

  15. The sagittal stem alignment and the stem version clearly influence the impingement-free range of motion in total hip arthroplasty: a computer model-based analysis.

    PubMed

    Müller, Michael; Duda, Georg; Perka, Carsten; Tohtz, Stephan

    2016-03-01

    The component alignment in total hip arthroplasty influences the impingement-free range of motion (ROM). While substantiated data are available for cup positioning, little is known about stem alignment. In particular, stem rotation and sagittal alignment influence the position of the cone in relation to the edge of the socket and thus the impingement-free functioning. Hence, the question arises as to what influence these parameters have on the impingement-free ROM. With the help of a computer model, the influence of sagittal stem alignment and stem rotation on the impingement-free ROM was investigated. The computer model was based on the CT dataset of a patient with a non-cemented THA. In the model, the stem version was set at 10°/0°/-10° and the sagittal alignment at 5°/0°/-5°, which resulted in nine alternative stem positions. For each position, the maximum impingement-free ROM was investigated. Both stem version and sagittal stem alignment have a relevant influence on the impingement-free ROM. In particular, flexion and extension as well as internal and external rotation capability show evident differences. Across the intervals of 10° in sagittal stem alignment and 20° in stem version, differences of about 80° in flexion and 50° in extension capability were found. Likewise, differences of up to 72° in internal and up to 36° in external rotation were observed. The sagittal stem alignment and the stem torsion have a relevant influence on the impingement-free ROM. To clarify the causes of an impingement or accompanying problems, both parameters should be examined and, if possible, a combined assessment of these factors should be made.

  16. Free vibration analysis of embedded magneto-electro-thermo-elastic cylindrical nanoshell based on the modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Ghadiri, Majid; Safarpour, Hamed

    2016-09-01

    In this paper, the size-dependent free vibration behavior of an embedded magneto-electro-elastic (MEE) nanoshell subjected to thermo-electro-magnetic loadings is investigated. The surrounding elastic medium is modeled as a Winkler foundation characterized by springs. The size-dependent MEE nanoshell is investigated on the basis of the modified couple stress theory. Based on the first-order shear deformation theory (FSDT), the nanoshell model and its equations of motion are derived using the principle of minimum potential energy. The accuracy of the presented model is validated against several cases from the literature. Finally, using the Navier-type method, an analytical solution of the governing equations for the vibration behavior of a simply supported MEE cylindrical nanoshell under combined loadings is presented, and the effects of the material length scale parameter, temperature changes, external electric potential, external magnetic potential, circumferential wave numbers, spring constant, shear correction factor and length-to-radius ratio of the nanoshell on the natural frequency are identified. Since there has been no research on the size-dependent analysis of MEE cylindrical nanoshells under combined loadings based on FSDT, the numerical results are presented to serve as benchmarks for future analyses of MEE nanoshells using the modified couple stress theory.

  17. A statistical model of the human core-temperature circadian rhythm

    NASA Technical Reports Server (NTRS)

    Brown, E. N.; Choe, Y.; Luithardt, H.; Czeisler, C. A.

    2000-01-01

    We formulate a statistical model of the human core-temperature circadian rhythm in which the circadian signal is modeled as a van der Pol oscillator, the thermoregulatory response is represented as a first-order autoregressive process, and the evoked effect of activity is modeled with a function specific for each circadian protocol. The new model directly links differential equation-based simulation models and harmonic regression analysis methods and permits statistical analysis of both static and dynamical properties of the circadian pacemaker from experimental data. We estimate the model parameters by using numerically efficient maximum likelihood algorithms and analyze human core-temperature data from forced desynchrony, free-run, and constant-routine protocols. By representing explicitly the dynamical effects of ambient light input to the human circadian pacemaker, the new model can estimate with high precision the correct intrinsic period of this oscillator ( approximately 24 h) from both free-run and forced desynchrony studies. Although the van der Pol model approximates well the dynamical features of the circadian pacemaker, the optimal dynamical model of the human biological clock may have a harmonic structure different from that of the van der Pol oscillator.
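
    A minimal sketch of the model structure described: a van der Pol oscillator for the circadian signal, a first-order autoregressive process for the thermoregulatory response, and a protocol-specific evoked-activity term added to a baseline temperature. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, x, mu=0.1, period_h=24.0):
    """Van der Pol oscillator tuned to an ~24 h intrinsic period."""
    omega = 2.0 * np.pi / period_h
    x1, x2 = x
    return [x2, mu * (1.0 - x1 ** 2) * x2 - omega ** 2 * x1]

# Circadian component sampled every 6 minutes for 72 hours.
t = np.arange(0.0, 72.0, 0.1)
sol = solve_ivp(van_der_pol, (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-8)
circadian = 0.8 * sol.y[0]                      # degrees C, illustrative amplitude

# First-order autoregressive (AR(1)) thermoregulatory noise.
rng = np.random.default_rng(1)
phi, sigma = 0.95, 0.02
ar = np.zeros_like(t)
for k in range(1, len(t)):
    ar[k] = phi * ar[k - 1] + rng.normal(scale=sigma)

# Evoked effect of activity: a simple square-wave "wake" elevation (protocol specific).
evoked = 0.3 * ((t % 24.0) < 16.0)

core_temp = 37.0 + circadian + ar + evoked      # simulated core-temperature series
print(core_temp[:5])
```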

  18. Value of the distant future: Model-independent results

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.

    2017-01-01

    This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive an analytical expression for the appropriate value of the long-run discount factor and provide a detailed comparison of the result with the outcomes of benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment rigorous cost-benefit and regulatory impact analyses of long-term environmental and infrastructure projects.
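
    For orientation, the Markovian benchmark that the paper generalizes is the consumption-based Ramsey rule: with pure time preference delta, elasticity of marginal utility eta, and i.i.d. lognormal log-consumption growth with mean g and volatility sigma, the risk-free discount rate is constant and the discount curve is flat. Persistent (exponentially decaying) correlations are what produce the declining long-term tail discussed above; the formula below is only the standard benchmark, not the paper's generalization.

```latex
% Benchmark Ramsey rule under i.i.d. lognormal consumption growth (standard result;
% the cited paper derives a non-Markovian generalization of this formula).
r \;=\; \delta \;+\; \eta\, g \;-\; \tfrac{1}{2}\,\eta^{2}\sigma^{2},
\qquad
D(t) \;=\; e^{-r t}\quad\text{(flat term structure in the i.i.d. case).}
```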

  19. SPLASH program for three dimensional fluid dynamics with free surface boundaries

    NASA Astrophysics Data System (ADS)

    Yamaguchi, A.

    1996-05-01

    This paper describes a three-dimensional computer program, SPLASH, that solves the Navier-Stokes equations based on the Arbitrary Lagrangian Eulerian (ALE) finite element method. SPLASH has been developed for application to fluid dynamics problems including the moving boundaries of a liquid-metal-cooled Fast Breeder Reactor (FBR). To apply the SPLASH code to free surface behavior analysis, a capillary model using a cubic spline function has been developed. Several sample problems, e.g., free surface oscillation, vortex shedding development, and capillary tube phenomena, are solved to verify the computer program. In these analyses, the numerical results are in good agreement with theoretical values or experimental observations. The SPLASH code has also been applied to the analysis of a free surface sloshing experiment coupled with forced circulation flow in a rectangular tank, a simplified version of the flow field in a reactor vessel of the FBR. The computational simulation predicts well the general behavior of the fluid flow inside the tank and of the free surface. The analytical capability of the SPLASH code has been verified in this study, and its application to more practical problems such as FBR design and safety analysis is under way.

  20. A predictive framework for evaluating models of semantic organization in free recall

    PubMed Central

    Morton, Neal W; Polyn, Sean M.

    2016-01-01

    Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure, latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization is driven by distinct item and context cues, rather than a single context cue. This finding provides an important constraint for theories of memory search. PMID:28331243

  1. Syntheses, structural, computational, and thermal analysis of acid-base complexes of picric acid with N-heterocyclic bases.

    PubMed

    Goel, Nidhi; Singh, Udai P

    2013-10-10

    Four new acid-base complexes using picric acid [(OH)(NO2)3C6H2] (PA) and N-heterocyclic bases (1,10-phenanthroline (phen)/2,2';6',2"-terpyridine (terpy)/hexamethylenetetramine (hmta)/2,4,6-tri(2-pyridyl)-1,3,5-triazine (tptz)) were prepared and characterized by elemental analysis, IR, NMR and X-ray crystallography. The crystal structures provide detailed information on the noncovalent interactions present in the different complexes. The optimized structures of the complexes were calculated using density functional theory. The thermolysis of these complexes was investigated by TG-DSC and ignition delay measurements. Model-free isoconversional and model-fitting kinetic approaches were applied to the isothermal TG data to investigate the kinetics of thermal decomposition of these complexes.
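
    The model-free isoconversional treatment of isothermal TG data typically rests on the standard relation below: at a fixed conversion alpha, the time t_{alpha,i} needed to reach that conversion at temperature T_i gives a straight line of ln t_{alpha,i} against 1/T_i with slope E_alpha/R. This is the generic textbook form, not a reproduction of the paper's equations.

```latex
% Isothermal isoconversional (model-free) relation: for each conversion level \alpha,
% a plot of \ln t_{\alpha,i} versus 1/T_i yields the activation energy E_\alpha.
\ln t_{\alpha,i} \;=\; \ln\!\left[\frac{g(\alpha)}{A_\alpha}\right] \;+\; \frac{E_\alpha}{R\,T_i}.
```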

  2. ExoMars Entry Demonstrator Module Dynamic Stability

    NASA Astrophysics Data System (ADS)

    Dormieux, Marc; Gulhan, Ali; Berner, Claude

    2011-05-01

    In the framework of ExoMars DM aerodynamic characterization, determination of the pitch damping derivatives is required as it drives the parachute deployment conditions. Series of free-flight and free-oscillation tests (captive model) have been conducted with particular attention to data reduction. Six-Degrees-of-Freedom (6-DoF) analysis tools require knowledge of the local damping derivatives. In general, ground tests do not provide them directly but only effective damping derivatives. Free-flight (ballistic range) tests with full oscillations around the trim angle have been performed at ISL for 0.5

  3. Composite multi-parameter ranking of real and virtual compounds for design of MC4R agonists: renaissance of the Free-Wilson methodology.

    PubMed

    Nilsson, Ingemar; Polla, Magnus O

    2012-10-01

    Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.

  4. Has Childhood Smoking Reduced Following Smoke-Free Public Places Legislation? A Segmented Regression Analysis of Cross-Sectional UK School-Based Surveys

    PubMed Central

    Der, Geoff; Roberts, Chris; Haw, Sally

    2016-01-01

    Introduction: Smoke-free legislation has been a great success for tobacco control but its impact on smoking uptake remains under-explored. We investigated if trends in smoking uptake amongst adolescents differed before and after the introduction of smoke-free legislation in the United Kingdom. Methods: Prevalence estimates for regular smoking were obtained from representative school-based surveys for the four countries of the United Kingdom. Post-intervention status was represented using a dummy variable and to allow for a change in trend, the number of years since implementation was included. To estimate the association between smoke-free legislation and adolescent smoking, the percentage of regular smokers was modeled using linear regression adjusted for trends over time and country. All models were stratified by age (13 and 15 years) and sex. Results: For 15-year-old girls, the implementation of smoke-free legislation in the United Kingdom was associated with a 4.3% reduction in the prevalence of regular smoking (P = .029). In addition, regular smoking fell by an additional 1.5% per annum post-legislation in this group (P = .005). Among 13-year-old girls, there was a reduction of 2.8% in regular smoking (P = .051), with no evidence of a change in trend post-legislation. Smaller and nonsignificant reductions in regular smoking were observed for 15- and 13-year-old boys (P = .175 and P = .113, respectively). Conclusions: Smoke-free legislation may help reduce smoking uptake amongst teenagers, with stronger evidence for an association seen in females. Further research that analyses longitudinal data across more countries is required. Implications: Previous research has established that smoke-free legislation has led to many improvements in population health, including reductions in heart attack, stroke, and asthma. However, the impacts of smoke-free legislation on the rates of smoking amongst children have been less investigated. Analysis of repeated cross-sectional surveys across the four countries of the United Kingdom shows smoke-free legislation may be associated with a reduction in regular smoking among school-aged children. If this association is causal, comprehensive smoke-free legislation could help prevent future generations from taking up smoking. PMID:26911840
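
    A hedged sketch of the segmented (interrupted time-series) regression described above: prevalence is modeled on a secular time trend, country, a post-legislation indicator (level change), and years since implementation (trend change), normally fitted separately for each age/sex stratum. The toy data frame, implementation years, and variable names are illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per survey estimate (% regular smokers) per country and year.
df = pd.DataFrame({
    "prevalence": [24, 22, 21, 19, 16, 14, 12, 11,
                   23, 21, 20, 18, 15, 13, 12, 10],
    "year": list(range(2000, 2016, 2)) * 2,
    "country": ["England"] * 8 + ["Scotland"] * 8,
})

# Illustrative implementation years of the smoke-free legislation.
impl = {"England": 2007, "Scotland": 2006}
df["post"] = (df["year"] >= df["country"].map(impl)).astype(int)          # level change
df["years_since"] = (df["year"] - df["country"].map(impl)).clip(lower=0)  # slope change

# Segmented regression adjusted for the secular trend and country,
# in practice fitted separately for each age/sex stratum.
fit = smf.ols("prevalence ~ year + post + years_since + C(country)", data=df).fit()
print(fit.summary())
```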

  5. Porcine experimental model for perforator flap raising in reconstructive microsurgery.

    PubMed

    González-García, José A; Chiesa-Estomba, Carlos M; Álvarez, Leire; Altuna, Xabier; García-Iza, Leire; Thomas, Izaskun; Sistiaga, Jon A; Larruscain, Ekhiñe

    2018-07-01

    Perforator free flap-based reconstruction of the head and neck is a challenging surgical procedure with a steep learning curve. A reproducible large-animal mammalian model with similarities to human anatomy is relevant for perforator flap raising and microanastomosis. The aim of this study was to assess the feasibility of a swine model for perforator-based free flaps in reconstructive microsurgery. Eleven procedures were performed under general anesthesia in a porcine model, elevating a skin flap vascularized by perforating musculocutaneous branches of the superior epigastric artery to evaluate the relevance of this model for head and neck reconstructive microsurgery. The anterior abdominal skin perforator-based free flap, irrigated by the superior epigastric artery, was elevated in eleven procedures. In six of these procedures, we could perform an arterial and venous microanastomosis to the great vessels located in the base of the neck. The porcine experimental model of superior epigastric artery perforator-based free flap reconstruction offers relevant similarities to the human deep inferior epigastric artery perforator flap. This model proved acceptable for perforator free flap training because it requires perforator and pedicle dissection and transfer to a distant area. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Efficient alignment-free DNA barcode analytics

    PubMed Central

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-01-01

    Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
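
    A minimal sketch of the fixed-length spectrum representation these methods build on: each barcode is mapped to its k-mer count vector and sequences are compared directly between spectra, with no alignment. The choice of k and of cosine similarity is illustrative.

```python
from itertools import product
import numpy as np

def kmer_spectrum(seq, k=4, alphabet="ACGT"):
    """Fixed-length k-mer count vector (the 'spectrum') of a DNA barcode."""
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    vec = np.zeros(len(index))
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:                  # skips ambiguous bases such as 'N'
            vec[index[kmer]] += 1
    return vec

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Toy barcodes: alignment-free comparison via their spectra.
a = kmer_spectrum("ACGTACGTTAGGCTAACGTT")
b = kmer_spectrum("ACGTACGATAGGCTAACGTA")
print(cosine_similarity(a, b))
```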

  7. Lost in folding space? Comparing four variants of the thermodynamic model for RNA secondary structure prediction.

    PubMed

    Janssen, Stefan; Schudoma, Christian; Steger, Gerhard; Giegerich, Robert

    2011-11-03

    Many bioinformatics tools for RNA secondary structure analysis are based on a thermodynamic model of RNA folding. They predict a single, "optimal" structure by free energy minimization, enumerate near-optimal structures, or compute base pair probabilities and dot plots, representative structures of different abstract shapes, and Boltzmann probabilities of structures and shapes. Although all programs refer to the same physical model, they implement it with considerable variation for different tasks, and little is known about the effects of the heuristic assumptions and model simplifications used by the programs on the outcome of the analysis. We extract four different models of the thermodynamic folding space which underlie the programs RNAFOLD, RNASHAPES, and RNASUBOPT. Their differences lie within the details of the energy model and the granularity of the folding space. We implement probabilistic shape analysis for all models, and introduce the shape probability shift as a robust measure of model similarity. Using four data sets derived from experimentally solved structures, we provide a quantitative evaluation of the model differences. We find that search space granularity affects the computed shape probabilities less than the over- or underapproximation of free energy by a simplified energy model. Still, the approximations perform similarly enough to implementations of the full model to justify their continued use in settings where computational constraints call for simpler algorithms. As an aside, we observe that the rarely used level 2 shapes, which predict the complete arrangement of helices, multiloops, internal loops and bulges, include the "true" shape in a rather small number of predicted high probability shapes. This calls for an investigation of new strategies to extract high probability members from the (very large) level 2 shape space of an RNA sequence. We provide implementations of all four models, written in a declarative style that makes them easy to modify. Based on our study, future work on thermodynamic RNA folding may make a choice of model based on our empirical data, and can take our implementations as a starting point for further program development.

  8. Wrinkle-free design of thin membrane structures using stress-based topology optimization

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-05-01

    Thin membrane structures experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and the linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.

  9. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
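
    As a hedged sketch of the coupled non-linear differential equations usually written for the bivalent analyte mechanism (analyte A binds ligand L to form AL, which can engage a second ligand to form AL2, with the response proportional to bound complexes): the rate constants and concentrations below are illustrative, and conventions for statistical factors differ between analysis packages.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bivalent_analyte(t, y, C, L_tot, ka1, kd1, ka2, kd2):
    """ODEs for A + L <-> AL (ka1, kd1) and AL + L <-> AL2 (ka2, kd2).

    y = [AL, AL2]; free ligand L = L_tot - AL - 2*AL2; the analyte concentration C
    is held constant during injection. Statistical factors of 2 are omitted here;
    conventions differ between software packages.
    """
    AL, AL2 = y
    L = L_tot - AL - 2.0 * AL2
    dAL = ka1 * C * L - kd1 * AL - ka2 * AL * L + kd2 * AL2
    dAL2 = ka2 * AL * L - kd2 * AL2
    return [dAL, dAL2]

# Illustrative parameters (not fitted values from the cited note).
params = dict(C=50e-9, L_tot=1.0, ka1=1e5, kd1=1e-3, ka2=1e-4, kd2=1e-3)
t = np.linspace(0.0, 600.0, 601)
sol = solve_ivp(bivalent_analyte, (0.0, 600.0), [0.0, 0.0], t_eval=t,
                args=tuple(params.values()), rtol=1e-8)

response = sol.y[0] + sol.y[1]   # bound analyte (arbitrary units)
print(response[-1])
```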

  10. Model-Free Feature Screening for Ultrahigh Dimensional Discriminant Analysis

    PubMed Central

    Cui, Hengjian; Li, Runze

    2014-01-01

    This work is concerned with marginal sure independence feature screening for ultra-high dimensional discriminant analysis. The response variable is categorical in discriminant analysis. This enables us to use the conditional distribution function to construct a new index for feature screening. In this paper, we propose a marginal feature screening procedure based on the empirical conditional distribution function. We establish the sure screening and ranking consistency properties for the proposed procedure without assuming any moment condition on the predictors. The proposed procedure enjoys several appealing merits. First, it is model-free in that its implementation does not require specification of a regression model. Second, it is robust to heavy-tailed distributions of predictors and the presence of potential outliers. Third, it allows the categorical response to have a diverging number of classes of order O(n^κ) for some κ ≥ 0. We assess the finite-sample properties of the proposed procedure by Monte Carlo simulation studies and numerical comparison. We further illustrate the proposed methodology by empirical analyses of two real-life data sets. PMID:26392643
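
    A hedged sketch of a screening index built from empirical conditional distribution functions: for each feature, the class-conditional CDFs are compared with the pooled CDF and the discrepancies are averaged over the feature's own distribution; features are then ranked by this index. This follows the general idea described above, but the exact index and notation of the paper may differ.

```python
import numpy as np

def cdf_screening_index(x, y):
    """Marginal screening index for one feature x against a categorical response y.

    index = sum_r p_r * mean_x[ (F_hat(x | y = r) - F_hat(x))^2 ],
    where F_hat are empirical CDFs. Larger values suggest stronger marginal
    association. This is an illustrative form of a conditional-CDF index.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y)
    F_all = np.array([(x <= t).mean() for t in x])   # pooled empirical CDF at each x
    index = 0.0
    for r in np.unique(y):
        mask = (y == r)
        p_r = mask.mean()
        F_r = np.array([(x[mask] <= t).mean() for t in x])
        index += p_r * np.mean((F_r - F_all) ** 2)
    return index

def screen_features(X, y, top_k=10):
    """Rank all columns of X by the index and keep the top_k features."""
    scores = np.array([cdf_screening_index(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top_k], scores

# Toy example: only the first feature is informative about the class label.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=200)
X = rng.normal(size=(200, 50))
X[:, 0] += y                       # shift feature 0 by class
keep, scores = screen_features(X, y, top_k=5)
print(keep)
```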

  11. Analysis of free modeling predictions by RBO aleph in CASP11.

    PubMed

    Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver

    2016-09-01

    The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue-residue contact prediction by EPC-map and contact-guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: Improvements in components of the method do not necessarily lead to improvements of the entire method. This points to the fact that these components interact in ways that are poorly understood. This problem, if indeed true, represents a significant obstacle to community-wide progress. Proteins 2016; 84(Suppl 1):87-104. © 2015 Wiley Periodicals, Inc.

  12. POD Analysis of Jet-Plume/Afterbody-Wake Interaction

    NASA Astrophysics Data System (ADS)

    Murray, Nathan E.; Seiner, John M.; Jansen, Bernard J.; Gui, Lichuan; Sockwell, Shuan; Joachim, Matthew

    2009-11-01

    The understanding of the flow physics in the base region of a powered rocket is one of the keys to designing the next generation of reusable launchers. The base flow features affect the aerodynamics and the heat loading at the base of the vehicle. Recent efforts at the National Center for Physical Acoustics at the University of Mississippi have refurbished two models for studying jet-plume/afterbody-wake interactions in the NCPA's 1-foot Tri-Sonic Wind Tunnel Facility. Both models have a 2.5 inch outer diameter with a nominally 0.5 inch diameter centered exhaust nozzle. One of the models is capable of being powered with gaseous H2 and O2 to study the base flow in a fully combusting scenario. The second model uses high-pressure air to drive the exhaust, providing an unheated representative flow field. This unheated model was used to acquire PIV data of the base flow. Subsequently, a POD analysis was performed to provide a first look at the large-scale structures present in the interaction between an axisymmetric jet and an axisymmetric afterbody wake. PIV and Schlieren data are presented for a single ratio of jet-exhaust to free-stream flow velocity, along with the POD analysis of the base flow field.
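
    A minimal sketch of snapshot POD as commonly applied to PIV data: fluctuating velocity snapshots are stacked as columns and decomposed with an SVD, whose left singular vectors are the spatial modes and whose squared singular values rank the modal energy. The data below are synthetic placeholders, not the NCPA measurements.

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD via SVD.

    snapshots: array of shape (n_points, n_snapshots), one flattened PIV vector
    field per column. Returns spatial modes, relative modal energies, and the
    temporal coefficients of each mode.
    """
    mean_field = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean_field                      # subtract the mean flow
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)                    # relative modal energy
    coeffs = np.diag(s) @ Vt                            # temporal coefficients
    return U, energy, coeffs

# Synthetic stand-in for 200 PIV snapshots on a 50x40 grid (2 velocity components).
rng = np.random.default_rng(0)
n_points, n_snaps = 50 * 40 * 2, 200
data = rng.normal(size=(n_points, n_snaps))
modes, energy, a = snapshot_pod(data)
print("energy captured by first 5 modes:", energy[:5].sum())
```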

  13. Design and analysis of SEIQR worm propagation model in mobile internet

    NASA Astrophysics Data System (ADS)

    Xiao, Xi; Fu, Peng; Dou, Changsheng; Li, Qing; Hu, Guangwu; Xia, Shutao

    2017-02-01

    The mobile Internet has considerably facilitated daily life in recent years. However, it has become a breeding ground for many new worms, including Bluetooth-based, SMS/MMS-based and Wi-Fi-based worms. At present, Wi-Fi is widely used for mobile devices to connect to the Internet, but it also exposes these devices to a dangerous environment. Most current worm propagation models aim to solve the problems of computer worms. They cannot be used directly in the mobile environment, particularly in the Wi-Fi scenario, because of the differences between computers and mobile devices. In this paper, we propose a worm propagation model for the Wi-Fi environment, called SEIQR (Susceptible-Exposed-Infectious-Quarantined-Recovered). In the model, infected nodes can be quarantined by the Wi-Fi base station, and a new state named the Quarantined state (Q) is established to represent these infected nodes. Based on this model, we present an effective method to inhibit the spread of Wi-Fi-based worms. Furthermore, the stabilities of the worm-free and endemic equilibria are studied based on the basic reproduction number R0. The worm-free equilibrium is locally and globally asymptotically stable if R0 < 1, whereas the endemic equilibrium is locally asymptotically stable if R0 > 1. Finally, we evaluate the performance of our model by comprehensive experiments with different infection rates and quarantine rates. The results indicate that our mechanism can combat the worms propagated via Wi-Fi.
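
    The stability statement above refers to the basic reproduction number R0. As a hedged illustration of the SEIQR state structure (a well-mixed compartmental simplification, not the paper's Wi-Fi network formulation), with all rates chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

def seiqr(t, y, beta, sigma, gamma, q, mu):
    """Well-mixed SEIQR sketch: S -> E -> I -> (Q or R), Q -> R.

    beta: infection rate, sigma: E->I rate, gamma: recovery rate,
    q: quarantine rate of infectious devices, mu: Q->R recovery rate.
    Simplified illustration only, not the Wi-Fi network model of the paper.
    """
    S, E, I, Q, R = y
    N = S + E + I + Q + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - (gamma + q) * I
    dQ = q * I - mu * Q
    dR = gamma * I + mu * Q
    return [dS, dE, dI, dQ, dR]

params = (0.6, 0.3, 0.1, 0.2, 0.1)      # beta, sigma, gamma, q, mu (illustrative)
beta, sigma, gamma, q, mu = params
R0 = beta / (gamma + q)                 # threshold for this simplified model
y0 = [0.99, 0.0, 0.01, 0.0, 0.0]
sol = solve_ivp(seiqr, (0.0, 200.0), y0, args=params,
                t_eval=np.linspace(0.0, 200.0, 201))
print("R0 =", R0, " final recovered fraction =", sol.y[4, -1])
```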

  14. Differential geometry based solvation model II: Lagrangian formulation.

    PubMed

    Chen, Zhan; Baker, Nathan A; Wei, G W

    2011-12-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of computation, thanks to the equivalence of the Laplace-Beltrami operator in the two representations. The coupled partial differential equations (PDEs) are solved with an iterative procedure to reach a steady state, which delivers desired solvent-solute interface and electrostatic potential for problems of interest. These quantities are utilized to evaluate the solvation free energies and protein-protein binding affinities. A number of computational methods and algorithms are described for the interconversion of Lagrangian and Eulerian representations, and for the solution of the coupled PDE system. The proposed approaches have been extensively validated. We also verify that the mean curvature flow indeed gives rise to the minimal molecular surface and the proposed variational procedure indeed offers minimal total free energy. Solvation analysis and applications are considered for a set of 17 small compounds and a set of 23 proteins. The salt effect on protein-protein binding affinity is investigated with two protein complexes by using the present model. Numerical results are compared to the experimental measurements and to those obtained by using other theoretical methods in the literature. © Springer-Verlag 2011

  15. Differential geometry based solvation model II: Lagrangian formulation

    PubMed Central

    Chen, Zhan; Baker, Nathan A.; Wei, G. W.

    2010-01-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation model. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory (SPT) of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The minimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and Poisson-Boltzmann equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of computation, thanks to the equivalence of the Laplace-Beltrami operator in the two representations. The coupled partial differential equations (PDEs) are solved with an iterative procedure to reach a steady state, which delivers desired solvent-solute interface and electrostatic potential for problems of interest. These quantities are utilized to evaluate the solvation free energies and protein-protein binding affinities. A number of computational methods and algorithms are described for the interconversion of Lagrangian and Eulerian representations, and for the solution of the coupled PDE system. The proposed approaches have been extensively validated. We also verify that the mean curvature flow indeed gives rise to the minimal molecular surface (MMS) and the proposed variational procedure indeed offers minimal total free energy. Solvation analysis and applications are considered for a set of 17 small compounds and a set of 23 proteins. The salt effect on protein-protein binding affinity is investigated with two protein complexes by using the present model. Numerical results are compared to the experimental measurements and to those obtained by using other theoretical methods in the literature. PMID:21279359

  16. From creatures of habit to goal-directed learners: Tracking the developmental emergence of model-based reinforcement learning

    PubMed Central

    Decker, Johannes H.; Otto, A. Ross; Daw, Nathaniel D.; Hartley, Catherine A.

    2016-01-01

    Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enables estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was evident in choice behavior across all age groups, evidence of a model-based strategy only emerged during adolescence and continued to increase into adulthood. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. PMID:27084852
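
    As an illustration of how model-free and model-based valuations can be combined in a two-stage task of this kind, the sketch below mixes a TD-updated value table with transition-based planning through a weighting parameter w. The task structure, learning rate, inverse temperature, and reward probabilities are arbitrary placeholders, not parameters estimated from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-stage task structure (illustrative): first-stage action 0/1 leads with
# probability 0.7 to second-stage state 0/1 respectively ("common" transition)
# and with probability 0.3 to the other state ("rare" transition).
P = np.array([[0.7, 0.3],
              [0.3, 0.7]])          # P[first-stage action, second-stage state]

alpha, beta, w = 0.3, 3.0, 0.5      # learning rate, inverse temperature, MB weight
q_mf = np.zeros(2)                  # model-free values of first-stage actions
q_stage2 = np.zeros(2)              # learned values of the second-stage states
reward_prob = np.array([0.8, 0.2])  # fixed here; slowly drifting in the real task

def softmax_choice(values):
    p = np.exp(beta * values)
    p /= p.sum()
    return rng.choice(len(values), p=p)

for trial in range(200):
    q_mb = P @ q_stage2                      # model-based: plan through transitions
    q_net = w * q_mb + (1.0 - w) * q_mf      # hybrid valuation
    a = softmax_choice(q_net)                # first-stage choice
    s2 = rng.choice(2, p=P[a])               # sampled second-stage state
    r = float(rng.random() < reward_prob[s2])
    q_stage2[s2] += alpha * (r - q_stage2[s2])   # TD update, second stage
    q_mf[a] += alpha * (r - q_mf[a])             # TD update, model-free first stage

print("model-free Q:", np.round(q_mf, 2), " model-based Q:", np.round(P @ q_stage2, 2))
```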

  17. Highly Reproducible Label Free Quantitative Proteomic Analysis of RNA Polymerase Complexes*

    PubMed Central

    Mosley, Amber L.; Sardiu, Mihaela E.; Pattenden, Samantha G.; Workman, Jerry L.; Florens, Laurence; Washburn, Michael P.

    2011-01-01

    The use of quantitative proteomics methods to study protein complexes has the potential to provide in-depth information on the abundance of different protein components as well as their modification state in various cellular conditions. To interrogate protein complex quantitation using shotgun proteomic methods, we have focused on the analysis of protein complexes using label-free multidimensional protein identification technology and studied the reproducibility of biological replicates. For these studies, we focused on three highly related and essential multi-protein enzymes, RNA polymerase I, II, and III from Saccharomyces cerevisiae. We found that label-free quantitation using spectral counting is highly reproducible at the protein and peptide level when analyzing RNA polymerase I, II, and III. In addition, we show that peptide sampling does not follow a random sampling model, and we show the need for advanced computational models to predict peptide detection probabilities. In order to address these issues, we used the APEX protocol to model the expected peptide detectability based on whole cell lysate acquired using the same multidimensional protein identification technology analysis used for the protein complexes. Neither method was able to predict the peptide sampling levels that we observed using replicate multidimensional protein identification technology analyses. In addition to the analysis of the RNA polymerase complexes, our analysis provides quantitative information about several RNAP associated proteins including the RNAPII elongation factor complexes DSIF and TFIIF. Our data show that DSIF and TFIIF are the most highly enriched RNAP accessory factors in Rpb3-TAP purifications and demonstrate our ability to measure low-level associated protein abundance across biological replicates. In addition, our quantitative data support a model in which DSIF and TFIIF interact with RNAPII in a dynamic fashion in agreement with previously published reports. PMID:21048197
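
    A minimal sketch of one common spectral-count normalization (NSAF, normalized spectral abundance factor) used in label-free quantitation of this kind; the proteins, counts, and lengths below are invented illustrative values, and the study's own quantitation may differ in detail.

```python
# NSAF: spectral counts are scaled by protein length, then normalized so the
# values sum to one within a run, which makes runs comparable to each other.
spectral_counts = {"RPB1": 523, "RPB2": 498, "SPT5": 61, "TFG1": 45}   # made-up counts
protein_lengths = {"RPB1": 1733, "RPB2": 1224, "SPT5": 1063, "TFG1": 735}  # residues

saf = {p: spectral_counts[p] / protein_lengths[p] for p in spectral_counts}
total = sum(saf.values())
nsaf = {p: saf[p] / total for p in saf}

for protein, value in sorted(nsaf.items(), key=lambda kv: -kv[1]):
    print(f"{protein}\tNSAF = {value:.3f}")
```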

  18. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.

  19. Thermal Analysis of a Disposable, Instrument-Free DNA Amplification Lab-on-a-Chip Platform.

    PubMed

    Pardy, Tamás; Rang, Toomas; Tulp, Indrek

    2018-06-04

    Novel second-generation rapid diagnostics based on nucleic acid amplification tests (NAAT) offer performance metrics on par with clinical laboratories in detecting infectious diseases at the point of care. The diagnostic assay is typically performed within a Lab-on-a-Chip (LoC) component with integrated temperature regulation. However, constraints on device dimensions, cost and power supply inherent with the device format apply to temperature regulation as well. Thermal analysis on simplified thermal models for the device can help overcome these barriers by speeding up thermal optimization. In this work, we perform experimental thermal analysis on the simplified thermal model for our instrument-free, single-use LoC NAAT platform. The system is evaluated further by finite element modelling. Steady-state as well as transient thermal analysis are performed to evaluate the performance of a self-regulating polymer resin heating element in the proposed device geometry. Reaction volumes in the target temperature range of the amplification reaction are estimated in the simulated model to assess compliance with assay requirements. Using the proposed methodology, we demonstrated our NAAT device concept capable of performing loop-mediated isothermal amplification in the 20-25 °C ambient temperature range with 32 min total assay time.
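
    A crude lumped-capacitance sketch of the kind of transient estimate described above: a self-regulating heater drives a small reaction volume toward the amplification temperature while losing heat to the surroundings. All material and conductance values are assumed placeholders, not the device parameters from the paper.

```python
import numpy as np

# Lumped-capacitance warm-up estimate for a small reaction volume heated by a
# self-regulating element. All numbers are illustrative assumptions.
T_amb = 22.0                  # ambient temperature, deg C
T_set = 65.0                  # self-regulation temperature of the heater, deg C
V = 25e-9                     # reaction volume, m^3 (25 uL)
rho, cp = 1000.0, 4186.0      # water density (kg/m^3) and heat capacity (J/kg/K)
C = rho * cp * V              # thermal capacitance of the liquid, J/K
UA_heater = 5e-3              # heater-to-liquid conductance, W/K (assumed)
UA_loss = 1e-3                # liquid-to-ambient loss conductance, W/K (assumed)

dt, t_end = 1.0, 1800.0       # explicit Euler time stepping over 30 minutes
T = T_amb
for step in range(int(t_end / dt)):
    q_in = UA_heater * (T_set - T)     # heat delivered by the self-regulating element
    q_out = UA_loss * (T - T_amb)      # losses to the surroundings
    T += dt * (q_in - q_out) / C
    if step % 300 == 0:
        print(f"t = {step*dt:5.0f} s   T = {T:5.1f} C")
```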

  20. Finite element modelling to assess the effect of surface mounted piezoelectric patch size on vibration response of a hybrid beam

    NASA Astrophysics Data System (ADS)

    Rahman, N.; Alam, M. N.

    2018-02-01

    Vibration response analysis of a hybrid beam with surface mounted patch piezoelectric layer is presented in this work. A one dimensional finite element (1D-FE) model based on efficient layerwise (zigzag) theory is used for the analysis. The beam element has eight mechanical and a variable number of electrical degrees of freedom. The beams are also modelled in 2D-FE (ABAQUS) using a plane stress piezoelectric quadrilateral element for piezo layers and a plane stress quadrilateral element for the elastic layers of hybrid beams. Results are presented to assess the effect of size of piezoelectric patch layer on the free and forced vibration responses of thin and moderately thick beams under clamped-free and clamped-clamped configurations. The beams are subjected to unit step loading and harmonic loading to obtain the forced vibration responses. The vibration control using in phase actuation potential on piezoelectric patches is also studied. The 1D-FE results are compared with the 2D-FE results.
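
    For context, a much simplified one-dimensional finite element modal analysis of a purely elastic clamped-free (cantilever) beam is sketched below, without the piezoelectric coupling or zigzag kinematics of the model in the paper. Geometry, material values, and mesh size are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

# Euler-Bernoulli beam elements (2 nodes, 4 DOF: w1, th1, w2, th2) assembled
# into global stiffness and consistent mass matrices; natural frequencies of a
# clamped-free beam follow from the generalized eigenvalue problem K v = w^2 M v.
E, rho = 70e9, 2700.0              # aluminium-like modulus (Pa) and density (kg/m^3)
b, h, L_total = 0.02, 0.002, 0.3   # cross-section and length, m
A, I = b * h, b * h**3 / 12.0
n_el = 20
Le = L_total / n_el

def element_matrices(EI, rhoA, L):
    k = EI / L**3 * np.array([[ 12,    6*L,  -12,    6*L ],
                              [ 6*L, 4*L*L, -6*L,  2*L*L ],
                              [-12,   -6*L,   12,   -6*L ],
                              [ 6*L, 2*L*L, -6*L,  4*L*L ]])
    m = rhoA * L / 420.0 * np.array([[ 156,   22*L,    54,  -13*L ],
                                     [ 22*L, 4*L*L,  13*L, -3*L*L ],
                                     [ 54,    13*L,   156,  -22*L ],
                                     [-13*L, -3*L*L, -22*L, 4*L*L ]])
    return k, m

n_dof = 2 * (n_el + 1)
K = np.zeros((n_dof, n_dof))
M = np.zeros((n_dof, n_dof))
ke, me = element_matrices(E * I, rho * A, Le)
for e in range(n_el):
    idx = slice(2 * e, 2 * e + 4)
    K[idx, idx] += ke
    M[idx, idx] += me

# Clamped-free boundary condition: drop the two DOF at the clamped end.
free = np.arange(2, n_dof)
w2, _ = eigh(K[np.ix_(free, free)], M[np.ix_(free, free)])
freqs = np.sqrt(np.abs(w2[:3])) / (2 * np.pi)
print("first three natural frequencies [Hz]:", np.round(freqs, 1))
```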

  1. TU-CD-BRB-08: Radiomic Analysis of FDG-PET Identifies Novel Prognostic Imaging Biomarkers in Locally Advanced Pancreatic Cancer Patients Treated with SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Y; Shirato, H; Song, J

    2015-06-15

    Purpose: This study aims to identify novel prognostic imaging biomarkers in locally advanced pancreatic cancer (LAPC) using quantitative, high-throughput image analysis. Methods: 86 patients with LAPC receiving chemotherapy followed by SBRT were retrospectively studied. All patients had a baseline FDG-PET scan prior to SBRT. For each patient, we extracted 435 PET imaging features of five types: statistical, morphological, textural, histogram, and wavelet. These features went through redundancy checks, robustness analysis, as well as a prescreening process based on their concordance indices with respect to the relevant outcomes. We then performed principal component analysis on the remaining features (number ranged from 10 to 16), and fitted a Cox proportional hazard regression model using the first 3 principal components. Kaplan-Meier analysis was used to assess the ability to distinguish high versus low-risk patients separated by median predicted survival. To avoid overfitting, all evaluations were based on leave-one-out cross validation (LOOCV), in which each holdout patient was assigned to a risk group according to the model obtained from a separate training set. Results: For predicting overall survival (OS), the most dominant imaging features were wavelet coefficients. There was a statistically significant difference in OS between patients with predicted high and low-risk based on LOOCV (hazard ratio: 2.26, p<0.001). Similar imaging features were also strongly associated with local progression-free survival (LPFS) (hazard ratio: 1.53, p=0.026) on LOOCV. In comparison, neither SUVmax nor TLG was associated with LPFS (p=0.103, p=0.433) (Table 1). Results for progression-free survival and distant progression-free survival showed similar trends. Conclusion: Radiomic analysis identified novel imaging features that showed improved prognostic value over conventional methods. These features characterize the degree of intra-tumor heterogeneity reflected on FDG-PET images, and their biological underpinnings warrant further investigation. If validated in large, prospective cohorts, this method could be used to stratify patients based on individualized risk.
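
    The sketch below illustrates the leave-one-out structure described above: dimensionality reduction is re-fit on every training fold and the held-out patient is scored only with that fold's model. The feature matrix is random placeholder data and the first-component risk score stands in for a fitted Cox model, so this is a schematic of the validation logic rather than the study pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_features = 40, 435
X = rng.normal(size=(n_patients, n_features))   # placeholder imaging features

def fit_pca(X_train, n_components=3):
    """Mean-center the training fold and return its mean and PCA loadings."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:n_components].T              # (mean, loading matrix)

risk_scores = np.empty(n_patients)
for i in range(n_patients):
    train = np.delete(np.arange(n_patients), i)  # leave one patient out
    mu, W = fit_pca(X[train])
    # Project the held-out patient onto the training-fold components; a simple
    # linear combination stands in here for a Cox model fitted on the fold.
    z = (X[i] - mu) @ W
    risk_scores[i] = z.sum()

high_risk = risk_scores > np.median(risk_scores)
print("patients assigned to the high-risk group:", int(high_risk.sum()))
```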

  2. Circulating Tumor Cells in Breast Cancer Patients Treated by Neoadjuvant Chemotherapy: A Meta-analysis.

    PubMed

    Bidard, François-Clément; Michiels, Stefan; Riethdorf, Sabine; Mueller, Volkmar; Esserman, Laura J; Lucci, Anthony; Naume, Bjørn; Horiguchi, Jun; Gisbert-Criado, Rafael; Sleijfer, Stefan; Toi, Masakazu; Garcia-Saenz, Jose A; Hartkopf, Andreas; Generali, Daniele; Rothé, Françoise; Smerage, Jeffrey; Muinelo-Romay, Laura; Stebbing, Justin; Viens, Patrice; Magbanua, Mark Jesus M; Hall, Carolyn S; Engebraaten, Olav; Takata, Daisuke; Vidal-Martínez, José; Onstenk, Wendy; Fujisawa, Noriyoshi; Diaz-Rubio, Eduardo; Taran, Florin-Andrei; Cappelletti, Maria Rosa; Ignatiadis, Michail; Proudhon, Charlotte; Wolf, Denise M; Bauldry, Jessica B; Borgen, Elin; Nagaoka, Rin; Carañana, Vicente; Kraan, Jaco; Maestro, Marisa; Brucker, Sara Yvonne; Weber, Karsten; Reyal, Fabien; Amara, Dominic; Karhade, Mandar G; Mathiesen, Randi R; Tokiniwa, Hideaki; Llombart-Cussac, Antonio; Meddis, Alessandra; Blanche, Paul; d'Hollander, Koenraad; Cottu, Paul; Park, John W; Loibl, Sibylle; Latouche, Aurélien; Pierga, Jean-Yves; Pantel, Klaus

    2018-04-12

    We conducted a meta-analysis in nonmetastatic breast cancer patients treated by neoadjuvant chemotherapy (NCT) to assess the clinical validity of circulating tumor cell (CTC) detection as a prognostic marker. We collected individual patient data from 21 studies in which CTC detection by CellSearch was performed in early breast cancer patients treated with NCT. The primary end point was overall survival, analyzed according to CTC detection, using Cox regression models stratified by study. Secondary end points included distant disease-free survival, locoregional relapse-free interval, and pathological complete response. All statistical tests were two-sided. Data from patients were collected before NCT (n = 1574) and before surgery (n = 1200). CTC detection revealed one or more CTCs in 25.2% of patients before NCT; this was associated with tumor size (P < .001). The number of CTCs detected had a detrimental and decremental impact on overall survival (P < .001), distant disease-free survival (P < .001), and locoregional relapse-free interval (P < .001), but not on pathological complete response. Patients with one, two, three to four, and five or more CTCs before NCT displayed hazard ratios of death of 1.09 (95% confidence interval [CI] = 0.65 to 1.69), 2.63 (95% CI = 1.42 to 4.54), 3.83 (95% CI = 2.08 to 6.66), and 6.25 (95% CI = 4.34 to 9.09), respectively. In 861 patients with full data available, adding CTC detection before NCT increased the prognostic ability of multivariable prognostic models for overall survival (P < .001), distant disease-free survival (P < .001), and locoregional relapse-free interval (P = .008). CTC count is an independent and quantitative prognostic factor in early breast cancer patients treated by NCT. It complements current prognostic models based on tumor characteristics and response to therapy.
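
    The meta-analysis itself pooled individual patient data in Cox models stratified by study; as a simpler illustration of how per-study hazard ratios can be combined, the sketch below applies fixed-effect inverse-variance pooling to made-up study estimates.

```python
import numpy as np

# Fixed-effect inverse-variance pooling of per-study log hazard ratios.
# The hazard ratios and confidence intervals are illustrative numbers,
# not estimates reported in the meta-analysis.
hr = np.array([1.8, 2.4, 1.3, 3.1])        # per-study hazard ratios
ci_low = np.array([1.1, 1.5, 0.8, 1.9])    # lower 95% CI bounds
ci_high = np.array([2.9, 3.8, 2.1, 5.0])   # upper 95% CI bounds

log_hr = np.log(hr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE back-computed from the CI
weights = 1.0 / se**2
pooled = np.sum(weights * log_hr) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled HR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*pooled_se):.2f}"
      f"-{np.exp(pooled + 1.96*pooled_se):.2f})")
```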

  3. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.

  4. Experimental Verification of the Rudder-Free Stability Theory for an Airplane Model Equipped with Rudders Having Negative Floating Tendency and Negligible Friction

    NASA Technical Reports Server (NTRS)

    McKinney, Marion O.; Maggin, Bernard

    1944-01-01

    An investigation has been made in the Langley free-flight tunnel to obtain an experimental verification of the theoretical rudder-free stability characteristics of an airplane model equipped with conventional rudders having negative floating tendencies and negligible friction. The model used in the tests was equipped with a conventional single vertical tail having rudder area 40 percent of the vertical tail area. The model was tested both in free flight and mounted on a strut that allowed freedom only in yaw. Tests were made with three different amounts of rudder aerodynamic balance and with various values of mass, moment of inertia, and center-of-gravity location of the rudder. Most of the stability derivatives required for the theoretical calculations were determined from forced and free-oscillation tests of the particular model tested. The theoretical analysis showed that the rudder-free motions of an airplane consist largely of two oscillatory modes - a long-period oscillation somewhat similar to the normal rudder-fixed oscillation and a short-period oscillation introduced only when the rudder is set free. It was found possible in the tests to create lateral instability of the rudder-free short-period mode by large values of rudder mass parameters even though the rudder-fixed condition was highly stable. The results of the tests and calculation indicated that for most present-day airplanes having rudders of negative floating tendency, the rudder-free stability characteristics may be examined by simply considering the dynamic lateral stability using the value of the directional-stability parameter Cn(sub p) for the rudder-free condition in the conventional controls-fixed lateral-stability equations. For very large airplanes having relatively high values of the rudder mass parameters with respect to the rudder aerodynamic parameters, however, analysis of the rudder-free stability should be made with the complete equations of motion. Good agreement between calculated and measured rudder-free stability characteristics was obtained by use of the general rudder-free stability theory, in which four degrees of lateral freedom are considered. When this assumption is made that the rolling motions alone or the lateral and rolling motions may be neglected in the calculations of rudder-free stability, it is possible to predict satisfactorily the characteristics of the long-period (Dutch roll type) rudder-free oscillation for airplanes only when the effective-dihedral angle is small. With these simplifying assumptions, however, satisfactory prediction of the short-period oscillation may be obtained for any dihedral. Further simplification of the theory based on the assumption that the rudder moment of inertia might be disregarded was found to be invalid because this assumption made it impossible to calculate the characteristics of the short-period oscillations.

  5. Plumes in the mantle. [free air and isostatic gravity anomalies for geophysical interpretation]

    NASA Technical Reports Server (NTRS)

    Khan, M. A.

    1973-01-01

    Free air and isostatic gravity anomalies for the purposes of geophysical interpretation are presented. Evidence for the existence of hotspots in the mantle is reviewed. The proposed locations of these hotspots are not always associated with positive gravity anomalies. Theoretical analysis based on simplified flow models for the plumes indicates that unless the frictional viscosities are several orders of magnitude smaller than the present estimates of mantle viscosity or alternately, the vertical flows are reduced by about two orders of magnitude, the plume flow will generate implausibly high temperatures.

  6. When Does Model-Based Control Pay Off?

    PubMed

    Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J

    2016-08-01

    Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.

  7. Pooling-analysis on hMLH1 polymorphisms and cancer risk: evidence based on 31,484 cancer cases and 45,494 cancer-free controls.

    PubMed

    Li, Sha; Zheng, Yi; Tian, Tian; Wang, Meng; Liu, Xinghan; Liu, Kang; Zhai, Yajing; Dai, Cong; Deng, Yujiao; Li, Shanli; Dai, Zhijun; Lu, Jun

    2017-11-03

    To elucidate the true relationship between three hMLH1 polymorphisms (rs1800734, rs1799977, rs63750447) and cancer risk, we performed this meta-analysis based on overall published data up to May 2017, from PubMed, Web of Knowledge, VIP, WanFang and CNKI databases, and the references of the original studies or review articles. 57 publications including 31,484 cancer cases and 45,494 cancer-free controls were obtained. The quality assessment of six articles obtained a summarized score less than 6 in terms of the Newcastle-Ottawa Scale (NOS). All statistical analyses were calculated with the software STATA (Version 14.0; Stata Corp, College Station, TX). We found that all three polymorphisms can increase overall cancer risk, especially in Asians, under different genetic comparisons. In the subgroup analysis by cancer type, we found a moderate association between rs1800734 and the risk of gastric cancer (allele model: OR = 1.14, P = 0.017; homozygote model: OR = 1.33, P = 0.019; dominant model: OR = 1.27, P = 0.024) and lung cancer in recessive model (OR = 1.27, P = 0.024). The G allele of the rs1799977 polymorphism was associated with susceptibility to colorectal cancer (allele model: OR = 1.21, P = 0.023; dominant model: OR = 1.32, P <0.0001) and prostate cancer (dominant model: OR = 1.36, P <0.0001). Rs63750447 showed an increased risk of colorectal cancer, endometrial cancer and gastric cancer under all genetic models. These findings provide evidence that hMLH1 polymorphisms may be associated with cancer risk, especially in Asians.
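
    For reference, the basic quantity pooled under each genetic model is an odds ratio from a 2x2 genotype-by-status table; a minimal sketch with invented counts is shown below.

```python
import math

# Odds ratio and Wald 95% confidence interval from a 2x2 table of carriers vs
# non-carriers of a risk allele in cases and controls. The counts are
# illustrative, not taken from the included studies.
a, b = 820, 1180   # cases: carriers / non-carriers
c, d = 950, 1650   # controls: carriers / non-carriers

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {low:.2f}-{high:.2f}")
```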

  8. Influence of thermal annealing on the bulk scattering in giant-magnetoresistance spin-valve with nano-oxide layers

    NASA Astrophysics Data System (ADS)

    Nam, Chunghee; Jang, Youngman; Lee, Ki-Su; Shim, Jungjin; Cho, B. K.

    2006-04-01

    Based upon a bulk scattering model, we investigated the variation of giant magnetoresistance (GMR) behavior after thermal annealing at Ta=250 °C as a function of the top free layer thickness of a GMR spin valve with nano-oxide layers (NOLs). It was found that the enhancement of GMR ratio after thermal annealing is explained qualitatively in terms of the increase of active GMR region in the free layer and, simultaneously, the increase of intrinsic spin-scattering ratio. These effects are likely due to the improved specular reflection at the well-formed interface of NOL. Furthermore, we developed a modified phenomenological model for sheet conductance change (ΔG) in terms of the top free layer thickness. This modified model was found to be useful in the quantitative analysis of the variation of the active GMR region and the intrinsic spin-scattering properties. The two physical parameters were found to change consistently with the effects of thermal annealing on NOL.

  9. The QSAR study of flavonoid-metal complexes scavenging ·OH free radical

    NASA Astrophysics Data System (ADS)

    Wang, Bo-chu; Qian, Jun-zhen; Fan, Ying; Tan, Jun

    2014-10-01

    Flavonoid-metal complexes have antioxidant activities. However, quantitative structure-activity relationships (QSAR) of flavonoid-metal complexes and their antioxidant activities have still not been tackled. On the basis of 21 structures of flavonoid-metal complexes and their antioxidant activities for scavenging ·OH free radical, we optimised their structures using the Gaussian 03 software package and we subsequently calculated and chose 18 quantum chemistry descriptors such as dipole, charge and energy. Then we chose several quantum chemistry descriptors that are very important to the IC50 of flavonoid-metal complexes for scavenging ·OH free radical through the method of stepwise linear regression. Meanwhile, we obtained 4 new variables through the principal component analysis. Finally, we built the QSAR models based on those important quantum chemistry descriptors and the 4 new variables as the independent variables and the IC50 as the dependent variable using an Artificial Neural Network (ANN), and we validated the two models using experimental data. These results show that the two models in this paper are reliable and predictable.
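
    A rough stand-in for the descriptor-reduction-plus-ANN workflow described above, using scikit-learn with random placeholder descriptors and pseudo IC50 values; the component count, network size, and data are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Descriptor matrix -> standardization -> PCA -> small neural network regression.
# 21 "complexes" with 18 "descriptors" are random placeholders for computed
# quantum-chemical descriptors; y is a synthetic IC50-like response.
rng = np.random.default_rng(7)
X = rng.normal(size=(21, 18))
y = 2.0 + X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.2, size=21)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),                     # 4 principal components as ANN inputs
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 2))
```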

  10. WE-H-BRA-08: A Monte Carlo Cell Nucleus Model for Assessing Cell Survival Probability Based On Particle Track Structure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C

    Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm for assessment of the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicate more severe damage by high-LET radiation than low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the L-Q behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation between simulated DNA damage to cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.
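
    The clustering step can be illustrated with scikit-learn's DBSCAN applied to synthetic 3D energy-deposition coordinates; the eps and min_samples values and the event geometry below are assumed for illustration and are not the parameters used in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Cluster simulated energy-deposition events in 3D and count dense clusters as
# candidate sites of clustered damage. Coordinates are random placeholders.
rng = np.random.default_rng(3)
background = rng.uniform(0, 1000, size=(400, 3))                   # sparse events, nm
hot_spot = rng.normal(loc=[500, 500, 500], scale=5, size=(30, 3))  # one dense cluster
events = np.vstack([background, hot_spot])

labels = DBSCAN(eps=10.0, min_samples=5).fit_predict(events)       # -1 marks noise
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("candidate clustered-damage sites:", n_clusters)
```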

  11. Impact of implant support on mandibular free-end base removable partial denture: theoretical study.

    PubMed

    Oh, Won-suk; Oh, Tae-Ju; Park, Ju-mi

    2016-02-01

    This study investigated the impact of implant support on the development of shear force and bending moment in mandibular free-end base removable partial dentures (RPDs). Three theoretical test models of unilateral mandibular free-end base RPDs were constructed to represent the base of tooth replacement, as follows: Model 1: first and second molars (M1 and M2); Model 2: second premolar (P2), M1, and M2; and Model 3: first premolar (P1), P2, M1, and M2. The implant support was located at either the M1 or M2 site. The occlusal loading was concentrated at each replacement tooth to calculate the stress resultants developed in the RPD models using the free-body diagrams of shear force and bending moment. There was a trend of reduction in the peak shear force and bending moment when the base was supported by an implant. However, the degree of reduction varied with the location of implant support. The moment reduced by 76% in Model 1, 58% in Model 2, and 42% in Model 3, when the implant location shifted from M1 to M2 sites. The shear forces and bending moments subjected to mandibular free-end base RPDs were found to decrease with the addition of implant support. However, the impact of implant support varied with the location of the implant in this theoretical study. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. When human walking becomes random walking: fractal analysis and modeling of gait rhythm fluctuations

    NASA Astrophysics Data System (ADS)

    Hausdorff, Jeffrey M.; Ashkenazy, Yosef; Peng, Chang-K.; Ivanov, Plamen Ch.; Stanley, H. Eugene; Goldberger, Ary L.

    2001-12-01

    We present a random walk, fractal analysis of the stride-to-stride fluctuations in the human gait rhythm. The gait of healthy young adults is scale-free with long-range correlations extending over hundreds of strides. This fractal scaling changes characteristically with maturation in children and older adults and becomes almost completely uncorrelated with certain neurologic diseases. Stochastic modeling of the gait rhythm dynamics, based on transitions between different “neural centers”, reproduces distinctive statistical properties of the gait pattern. By tuning one model parameter, the hopping (transition) range, the model can describe alterations in gait dynamics from childhood to adulthood - including a decrease in the correlation and volatility exponents with maturation.
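
    Scale-free, long-range correlations of this kind are commonly quantified with detrended fluctuation analysis (DFA); a compact implementation is sketched below on a synthetic stride-interval series, which merely stands in for real gait data.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: fluctuation F(n) for each window size n."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())              # integrated (profile) series
    F = []
    for n in scales:
        rms = []
        for w in range(len(y) // n):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)     # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    return np.array(F)

# Synthetic "stride interval" series: white noise plus a slow drift.
rng = np.random.default_rng(0)
strides = 1.1 + 0.02 * rng.standard_normal(4096) + 0.01 * np.sin(np.arange(4096) / 200)
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa(strides, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # slope = scaling exponent
print("DFA scaling exponent alpha ~", round(alpha, 2))
```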

  13. Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.

    PubMed

    Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C

    2006-02-28

    We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.

  14. Measurement of the aerothermodynamic state in a high enthalpy plasma wind-tunnel flow

    NASA Astrophysics Data System (ADS)

    Hermann, Tobias; Löhle, Stefan; Zander, Fabian; Fasoulas, Stefanos

    2017-11-01

    This paper presents spatially resolved measurements of absolute particle densities of N2, N2+, N, O, N+, O+, e- and excitation temperatures of electronic, rotational and vibrational modes of an air plasma free stream. All results are based on optical emission spectroscopy data. The measured parameters are combined to determine the local mass-specific enthalpy of the free stream. The analysis of the radiative transport, relative and absolute intensities, and spectral shape is used to determine various thermochemical parameters. The model uncertainty of each analysis method is assessed. The plasma flow is shown to be close to equilibrium. The strongest deviations from equilibrium occur for N, N+ and N2+ number densities in the free stream. Additional measurements of the local mass-specific enthalpy are conducted using a mass injection probe as well as a heat flux and total pressure probe. The agreement between all methods of enthalpy determination is good.

  15. Force-free field modeling of twist and braiding-induced magnetic energy in an active-region corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thalmann, J. K.; Tiwari, S. K.; Wiegelmann, T., E-mail: julia.thalmann@uni-graz.at

    2014-01-01

    The theoretical concept that braided magnetic field lines in the solar corona may dissipate a sufficient amount of energy to account for the brightening observed in the active-region (AR) corona has only recently been substantiated by high-resolution observations. From the analysis of coronal images obtained with the High Resolution Coronal Imager, first observational evidence of the braiding of magnetic field lines was reported by Cirtain et al. (hereafter CG13). We present nonlinear force-free reconstructions of the associated coronal magnetic field based on Solar Dynamics Observatory/Helioseismic and Magnetic Imager vector magnetograms. We deliver estimates of the free magnetic energy associated with a braided coronal structure. Our model results suggest (∼100 times) more free energy at the braiding site than analytically estimated by CG13, strengthening the possibility of the AR corona being heated by field line braiding. We were able to appropriately assess the coronal free energy by using vector field measurements and we attribute the lower energy estimate of CG13 to the underestimated (by a factor of 10) azimuthal field strength. We also quantify the increase in the overall twist of a flare-related flux rope that was noted by CG13. From our models we find that the overall twist of the flux rope increased by about half a turn within 12 minutes. Unlike another method to which we compare our results, we evaluate the winding of the flux rope's constituent field lines around each other purely based on their modeled coronal three-dimensional field line geometry. To our knowledge, this is done for the first time here.

  18. A neural network-based algorithm for predicting stone-free status after ESWL therapy

    PubMed Central

    Seckiner, Ilker; Seckiner, Serap; Sen, Haluk; Bayrak, Omer; Dogan, Kazım; Erturhan, Sakip

    2017-01-01

    Objective: The prototype artificial neural network (ANN) model was developed using data from patients with renal stone, in order to predict stone-free status and to help in planning treatment with Extracorporeal Shock Wave Lithotripsy (ESWL) for kidney stones. Materials and Methods: Data were collected from 203 patients, including gender, single or multiple nature of the stone, location of the stone, infundibulopelvic angle, primary or secondary nature of the stone, status of hydronephrosis, stone size after ESWL, age, size, skin-to-stone distance, stone density and creatinine, for a total of eleven variables. Regression analysis and the ANN method were applied to predict treatment success using the same series of data. Results: Subsequently, patients were divided into three groups by neural network software, in order to implement the ANN: training group (n=139), validation group (n=32), and the test group (n=32). ANN analysis demonstrated that the prediction accuracy of the stone-free rate was 99.25% in the training group, 85.48% in the validation group, and 88.70% in the test group. Conclusions: Successful results were obtained to predict the stone-free rate, with the help of the ANN model designed by using a series of data collected from real patients in whom ESWL was implemented to help in planning treatment for kidney stones. PMID:28727384
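
    A schematic version of the training/validation/test workflow with a small feed-forward network is sketched below using scikit-learn; the features and outcomes are random placeholders for the eleven clinical variables, and the group sizes simply mirror those quoted above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier

# Placeholder data: 203 "patients" with 11 synthetic features and a synthetic
# stone-free label loosely tied to two of the features.
rng = np.random.default_rng(42)
X = rng.normal(size=(203, 11))
y = (X[:, 0] - 0.8 * X[:, 4] + rng.normal(scale=0.5, size=203) > 0).astype(int)

# Split into training (139), validation (32), and test (32) groups.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=139, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=32, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0))
clf.fit(X_train, y_train)
print("validation accuracy:", round(clf.score(X_val, y_val), 3))
print("test accuracy:      ", round(clf.score(X_test, y_test), 3))
```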

  17. Hybrid modeling and empirical analysis of automobile supply chain network

    NASA Astrophysics Data System (ADS)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on the connection mechanism of nodes which automatically select upstream and downstream agents, a simulation model for dynamic evolutionary process of consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in the GIS-based map. First, the rationality is proved by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate various characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. By doing so, it verifies that the model is a typical scale-free network and small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the model construction and simulation of the system by means of combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as theoretical bases and experience for the supply chain analysis of auto companies.
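
    The network-level measurements mentioned above (mean path length, clustering, degree distribution) can be computed with networkx; the sketch below uses a Barabási-Albert graph as a stand-in for the simulated supply-chain network.

```python
import networkx as nx

# Stand-in scale-free network; a real analysis would build the graph from the
# simulated agent-to-agent links rather than generate it synthetically.
G = nx.barabasi_albert_graph(500, 2, seed=1)

print("mean shortest path length:  ", round(nx.average_shortest_path_length(G), 2))
print("mean clustering coefficient:", round(nx.average_clustering(G), 3))
degrees = [d for _, d in G.degree()]
print("max degree:", max(degrees), " mean degree:", round(sum(degrees) / len(degrees), 2))
```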

  18. The analysis of single-electron orbits in a free electron laser based upon a rectangular hybrid wiggler

    NASA Astrophysics Data System (ADS)

    Kordbacheh, A.; Ghahremaninezhad, Roghayeh; Maraghechi, B.

    2012-09-01

    A three-dimensional analysis of a novel free-electron laser (FEL) based upon a rectangular hybrid wiggler (RHW) is presented. This RHW is designed in a configuration composed of rectangular rings with alternating ferrite and dielectric spacers immersed in a solenoidal magnetic field. An analytic model of RHW is introduced by solution of Laplace's equation for the magnetostatic fields under the appropriate boundary conditions. The single-electron orbits in combined RHW and axial guide magnetic fields are studied when only the first and the third spatial harmonic components of the RHW field are taken into account and the higher order terms are ignored. The results indicate that the third spatial harmonic leads to group III orbits with a strong negative mass regime particularly in large solenoidal magnetic fields. RHW is found to be a promising candidate with favorable characteristics to be used in microwave FEL.

  19. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition of the feature map to have an asymptotically equivalent convergence point of estimated parameters, referred to as the vicarious map. As a demonstration to find vicarious maps, we consider the feature space, which limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. A numerical model on thermodynamic analysis of free piston Stirling engines

    NASA Astrophysics Data System (ADS)

    Mou, Jian; Hong, Guotong

    2017-02-01

    In this paper, a new numerical thermodynamic model based on the energy conservation law has been used to analyze the free piston Stirling engine. In the model, all data were taken from a real free piston Stirling engine which has been built in our laboratory. The energy conservation equations have been applied to the expansion space and compression space of the engine. The equation includes internal energy, input power, output power, enthalpy and the heat losses. The heat losses include regenerative heat conduction loss, shuttle heat loss, seal leakage loss and the cavity wall heat conduction loss. The numerical results show that the temperature of the expansion space and the temperature of the compression space vary with time. The higher the regeneration effectiveness, the higher the efficiency and the larger the output work. It is also found that under different initial pressures, the heat source temperature, phase angle and engine work frequency have different effects on the engine's efficiency and power. As a result, the model is expected to be a useful tool for simulation, design and optimization of Stirling engines.

  1. Limits to Cloud Susceptibility

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.

    2002-01-01

    1-kilometer AVHRR observations of ship tracks in low-level clouds off the west coast of the U.S. were used to determine limits for the degree to which clouds might be altered by increases in anthropogenic aerosols. Hundreds of tracks were analyzed to determine whether the changes in droplet radii, visible optical depths, and cloud top altitudes that result from the influx of particles from underlying ships were consistent with expectations based on simple models for the indirect effect of aerosols. The models predict substantial increases in sunlight reflected by polluted clouds due to the increases in droplet numbers and cloud liquid water that result from the elevated particle concentrations. Contrary to the model predictions, the analysis of ship tracks revealed a 15-20% reduction in liquid water for the polluted clouds. Studies performed with a large-eddy cloud simulation model suggested that the shortfall in cloud liquid water found in the satellite observations might be attributed to the restriction that the 1-kilometer pixels be completely covered by either polluted or unpolluted cloud. The simulation model revealed that a substantial fraction of the indirect effect is caused by a horizontal redistribution of cloud water in the polluted clouds. Cloud-free gaps in polluted clouds fill in with cloud water while the cloud-free gaps in the surrounding unpolluted clouds remain cloud-free. By limiting the analysis to only overcast pixels, the current study failed to account for the gap-filling predicted by the simulation model. This finding and an analysis of the spatial variability of marine stratus suggest new ways to analyze ship tracks to determine the limit to which particle pollution will alter the amount of sunlight reflected by clouds.

  2. Guidelines for the analysis of free energy calculations

    PubMed Central

    Klimovich, Pavel V.; Shirts, Michael R.; Mobley, David L.

    2015-01-01

    Free energy calculations based on molecular dynamics (MD) simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub at https://github.com/choderalab/pymbar-examples, that implements the analysis practices reviewed here for several reference simulation packages, which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope these tools and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations. PMID:25808134
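
    As a minimal example of one estimator covered by such an analysis, the sketch below performs thermodynamic integration by trapezoidal quadrature over synthetic <dU/dλ> samples; the λ schedule, sample values, and error treatment are illustrative assumptions, not output of the tool described above.

```python
import numpy as np

# Thermodynamic integration: Delta G = integral over lambda of <dU/dlambda>.
# Per-window samples are synthetic; real samples would be parsed from
# simulation output.
rng = np.random.default_rng(0)
lambdas = np.linspace(0.0, 1.0, 11)
dudl = [rng.normal(loc=10.0 * (1.0 - lam), scale=1.5, size=500) for lam in lambdas]

means = np.array([s.mean() for s in dudl])                       # <dU/dlambda> per window
sems = np.array([s.std(ddof=1) / np.sqrt(len(s)) for s in dudl]) # standard errors

dG = np.trapz(means, lambdas)

# Trapezoid-rule weights: half-spacing at the end points, averaged spacing inside,
# used here for a simple propagated uncertainty (assumes uncorrelated windows).
weights = np.empty_like(lambdas)
weights[0] = (lambdas[1] - lambdas[0]) / 2.0
weights[-1] = (lambdas[-1] - lambdas[-2]) / 2.0
weights[1:-1] = (lambdas[2:] - lambdas[:-2]) / 2.0
err = np.sqrt(np.sum((weights * sems) ** 2))

print(f"Delta G (TI) = {dG:.2f} +/- {err:.2f} kcal/mol")
```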

  3. Linear analysis of a force reflective teleoperator

    NASA Technical Reports Server (NTRS)

    Biggers, Klaus B.; Jacobsen, Stephen C.; Davis, Clark C.

    1989-01-01

    Complex force reflective teleoperation systems are often very difficult to analyze due to the large number of components and control loops involved. One mode of a force reflective teleoperator is described. An analysis of the performance of the system based on a linear analysis of the general full order model is presented. Reduced order models are derived and correlated with the full order models. Basic effects of force feedback and position feedback are examined and the effects of time delays between the master and slave are studied. The results show that with symmetrical position-position control of teleoperators, a basic trade off must be made between the intersystem stiffness of the teleoperator, and the impedance felt by the operator in free space.

  4. Equation-free analysis of agent-based models and systematic parameter determination

    NASA Astrophysics Data System (ADS)

    Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.

    2016-12-01

    Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic and can be applied to any 'black-box' simulator and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with a corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order of magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABMs, revealing parameter dependence, bifurcation and stability analysis of these complex systems, giving a deep understanding of the dynamical behaviour of the models in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for configuring the system-specific EF parameters.

  5. Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2015-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE-model data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.

  6. Evidence for Model-based Computations in the Human Amygdala during Pavlovian Conditioning

    PubMed Central

    Prévost, Charlotte; McNamee, Daniel; Jessup, Ryan K.; Bossaerts, Peter; O'Doherty, John P.

    2013-01-01

    Contemporary computational accounts of instrumental conditioning have emphasized a role for a model-based system in which values are computed with reference to a rich model of the structure of the world, and a model-free system in which values are updated without encoding such structure. Much less studied is the possibility of a similar distinction operating at the level of Pavlovian conditioning. In the present study, we scanned human participants while they participated in a Pavlovian conditioning task with a simple structure while measuring activity in the human amygdala using a high-resolution fMRI protocol. After fitting a model-based algorithm and a variety of model-free algorithms to the fMRI data, we found evidence for the superiority of a model-based algorithm in accounting for activity in the amygdala compared to the model-free counterparts. These findings support an important role for model-based algorithms in describing the processes underpinning Pavlovian conditioning, as well as providing evidence of a role for the human amygdala in model-based inference. PMID:23436990

  7. From Creatures of Habit to Goal-Directed Learners: Tracking the Developmental Emergence of Model-Based Reinforcement Learning.

    PubMed

    Decker, Johannes H; Otto, A Ross; Daw, Nathaniel D; Hartley, Catherine A

    2016-06-01

    Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. © The Author(s) 2016.

  8. Photoproduction of π0 mesons off protons and neutrons in the second and third nucleon resonance regions

    DOE PAGES

    Dieterle, M.; Werthmüller, D.; Abt, S.; ...

    2018-06-21

    Background: Photoproduction of mesons off quasi-free nucleons bound in the deuteron allows one to study the electromagnetic excitation spectrum of the neutron and the isospin structure of the excitation of nucleon resonances. The database for such reactions is much more sparse than for free proton targets. Purpose: Study experimentally single π0 photoproduction off quasi-free nucleons from the deuteron. Investigate nuclear effects by a comparison of the results for free protons and quasi-free protons. Use the quasi-free neutron data (corrected for nuclear effects) to test the predictions of reaction models and partial wave analysis (PWA) for γn → nπ0 derived from the analysis of the other isospin channels. Methods: High statistics angular distributions and total cross sections for the photoproduction of π0 mesons off the deuteron with coincident detection of recoil nucleons have been measured for the first time. The experiment was performed at the tagged photon beam of the Mainz Microtron (MAMI) accelerator for photon energies between 0.45 GeV and 1.4 GeV, using an almost 4π electromagnetic calorimeter composed of the Crystal Ball and TAPS detectors. A complete kinematic reconstruction of the final state removed the effects of Fermi motion. Results: Significant effects from final state interactions (FSI) were observed for participant protons in comparison to free proton targets (between 30% and almost 40%). The data in coincidence with recoil neutrons were corrected for such effects under the assumption that they are identical for participant protons and neutrons. Reaction model predictions and PWA for γn → nπ0, based on fits to data for the other isospin channels, disagreed between themselves and no model provided a good description of the new data. Conclusions: The results demonstrate clearly the importance of a measurement of the fully neutral final state for the isospin decomposition of the cross section. Model refits, for example from the Bonn-Gatchina analysis, show that the new and the previous data for the other three isospin channels can be simultaneously described when the contributions of several partial waves are modified. Finally, the results are also relevant for the suppression of the higher resonance bumps in total photoabsorption on nuclei, which are not well understood.

  9. Photoproduction of π0 mesons off protons and neutrons in the second and third nucleon resonance regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dieterle, M.; Werthmüller, D.; Abt, S.

    Background: Photoproduction of mesons off quasi-free nucleons bound in the deuteron allows one to study the electromagnetic excitation spectrum of the neutron and the isospin structure of the excitation of nucleon resonances. The database for such reactions is much more sparse than for free proton targets. Purpose: Study experimentally single π0 photoproduction off quasi-free nucleons from the deuteron. Investigate nuclear effects by a comparison of the results for free protons and quasi-free protons. Use the quasi-free neutron data (corrected for nuclear effects) to test the predictions of reaction models and partial wave analysis (PWA) for γn → nπ0 derived from the analysis of the other isospin channels. Methods: High statistics angular distributions and total cross sections for the photoproduction of π0 mesons off the deuteron with coincident detection of recoil nucleons have been measured for the first time. The experiment was performed at the tagged photon beam of the Mainz Microtron (MAMI) accelerator for photon energies between 0.45 GeV and 1.4 GeV, using an almost 4π electromagnetic calorimeter composed of the Crystal Ball and TAPS detectors. A complete kinematic reconstruction of the final state removed the effects of Fermi motion. Results: Significant effects from final state interactions (FSI) were observed for participant protons in comparison to free proton targets (between 30% and almost 40%). The data in coincidence with recoil neutrons were corrected for such effects under the assumption that they are identical for participant protons and neutrons. Reaction model predictions and PWA for γn → nπ0, based on fits to data for the other isospin channels, disagreed between themselves and no model provided a good description of the new data. Conclusions: The results demonstrate clearly the importance of a measurement of the fully neutral final state for the isospin decomposition of the cross section. Model refits, for example from the Bonn-Gatchina analysis, show that the new and the previous data for the other three isospin channels can be simultaneously described when the contributions of several partial waves are modified. Finally, the results are also relevant for the suppression of the higher resonance bumps in total photoabsorption on nuclei, which are not well understood.

  10. A rapid solvent accessible surface area estimator for coarse grained molecular simulations.

    PubMed

    Wei, Shuai; Brooks, Charles L; Frank, Aaron T

    2017-06-05

    The rapid and accurate calculation of solvent accessible surface area (SASA) is extremely useful in the energetic analysis of biomolecules. For example, SASA models can be used to estimate the transfer free energy associated with biophysical processes, and when combined with coarse-grained simulations, can be particularly useful for accounting for solvation effects within the framework of implicit solvent models. In such cases, a fast and accurate, residue-wise SASA predictor is highly desirable. Here, we develop a predictive model that estimates SASAs based on Cα-only protein structures. Through an extensive comparison between this method and a comparable method, POPS-R, we demonstrate that our new method, Protein-Cα Solvent Accessibilities or PCASA, shows better performance, especially for unfolded conformations of proteins. We anticipate that this model will be quite useful in the efficient inclusion of SASA-based solvent free energy estimations in coarse-grained protein folding simulations. PCASA is made freely available to the academic community at https://github.com/atfrank/PCASA. © 2017 Wiley Periodicals, Inc.

  11. Binding free energy analysis of protein-protein docking model structures by evERdock.

    PubMed

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-14

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies for complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selected structures similar to the native complex structure (near-native decoys) as the lowest binding free energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energy of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show low interface root-mean-square distance but relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys were improved. We also investigated the effect of solute entropy on binding free energy and found that consideration of the entropy term does not necessarily improve the evaluations of decoys using normal mode analysis for the entropy calculation.

  12. Binding free energy analysis of protein-protein docking model structures by evERdock

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-01

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies for complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selected structures similar to the native complex structure (near-native decoys) as the lowest binding free energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energy of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show low interface root-mean-square distance but relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys were improved. We also investigated the effect of solute entropy on binding free energy and found that consideration of the entropy term does not necessarily improve the evaluations of decoys using normal mode analysis for the entropy calculation.

  13. Special issue on Military Operations Research Society (MORS) Symposium (80th): Expanding the Boundaries of National Security Analysis (Phalanx: The Bulletin of Military Operations Research. Volume 45, Number 2, June 2012)

    DTIC Science & Technology

    2012-06-01

    brianmccue@alum.mit.edu Letters to the Editor, John Willis, Augustine Consulting, Inc., jwillis@aciedge.com Modeling and Simulation, James N. Bexfield, FS, OSD...concepts that are now being applied to modern analytical thinking. The tutorials are free to MORS members and $75 for the day for nonmembers. The...Overview of Agent-based Modeling and Simulation and Complex Adaptive Systems • Visual Data Analysis • Analyzing Combat Identification • Guidelines for

  14. Free-energy analysis of spin models on hyperbolic lattice geometries.

    PubMed

    Serina, Marcel; Genzor, Jozef; Lee, Yoju; Gendiar, Andrej

    2016-04-01

    We investigate relations between spatial properties of the free energy and the radius of Gaussian curvature of the underlying curved lattice geometries. For this purpose we derive recurrence relations for the analysis of the free energy normalized per lattice site of various multistate spin models in thermal equilibrium on distinct non-Euclidean surface lattices of infinite size. Whereas the free energy is calculated numerically by means of the corner transfer matrix renormalization group algorithm, the radius of curvature has an analytic expression. Two tasks are considered in this work. First, we search for the lattice geometry that minimizes the free energy per site. We conjecture that only the flat Euclidean geometry yields the minimal free energy per site regardless of the spin model. Second, the relations among the free energy, the radius of curvature, and the phase transition temperatures are analyzed. We find that both the free energy and the phase transition temperature inherit the structure of the lattice geometry and asymptotically approach the profile of the Gaussian radius of curvature. This achievement opens new perspectives in the AdS-CFT correspondence theories.

  15. An optimized rapid bisulfite conversion method with high recovery of cell-free DNA.

    PubMed

    Yi, Shaohua; Long, Fei; Cheng, Juanbo; Huang, Daixin

    2017-12-19

    Methylation analysis of cell-free DNA is an encouraging tool for tumor diagnosis, monitoring and prognosis. The sensitivity of methylation analysis is critical because of the tiny amounts of cell-free DNA available in plasma. Most current methods of DNA methylation analysis are based on the difference in bisulfite-mediated deamination between cytosine and 5-methylcytosine. However, the recovery of bisulfite-converted DNA with current methods is very poor for the methylation analysis of cell-free DNA. We optimized a rapid method for the crucial steps of bisulfite conversion with high recovery of cell-free DNA. A rapid deamination step and alkaline desulfonation were combined with purification of the DNA on a silica column. The conversion efficiency and recovery of bisulfite-treated DNA were investigated by droplet digital PCR. The optimized reaction achieves complete cytosine conversion in 30 min at 70 °C and about 65% recovery of bisulfite-treated cell-free DNA, which is higher than current methods. The method allows high recovery from low levels of bisulfite-treated cell-free DNA, enhancing the sensitivity of methylation detection from cell-free DNA.

  16. Free energies from dynamic weighted histogram analysis using unbiased Markov state model.

    PubMed

    Rosta, Edina; Hummer, Gerhard

    2015-01-13

    The weighted histogram analysis method (WHAM) is widely used to obtain accurate free energies from biased molecular simulations. However, WHAM free energies can exhibit significant errors if some of the biasing windows are not fully equilibrated. To account for the lack of full equilibration, we develop the dynamic histogram analysis method (DHAM). DHAM uses a global Markov state model to obtain the free energy along the reaction coordinate. A maximum likelihood estimate of the Markov transition matrix is constructed by joint unbiasing of the transition counts from multiple umbrella-sampling simulations along discretized reaction coordinates. The free energy profile is the stationary distribution of the resulting Markov matrix. For this matrix, we derive an explicit approximation that does not require the usual iterative solution of WHAM. We apply DHAM to model systems, a chemical reaction in water treated using quantum-mechanics/molecular-mechanics (QM/MM) simulations, and the Na(+) ion passage through the membrane-embedded ion channel GLIC. We find that DHAM gives accurate free energies even in cases where WHAM fails. In addition, DHAM provides kinetic information, which we here use to assess the extent of convergence in each of the simulation windows. DHAM may also prove useful in the construction of Markov state models from biased simulations in phase-space regions with otherwise low population.
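
    The core DHAM recipe described above—estimate a Markov transition matrix from transition counts and read the free energy off its stationary distribution—can be illustrated with a short sketch. This is a hypothetical, minimal illustration assuming the counts have already been unbiased and binned along the reaction coordinate; it is not the authors' implementation and omits the joint unbiasing of multiple umbrella windows.

```python
# Minimal sketch (assumed inputs): free-energy profile from an already-unbiased
# Markov transition-count matrix over discretized reaction-coordinate bins.
import numpy as np

def free_energy_from_counts(counts, kT=1.0):
    """counts[i, j]: observed transitions from bin i to bin j."""
    T = counts / counts.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    evals, evecs = np.linalg.eig(T.T)                 # stationary distribution = left eigenvector
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    return -kT * np.log(pi)                           # free energy up to an additive constant

# Toy 3-bin count matrix (made up).
counts = np.array([[90., 10., 0.], [12., 70., 18.], [0., 20., 80.]])
print(free_energy_from_counts(counts))
```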

  17. Effects of long-term representations on free recall of unrelated words

    PubMed Central

    Katkov, Mikhail; Romani, Sandro

    2015-01-01

    Human memory stores vast amounts of information. Yet recalling this information is often challenging when specific cues are lacking. Here we consider an associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts that different items stored in memory have different probabilities of being recalled, depending on the size of their representations. Moreover, items with high recall probability tend to be recalled earlier and suppress other items. We performed an analysis of a large data set on free recall and found a highly specific pattern of statistical dependencies predicted by the model, in particular negative correlations between the number of words recalled and their average recall probability. Taken together, experimental and modeling results presented here reveal complex interactions between memory items during recall that severely constrain recall capacity. PMID:25593296

  18. ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X. M.; Zhang, M.; Su, J. T., E-mail: xmzhang@nao.cas.cn

    It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that just measurement error alone can significantly influence the results of estimates of force-freeness, due to the fact that measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, would result in wrong judgments of the force-freeness: a truly force-free field may be mistakenly estimated as being non-force-free and a truly non-force-free field may be estimated as being force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further away from being force-free than it currently appears to be.
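
    The "formula for estimating force-freeness" referred to above is commonly written in terms of the net Lorentz force inferred from the photospheric vector field (criteria usually attributed to Low and to Metcalf and co-workers). The sketch below assumes that conventional form and uses hypothetical array inputs; it is only meant to show why errors in the horizontal components Bx and By propagate strongly into the ratios.

```python
# Illustrative force-free check from a vector magnetogram (Bx, By, Bz are 2D arrays).
# The normalization and signs follow the commonly quoted Maxwell-stress criteria;
# treat the exact form as an assumption rather than the paper's own formula.
import numpy as np

def force_free_ratios(Bx, By, Bz):
    Fx = -np.sum(Bx * Bz)
    Fy = -np.sum(By * Bz)
    Fz = 0.5 * np.sum(Bx**2 + By**2 - Bz**2)
    F0 = 0.5 * np.sum(Bx**2 + By**2 + Bz**2)          # reference magnitude
    return abs(Fx) / F0, abs(Fy) / F0, abs(Fz) / F0   # "force-free" usually read as all << 1

# Made-up field (gauss) just to exercise the function.
rng = np.random.default_rng(0)
Bz = rng.normal(0, 500, (64, 64))
Bx = 0.1 * Bz + rng.normal(0, 50, (64, 64))
By = 0.1 * Bz + rng.normal(0, 50, (64, 64))
print(force_free_ratios(Bx, By, Bz))
```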

  19. EEG and MEG data analysis in SPM8.

    PubMed

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.

  20. EEG and MEG Data Analysis in SPM8

    PubMed Central

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221

  1. Simulation-Driven Design Approach for Design and Optimization of Blankholder

    NASA Astrophysics Data System (ADS)

    Sravan, Tatipala; Suddapalli, Nikshep R.; Johan, Pilthammar; Mats, Sigvant; Christian, Johansson

    2017-09-01

    Reliable design of stamping dies is desired for efficient and safe production. The design of stamping dies is today based mostly on casting feasibility, although it can also be based on criteria such as fatigue, stiffness, safety, and economy. The current work presents an approach built on Simulation Driven Design, enabling Design Optimization to address this issue. A structural finite element model of a stamping die, used to produce doors for Volvo V70/S80 car models, is studied. This die had developed cracks during its usage. To understand the behaviour of the stress distribution in the stamping die, structural analysis of the die is conducted and critical regions with high stresses are identified. The results from structural FE-models are compared with analytical calculations pertaining to the fatigue properties of the material. To arrive at an optimum design with increased stiffness and lifetime, topology and free-shape optimization are performed. In the optimization routine, identified critical regions of the die are set as design variables. Other optimization variables are set to maintain manufacturability of the resultant stamping die. Thereafter a CAD model is built based on the geometrical results from the topology and free-shape optimizations. Then the CAD model is subjected to structural analysis to visualize the new stress distribution. This process is iterated until a satisfactory result is obtained. The final results show a reduction in stress levels of 70% with a more homogeneous distribution. Even though the mass of the die is increased by 17%, overall, a stiffer die with a longer lifetime is obtained. Finally, by reflecting on the entire process, a coordinated approach to handle such situations efficiently is presented.

  2. Pseudospectral modeling and dispersion analysis of Rayleigh waves in viscoelastic media

    USGS Publications Warehouse

    Zhang, K.; Luo, Y.; Xia, J.; Chen, C.

    2011-01-01

    Multichannel Analysis of Surface Waves (MASW) is one of the most widely used techniques in environmental and engineering geophysics to determine shear-wave velocities and dynamic properties, and it is based on the elastic layered system theory. Wave propagation in the Earth, however, has been recognized as viscoelastic, and the propagation of Rayleigh waves presents substantial differences in viscoelastic media as compared with elastic media. Therefore, it is necessary to carry out numerical simulation and dispersion analysis of Rayleigh waves in viscoelastic media to better understand Rayleigh-wave behaviors in the real world. We apply a pseudospectral method to the calculation of the spatial derivatives using a Chebyshev difference operator in the vertical direction and a Fourier difference operator in the horizontal direction, based on the velocity-stress elastodynamic equations and relations of linear viscoelastic solids. This approach stretches the spatial discrete grid to have a minimum grid size near the free surface so that high accuracy and resolution are achieved at the free surface, which allows an effective incorporation of the free surface boundary conditions since the Chebyshev method is nonperiodic. We first use an elastic homogeneous half-space model to demonstrate the accuracy of the pseudospectral method by comparison with the analytical solution, and verify the correctness of the numerical modeling results for a viscoelastic half-space by comparing the Rayleigh-wave phase velocities between the theoretical values and the dispersive image generated by high-resolution linear Radon transform. We then simulate three types of two-layer models to analyze dispersive-energy characteristics for near-surface applications. Results demonstrate that the phase velocity of Rayleigh waves in viscoelastic media is relatively higher than in elastic media and the fundamental mode increases by 10-16% when the frequency is above 10 Hz due to the velocity dispersion of P and S waves. © 2011 Elsevier Ltd.

  3. Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    PubMed Central

    Stover, Lori J.; Nair, Niketh S.; Faeder, James R.

    2014-01-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility. PMID:24699269
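
    For readers unfamiliar with the network-based route mentioned above, a minimal direct-method Gillespie simulation of a tiny, fully enumerated reaction network looks like the sketch below. This is a generic illustration with a made-up two-reaction network, not BioNetGen, NFsim, or the hybrid particle/population method itself.

```python
import math
import random

def gillespie(x, propensities, stoich, t_end):
    """Direct-method SSA on a small, fully enumerated network.
    x: species counts; propensities: functions of x; stoich: count changes per reaction."""
    t, traj = 0.0, [(0.0, dict(x))]
    while t < t_end:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0:
            break
        t += -math.log(random.random()) / a0        # exponential waiting time to next event
        r, acc = random.random() * a0, 0.0
        for j, aj in enumerate(a):                   # choose which reaction fires
            acc += aj
            if r <= acc:
                for species, change in stoich[j].items():
                    x[species] += change
                break
        traj.append((t, dict(x)))
    return traj

# Made-up network: A + B -> C (k1 = 0.005), C -> 0 (k2 = 0.1)
x0 = {"A": 100, "B": 80, "C": 0}
props = [lambda s: 0.005 * s["A"] * s["B"], lambda s: 0.1 * s["C"]]
stoich = [{"A": -1, "B": -1, "C": 1}, {"C": -1}]
print(gillespie(x0, props, stoich, t_end=50.0)[-1])
```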

  4. Development and testing of a text-mining approach to analyse patients' comments on their experiences of colorectal cancer care.

    PubMed

    Wagland, Richard; Recio-Saucedo, Alejandra; Simon, Michael; Bracher, Michael; Hunt, Katherine; Foster, Claire; Downing, Amy; Glaser, Adam; Corner, Jessica

    2016-08-01

    Quality of cancer care may greatly impact on patients' health-related quality of life (HRQoL). Free-text responses to patient-reported outcome measures (PROMs) provide rich data but analysis is time and resource-intensive. This study developed and tested a learning-based text-mining approach to facilitate analysis of patients' experiences of care and develop an explanatory model illustrating impact on HRQoL. Respondents to a population-based survey of colorectal cancer survivors provided free-text comments regarding their experience of living with and beyond cancer. An existing coding framework was tested and adapted, which informed learning-based text mining of the data. Machine-learning algorithms were trained to identify comments relating to patients' specific experiences of service quality, which were verified by manual qualitative analysis. Comparisons between coded retrieved comments and a HRQoL measure (EQ5D) were explored. The survey response rate was 63.3% (21 802/34 467), of which 25.8% (n=5634) participants provided free-text comments. Of retrieved comments on experiences of care (n=1688), over half (n=1045, 62%) described positive care experiences. Most negative experiences concerned a lack of post-treatment care (n=191, 11% of retrieved comments) and insufficient information concerning self-management strategies (n=135, 8%) or treatment side effects (n=160, 9%). Associations existed between HRQoL scores and coded algorithm-retrieved comments. Analysis indicated that the mechanism by which service quality impacted on HRQoL was the extent to which services prevented or alleviated challenges associated with disease and treatment burdens. Learning-based text mining techniques were found useful and practical tools to identify specific free-text comments within a large dataset, facilitating resource-efficient qualitative analysis. This method should be considered for future PROM analysis to inform policy and practice. Study findings indicated that perceived care quality directly impacts on HRQoL. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. Spreading dynamics of an e-commerce preferential information model on scale-free networks

    NASA Astrophysics Data System (ADS)

    Wan, Chen; Li, Tao; Guan, Zhi-Hong; Wang, Yuanmei; Liu, Xiongding

    2017-02-01

    In order to study the influence of the preferential degree and the heterogeneity of underlying networks on the spread of preferential e-commerce information, we propose a novel susceptible-infected-beneficial model based on scale-free networks. The spreading dynamics of the preferential information are analyzed in detail using the mean-field theory. We determine the basic reproductive number and equilibria. The theoretical analysis indicates that the basic reproductive number depends mainly on the preferential degree and the topology of the underlying networks. We prove the global stability of the information-elimination equilibrium. The permanence of preferential information and the global attractivity of the information-prevailing equilibrium are also studied in detail. Some numerical simulations are presented to verify the theoretical results.

  6. Boron nitride nanotube-based biosensing of various bacterium/viruses: continuum modelling-based simulation approach.

    PubMed

    Panchal, Mitesh B; Upadhyay, Sanjay H

    2014-09-01

    In this study, the feasibility of single-walled boron nitride nanotube (SWBNNT)-based biosensors for mass-based detection of various bacteria and viruses has been assessed using a continuum-modelling-based simulation approach. Various bacteria or viruses have been considered attached at the free end of the cantilevered SWBNNT, which acts as a biosensor. A resonant frequency shift-based analysis has been performed, with the adsorbed bacterium/virus treated as additional mass on the SWBNNT-based sensor system. A continuum mechanics-based analytical approach, considering effective wall thickness, has been used to validate the finite element method (FEM)-based simulation results, which rely on continuum volume-based modelling of the SWBNNT. As a systematic analysis approach, the FEM-based simulation results are found to be in excellent agreement with the analytical results, supporting the analysis of SWBNNTs for their wide range of applications such as nanoresonators, biosensors, gas sensors, transducers and so on. The obtained results suggest that by using an SWBNNT of smaller size the sensitivity of the sensor system can be enhanced, and detection of a bacterium/virus having a mass of 4.28 × 10⁻²⁴ kg can be effectively performed.

  7. Has Childhood Smoking Reduced Following Smoke-Free Public Places Legislation? A Segmented Regression Analysis of Cross-Sectional UK School-Based Surveys.

    PubMed

    Katikireddi, Srinivasa Vittal; Der, Geoff; Roberts, Chris; Haw, Sally

    2016-07-01

    Smoke-free legislation has been a great success for tobacco control but its impact on smoking uptake remains under-explored. We investigated if trends in smoking uptake amongst adolescents differed before and after the introduction of smoke-free legislation in the United Kingdom. Prevalence estimates for regular smoking were obtained from representative school-based surveys for the four countries of the United Kingdom. Post-intervention status was represented using a dummy variable and to allow for a change in trend, the number of years since implementation was included. To estimate the association between smoke-free legislation and adolescent smoking, the percentage of regular smokers was modeled using linear regression adjusted for trends over time and country. All models were stratified by age (13 and 15 years) and sex. For 15-year-old girls, the implementation of smoke-free legislation in the United Kingdom was associated with a 4.3% reduction in the prevalence of regular smoking (P = .029). In addition, regular smoking fell by an additional 1.5% per annum post-legislation in this group (P = .005). Among 13-year-old girls, there was a reduction of 2.8% in regular smoking (P = .051), with no evidence of a change in trend post-legislation. Smaller and nonsignificant reductions in regular smoking were observed for 15- and 13-year-old boys (P = .175 and P = .113, respectively). Smoke-free legislation may help reduce smoking uptake amongst teenagers, with stronger evidence for an association seen in females. Further research that analyses longitudinal data across more countries is required. Previous research has established that smoke-free legislation has led to many improvements in population health, including reductions in heart attack, stroke, and asthma. However, the impacts of smoke-free legislation on the rates of smoking amongst children have been less investigated. Analysis of repeated cross-sectional surveys across the four countries of the United Kingdom shows smoke-free legislation may be associated with a reduction in regular smoking among school-aged children. If this association is causal, comprehensive smoke-free legislation could help prevent future generations from taking up smoking. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco.
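
    The segmented-regression structure described in the methods—a level-change dummy for the post-legislation period plus a years-since-implementation term for the change in trend—can be sketched as follows. The data, column names, and coefficients are entirely hypothetical; the sketch only shows the model structure, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"year": np.arange(1998, 2015)})
df["post"] = (df["year"] >= 2007).astype(int)             # dummy: legislation in force
df["years_since"] = np.clip(df["year"] - 2007, 0, None)   # post-legislation trend change
df["pct_regular_smokers"] = (20 - 0.4 * (df["year"] - 1998)
                             - 3.0 * df["post"] - 1.0 * df["years_since"]
                             + rng.normal(0, 0.5, len(df)))

fit = smf.ols("pct_regular_smokers ~ year + post + years_since", data=df).fit()
print(fit.params)   # 'post' = level shift at implementation, 'years_since' = change in trend
```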

  8. The involvement of model-based but not model-free learning signals during observational reward learning in the absence of choice.

    PubMed

    Dunne, Simon; D'Souza, Arun; O'Doherty, John P

    2016-06-01

    A major open question is whether computational strategies thought to be used during experiential learning, specifically model-based and model-free reinforcement learning, also support observational learning. Furthermore, the question of how observational learning occurs when observers must learn about the value of options from observing outcomes in the absence of choice has not been addressed. In the present study we used a multi-armed bandit task that encouraged human participants to employ both experiential and observational learning while they underwent functional magnetic resonance imaging (fMRI). We found evidence for the presence of model-based learning signals during both observational and experiential learning in the intraparietal sulcus. However, unlike during experiential learning, model-free learning signals in the ventral striatum were not detectable during this form of observational learning. These results provide insight into the flexibility of the model-based learning system, implicating this system in learning during observation as well as from direct experience, and further suggest that the model-free reinforcement learning system may be less flexible with regard to its involvement in observational learning. Copyright © 2016 the American Physiological Society.

  9. Experimental design and data-analysis in label-free quantitative LC/MS proteomics: A tutorial with MSqRob.

    PubMed

    Goeminne, Ludger J E; Gevaert, Kris; Clement, Lieven

    2018-01-16

    Label-free shotgun proteomics is routinely used to assess proteomes. However, extracting relevant information from the massive amounts of generated data remains difficult. This tutorial provides a strong foundation on analysis of quantitative proteomics data. We provide key statistical concepts that help researchers to design proteomics experiments and we showcase how to analyze quantitative proteomics data using our recent free and open-source R package MSqRob, which was developed to implement the peptide-level robust ridge regression method for relative protein quantification described by Goeminne et al. MSqRob can handle virtually any experimental proteomics design and outputs proteins ordered by statistical significance. Moreover, its graphical user interface and interactive diagnostic plots provide easy inspection and also detection of anomalies in the data and flaws in the data analysis, allowing deeper assessment of the validity of results and a critical review of the experimental design. Our tutorial discusses interactive preprocessing, data analysis and visualization of label-free MS-based quantitative proteomics experiments with simple and more complex designs. We provide well-documented scripts to run analyses in bash mode on GitHub, enabling the integration of MSqRob in automated pipelines on cluster environments (https://github.com/statOmics/MSqRob). The concepts outlined in this tutorial aid in designing better experiments and analyzing the resulting data more appropriately. The two case studies using the MSqRob graphical user interface will contribute to a wider adaptation of advanced peptide-based models, resulting in higher quality data analysis workflows and more reproducible results in the proteomics community. We also provide well-documented scripts for experienced users that aim at automating MSqRob on cluster environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.

    PubMed

    Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar

    2018-05-01

    This paper presents a signal modeling-based new methodology for automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of the discrete cosine transform. The proposed filterbank decomposes EEG signals into their respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely, fractional Brownian motion and fractional Gaussian noise. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the value of the Hurst exponent and the autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure-free interval), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both data sets. Thus, this paper proposes a novel signal model for EEG data that best captures the attributes of these signals and hence helps boost the classification accuracy of seizure and seizure-free epochs.
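
    As a point of reference for the second stage above, the Hurst exponent of a stationary (fractional-Gaussian-noise-like) signal is often estimated from how the variance of block means scales with block size. The sketch below uses that classical aggregated-variance estimator on synthetic data; it is a generic illustration, not the filterbank-plus-ARMA pipeline of the paper.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(2, 4, 8, 16, 32, 64, 128)):
    """Var(block means) ~ m**(2H - 2) for fGn, so H = 1 + slope/2 on a log-log plot."""
    ms, vs = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        block_means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        ms.append(m)
        vs.append(block_means.var())
    slope = np.polyfit(np.log(ms), np.log(vs), 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
print(hurst_aggregated_variance(rng.normal(size=8192)))   # white noise: expect H close to 0.5
```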

  11. Comparative analysis of internal friction and natural frequency measured by free decay and forced vibration.

    PubMed

    Wang, Y Z; Ding, X D; Xiong, X M; Zhang, J X

    2007-10-01

    Relations between various measures of internal friction (tan δ, Q⁻¹, Q⁻¹*, and Λ/π) obtained by free decay and forced vibration are analyzed systematically based on a fundamental mechanical model in this paper. Additionally, relations between various natural frequencies, such as the vibration frequency of free decay ω_FD, the displacement-resonance frequency of forced vibration ω_d, and the velocity-resonance frequency of forced vibration ω_0, are calculated. Moreover, measurements of the natural frequencies of a copper specimen of 99.9% purity have been made to demonstrate the relation between the natural frequencies of the system measured by forced vibration and by free decay. These results are of importance not only for more accurate measurement of the elastic modulus of materials but also for data conversion between different internal friction measurements.
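
    For orientation, the textbook light-damping relations for a linear oscillator connect these quantities as shown below; they are quoted here as standard results for a damped harmonic oscillator, not necessarily the exact expressions derived in the paper.

```latex
% Standard relations for a lightly damped linear oscillator (damping ratio \zeta \ll 1).
\[
  \Lambda = \ln\frac{A_n}{A_{n+1}}, \qquad
  Q^{-1} \simeq \tan\delta \simeq \frac{\Lambda}{\pi},
\]
\[
  \omega_{\mathrm{FD}} = \omega_n\sqrt{1-\zeta^{2}}, \qquad
  \omega_{d} = \omega_n\sqrt{1-2\zeta^{2}}, \qquad
  \omega_{0} = \omega_n,
\]
% A_n: successive free-decay amplitudes; \omega_n: undamped natural frequency;
% \omega_{FD}: free-decay frequency; \omega_d: displacement-resonance frequency;
% \omega_0: velocity-resonance frequency.
```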

  12. The impact of surface area, volume, curvature, and Lennard-Jones potential to solvation modeling.

    PubMed

    Nguyen, Duc D; Wei, Guo-Wei

    2017-01-05

    This article explores the impact of surface area, volume, curvature, and the Lennard-Jones (LJ) potential on solvation free energy predictions. Rigidity surfaces are utilized to generate robust analytical expressions for maximum, minimum, mean, and Gaussian curvatures of solvent-solute interfaces, and define a generalized Poisson-Boltzmann (GPB) equation with a smooth dielectric profile. Extensive correlation analysis is performed to examine the linear dependence of surface area, surface-enclosed volume, maximum curvature, minimum curvature, mean curvature, and Gaussian curvature for solvation modeling. It is found that surface areas and surface-enclosed volumes are highly correlated with each other, and poorly correlated with the various curvatures, for six test sets of molecules. Different curvatures are weakly correlated with each other across the six test sets of molecules, but are strongly correlated with each other within each test set of molecules. Based on the correlation analysis, we construct twenty-six nontrivial nonpolar solvation models. Our numerical results reveal that the LJ potential plays a vital role in nonpolar solvation modeling, especially for molecules involving strong van der Waals interactions. It is found that curvatures are at least as important as surface area or surface-enclosed volume in nonpolar solvation modeling. In conjunction with the GPB model, various curvature-based nonpolar solvation models are shown to offer some of the best solvation free energy predictions for a wide range of test sets. For example, root mean square errors from a model constituting surface area, volume, mean curvature, and the LJ potential are less than 0.42 kcal/mol for all test sets. © 2016 Wiley Periodicals, Inc.
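
    The nonpolar models referred to above share a generic linear form in which geometric measures and the attractive LJ term enter with fitted coefficients; one representative way to write such a model (an assumption about the general form, not the authors' specific twenty-six variants) is:

```latex
% Generic curvature-augmented nonpolar solvation model with fitted coefficients
% \gamma, p, c_i and an attractive Lennard-Jones (dispersion) term.
\[
  \Delta G_{\mathrm{np}}
  \;\approx\;
  \gamma\, A \;+\; p\, V \;+\; \sum_{i} c_i \oint_{\Gamma} \kappa_i \,\mathrm{d}S
  \;+\; \Delta G_{\mathrm{LJ}},
\]
% A: interface area; V: enclosed volume; \kappa_i: selected curvatures
% (mean, Gaussian, minimum, maximum) integrated over the solvent-solute interface \Gamma.
```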

  13. Yonsei nomogram: A predictive model of new-onset chronic kidney disease after on-clamp partial nephrectomy in patients with T1 renal tumors.

    PubMed

    Abdel Raheem, Ali; Shin, Tae Young; Chang, Ki Don; Santok, Glen Denmer R; Alenzi, Mohamed Jayed; Yoon, Young Eun; Ham, Won Sik; Han, Woong Kyu; Choi, Young Deuk; Rha, Koon Ho

    2018-06-19

    To develop a predictive nomogram for the long-term probability of chronic kidney disease-free survival after partial nephrectomy. A retrospective analysis was carried out of 698 patients with T1 renal tumors undergoing partial nephrectomy at a tertiary academic institution. A multivariable Cox regression analysis was carried out based on parameters proven to have an impact on postoperative renal function. Patients with incomplete data, <12 months follow up and preoperative chronic kidney disease stage III or greater were excluded. The study end-points were to identify independent risk factors for new-onset chronic kidney disease development, as well as to construct a predictive model for chronic kidney disease-free survival probability after partial nephrectomy. The median age was 52 years, the median tumor size was 2.5 cm and the mean warm ischemia time was 28 min. A total of 91 patients (13.1%) developed new-onset chronic kidney disease at a median follow up of 60 months. The chronic kidney disease-free survival rates at 1, 3, 5 and 10 years were 97.1%, 94.4%, 85.3% and 70.6%, respectively. On multivariable Cox regression analysis, age (hazard ratio 1.041, P = 0.001), male sex (hazard ratio 1.653, P < 0.001), diabetes mellitus (hazard ratio 1.921, P = 0.046), tumor size (hazard ratio 1.331, P < 0.001) and preoperative estimated glomerular filtration rate (hazard ratio 0.937, P < 0.001) were independent predictors of new-onset chronic kidney disease. The C-index for chronic kidney disease-free survival was 0.853 (95% confidence interval 0.815-0.895). We developed a novel nomogram for predicting the 5-year chronic kidney disease-free survival probability after on-clamp partial nephrectomy. This model might have an important role in partial nephrectomy decision-making and follow-up planning after surgery. External validation of our nomogram in a larger cohort of patients should be considered. © 2018 The Japanese Urological Association.
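
    The nomogram rests on a multivariable Cox proportional-hazards fit like the one sketched below. The lifelines package is used here purely as a stand-in (it is not what the authors used), and the tiny DataFrame and column names are hypothetical; only the structure (duration, event indicator, covariates) mirrors the study.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical toy data: follow-up in months, CKD event flag, and three covariates.
df = pd.DataFrame({
    "months":   [12, 60, 48, 90, 24, 72, 36, 84, 18, 66],
    "ckd":      [0,  1,  0,  1,  0,  0,  1,  1,  0,  1],
    "age":      [45, 63, 68, 70, 38, 66, 52, 71, 41, 49],
    "tumor_cm": [2.0, 3.5, 1.8, 4.0, 2.2, 3.0, 2.6, 3.8, 1.9, 3.2],
    "egfr":     [95, 70, 88, 65, 102, 72, 90, 63, 99, 74],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="ckd")
cph.print_summary()   # exp(coef) gives the hazard ratios that a nomogram would be built from
```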

  14. Guidelines for the analysis of free energy calculations.

    PubMed

    Klimovich, Pavel V; Shirts, Michael R; Mobley, David L

    2015-05-01

    Free energy calculations based on molecular dynamics simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub as part of the pymbar package (located at http://github.com/choderalab/pymbar), that implements the analysis practices reviewed here for several reference simulation packages, and which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope this tool and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations.
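
    As a flavor of the thermodynamic-integration route covered by the review, the sketch below integrates made-up ⟨∂H/∂λ⟩ averages over λ with the trapezoid rule. It is a generic TI estimate on hypothetical numbers, not a use of alchemical-analysis.py itself.

```python
import numpy as np

# Hypothetical lambda windows and mean <dH/dlambda> values (kcal/mol).
lam  = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
dhdl = np.array([12.3, 9.8, 6.1, 1.9, -2.4, -5.0, -6.2])

dG = np.trapz(dhdl, lam)   # thermodynamic integration via trapezoid quadrature
print(f"TI estimate of the free energy difference: {dG:.2f} kcal/mol")
```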

  15. Free Volume, Energy, and Entropy at the Polymer Glass Transition: New Results and Connections with Widely Used Treatments

    NASA Astrophysics Data System (ADS)

    White, Ronald; Lipson, Jane

    Free volume has a storied history in polymer physics. To introduce our own results, we consider how free volume has been defined in the past, e.g. in the works of Fox and Flory, Doolittle, and the equation of Williams, Landel, and Ferry. We contrast these perspectives with our own analysis using our Locally Correlated Lattice (LCL) model where we have found a striking connection between polymer free volume (analyzed using PVT data) and the polymer's corresponding glass transition temperature, Tg. The pattern, covering over 50 different polymers, is robust enough to be reasonably predictive based on melt properties alone; when a melt hits this T-dependent boundary of critical minimum free volume it becomes glassy. We will present a broad selection of results from our thermodynamic analysis, and make connections with historical treatments. We will discuss patterns that have emerged across the polymers in the energy and entropy when quantified as "per LCL theoretical segment". Finally we will relate the latter trend to the point of view popularized in the theory of Adam and Gibbs. The authors gratefully acknowledge support from NSF DMR-1403757.

  16. Local Stability of AIDS Epidemic Model Through Treatment and Vertical Transmission with Time Delay

    NASA Astrophysics Data System (ADS)

    Novi W, Cascarilla; Lestari, Dwi

    2016-02-01

    This study aims to explain the stability of a model of the spread of AIDS with treatment and vertical transmission. A person infected with HIV needs time before developing AIDS; because this progression can be delayed, the resulting model is one with time delay. The model takes the form of a nonlinear differential equation with time delay, SIPTA (susceptible-infected-pre AIDS-treatment-AIDS). Analysis of the SIPTA model yields a disease-free equilibrium point and an endemic equilibrium point. The disease-free equilibrium point, with and without time delay, is locally asymptotically stable if the basic reproduction number is less than one. The endemic equilibrium point is locally asymptotically stable if the time delay is less than a critical value, unstable if the time delay exceeds the critical value, and a bifurcation occurs if the time delay equals the critical value.

  17. Electrophysiological correlates reflect the integration of model-based and model-free decision information.

    PubMed

    Eppinger, Ben; Walter, Maik; Li, Shu-Chen

    2017-04-01

    In this study, we investigated the interplay of habitual (model-free) and goal-directed (model-based) decision processes by using a two-stage Markov decision task in combination with event-related potentials (ERPs) and computational modeling. To manipulate the demands on model-based decision making, we applied two experimental conditions with different probabilities of transitioning from the first to the second stage of the task. As we expected, when the stage transitions were more predictable, participants showed greater model-based (planning) behavior. Consistent with this result, we found that stimulus-evoked parietal (P300) activity at the second stage of the task increased with the predictability of the state transitions. However, the parietal activity also reflected model-free information about the expected values of the stimuli, indicating that at this stage of the task both types of information are integrated to guide decision making. Outcome-related ERP components only reflected reward-related processes: Specifically, a medial prefrontal ERP component (the feedback-related negativity) was sensitive to negative outcomes, whereas a component that is elicited by reward (the feedback-related positivity) increased as a function of positive prediction errors. Taken together, our data indicate that stimulus-locked parietal activity reflects the integration of model-based and model-free information during decision making, whereas feedback-related medial prefrontal signals primarily reflect reward-related decision processes.

  18. The cost of an additional disability-free life year for older Americans: 1992-2005.

    PubMed

    Cai, Liming

    2013-02-01

    To estimate the cost of an additional disability-free life year for older Americans in 1992-2005. This study used the 1992-2005 Medicare Current Beneficiary Survey, a longitudinal survey of Medicare beneficiaries with a rotating panel design. The analysis used a multistate life table model to estimate probabilities of transition among a discrete set of health states (nondisabled, disabled, and dead) for two panels of older Americans in 1992 and 2002. Health spending incurred between annual health interviews was estimated by a generalized linear mixed model. Health status, including death, was simulated for each member of the panel using these transition probabilities; the associated health spending was cross-walked to the simulated health changes. Disability-free life expectancy (DFLE) increased significantly more than life expectancy during the study period. Assuming that 50 percent of the gains in DFLE between 1992 and 2005 were attributable to increases in spending, the average discounted cost per additional disability-free life year was $71,000. There were small differences between gender and racial/ethnic groups. The cost of an additional disability-free life year was substantially below previous estimates based on mortality trends alone. © Health Research and Educational Trust.
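
    The headline figure is, structurally, a ratio of incremental spending to the attributable gain in disability-free life expectancy. The toy calculation below uses invented inputs purely to show that structure under the stated 50% attribution assumption; it does not reproduce the study's estimate.

```python
# Back-of-the-envelope structure of the calculation, with made-up numbers.
extra_discounted_spending = 50_000.0   # hypothetical per-person increase in spending ($)
gain_in_dfle_years        = 1.4        # hypothetical gain in disability-free life expectancy
attributable_fraction     = 0.5        # share of the DFLE gain credited to spending growth

cost_per_dfly = extra_discounted_spending / (attributable_fraction * gain_in_dfle_years)
print(f"cost per additional disability-free life year: ${cost_per_dfly:,.0f}")
```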

  19. Unfolding and folding internal friction of β-hairpins is smaller than that of α-helices.

    PubMed

    Schulz, Julius C F; Miettinen, Markus S; Netz, R R

    2015-04-02

    By the forced unfolding of polyglutamine and polyalanine homopeptides in competing α-helix and β-hairpin secondary structures, we disentangle equilibrium free energetics from nonequilibrium dissipative effects. We find that α-helices are characterized by larger friction or dissipation upon unfolding, regardless of whether they are free energetically preferred over β-hairpins or not. Our analysis, based on MD simulations for atomistic peptide models with explicit water, suggests that this difference is related to the internal friction and mostly caused by the different number of intrapeptide hydrogen bonds in the α-helix and β-hairpin states.

  20. Experience with Free Bodies

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1985-01-01

    Some of the problems that confront an analyst in free-body modeling to satisfy rigid-body conditions are discussed, and some remedies for these problems are presented. The problems of detecting these culprits at various levels within the analysis are examined. A new method within NASTRAN for checking the model for defects very early in the analysis, without requiring the analyst to bear the expense of an eigenvalue analysis before discovering these defects, is outlined.

  1. Cr(VI) sorption by free and immobilised chromate-reducing bacterial cells in PVA-alginate matrix: equilibrium isotherms and kinetic studies.

    PubMed

    Rawat, Monica; Rawat, A P; Giri, Krishna; Rai, J P N

    2013-08-01

    A chromate-resistant bacterial strain isolated from tannery soil was studied for Cr(VI) bioaccumulation in free and immobilised cells to evaluate its applicability in chromium removal from aqueous solution. Based on comparative analysis of the 16S rRNA gene and on phenotypic and biochemical characterization, this strain was identified as Paenibacillus xylanilyticus MR12. The mechanism of Cr adsorption was also ascertained by chemical modification of the bacterial biomass followed by Fourier transform infrared spectroscopy analysis of the cell wall constituents. The equilibrium biosorption data, analysed using isotherms (Langmuir, Freundlich and Dubinin-Radushkevich) and kinetic models (pseudo-first-order, pseudo-second-order and Weber-Morris), revealed that the Langmuir model best correlated with the experimental data, and the Weber-Morris equation well described Cr(VI) biosorption kinetics. Polyvinyl alcohol-alginate immobilised cells had higher Cr(VI) removal efficiency than free cells and could also be reused four times for Cr(VI) removal. Complete reduction of chromate in simulated effluent containing Cu(2+), Mg(2+), Mn(2+) and Zn(2+) by immobilised cells demonstrated potential applications of the novel immobilised bacterial strain MR12 as a vital bioresource in Cr(VI) bioremediation technology.
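
    The isotherm analysis above reduces, for the Langmuir case, to fitting qe = qmax·KL·Ce/(1 + KL·Ce) to equilibrium data. A minimal fit on made-up data is sketched below; the numbers are hypothetical and only the functional form is taken from the standard Langmuir model.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g.
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = np.array([8.1, 13.9, 21.5, 28.7, 33.8, 36.5])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.05])
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```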

  2. Factors Influencing Early Feeding of Foods and Drinks Containing Free Sugars-A Birth Cohort Study.

    PubMed

    Ha, Diep H; Do, Loc G; Spencer, Andrew John; Thomson, William Murray; Golley, Rebecca K; Rugg-Gunn, Andrew J; Levy, Steven M; Scott, Jane A

    2017-10-23

    Early feeding of free sugars to young children can increase the preference for sweetness and the risk of consuming a cariogenic diet high in free sugars later in life. This study aimed to investigate early life factors influencing the early introduction of foods/drinks containing free sugars. Data from an ongoing population-based birth cohort study in Australia were used. Mothers of newborn children completed questionnaires at birth and subsequently at ages 3, 6, 12, and 24 months. The outcome was reported feeding (Yes/No), at age 6-9 months, of common food/drink sources of free sugars (hereafter referred to as foods/drinks with free sugars). Household income quartiles, the mother's sugar-sweetened beverage (SSB) consumption, and other maternal factors were exposure variables. Analysis was conducted progressively from bivariate to multivariable log-binomial regression with robust standard error estimation to calculate prevalence ratios (PR) of being fed foods/drinks with free sugars at an early age (by 6-9 months). Models for both complete cases and with multiple imputations (MI) for missing data were generated. Of 1479 mother/child dyads, 21% of children had been fed foods/drinks with free sugars. There was a strong income gradient and a significant positive association with maternal SSB consumption. In the complete-case model, income Q1 and Q2 had PRs of 1.9 (1.2-3.1) and 1.8 (1.2-2.6) against Q4, respectively. The PR for mothers ingesting SSB every day was 1.6 (1.2-2.3). The PR for children who had been breastfed to at least three months was 0.6 (0.5-0.8). Similar findings were observed in the MI model. Household income at birth and maternal behaviours were significant determinants of early feeding of foods/drinks with free sugars.

  3. On the Direct Assimilation of Along-track Sea Surface Height Observations into a Free-surface Ocean Model Using a Weak Constraints Four Dimensional Variational (4dvar) Method

    NASA Astrophysics Data System (ADS)

    Ngodock, H.; Carrier, M.; Smith, S. R.; Souopgui, I.; Martin, P.; Jacobs, G. A.

    2016-02-01

    The representer method is adopted for solving a weak constraints 4dvar problem for the assimilation of ocean observations including along-track SSH, using a free surface ocean model. Direct 4dvar assimilation of SSH observations along the satellite tracks requires that the adjoint model be integrated with Dirac impulses on the right hand side of the adjoint equations for the surface elevation equation. The solution of this adjoint model will inevitably include surface gravity waves, and it constitutes the forcing for the tangent linear model (TLM) according to the representer method. This yields an analysis that is contaminated by gravity waves. A method for avoiding the generation of the surface gravity waves in the analysis is proposed in this study; it consists of removing the adjoint of the free surface from the right hand side (rhs) of the free surface mode in the TLM. The information from the SSH observations will still propagate to all other variables via the adjoint of the balance relationship between the barotropic and baroclinic modes, resulting in the correction to the surface elevation. Two assimilation experiments are carried out in the Gulf of Mexico: one with adjoint forcing included on the rhs of the TLM free surface equation, and the other without. Both analyses are evaluated against the assimilated SSH observations, SSH maps from Aviso and independent surface drifters, showing that the analysis that did not include adjoint forcing in the free surface is more accurate. This study shows that when a weak constraint 4dvar approach is considered for the assimilation of along-track SSH observations using a free surface model, with the aim of correcting the mesoscale circulation, an independent model error should not be assigned to the free surface.

  4. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.

    2015-11-15

    Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by a patient-specific breathing motion model, using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients.

  5. The diagnostic plot analysis of artesian aquifers with case studies in Table Mountain Group of South Africa

    NASA Astrophysics Data System (ADS)

    Sun, Xiaobin; Xu, Yongxin; Lin, Lixiang

    2015-05-01

    Parameter estimates of artesian aquifers, where the piezometric head is above ground level, are largely made through free-flowing and recovery tests. The straight-line method proposed by Jacob-Lohman is often used for interpretation of the flow rate measured at flowing artesian boreholes. However, this approach fails to interpret the free-flowing test data from two artesian boreholes in the fractured-rock aquifer in the Table Mountain Group (TMG) of South Africa. The diagnostic plot method using the reciprocal rate derivative is adapted to evaluate the artesian aquifer properties. The variation of the derivative not only helps identify flow regimes and discern the boundary conditions, but also facilitates conceptualization of the aquifer system and selection of an appropriate model for subsequent data interpretation. Test data from two free-flowing tests conducted at different sites in the TMG are analysed using the diagnostic plot method. Based on the results, conceptual models and appropriate approaches are developed to evaluate the aquifer properties. The advantages and limitations of using the diagnostic plot method on free-flowing test data are discussed.
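
    A short illustrative sketch of the diagnostic-plot quantity, the logarithmic derivative of the reciprocal flow rate, whose shape is used to identify flow regimes; the function name and data layout are assumptions, not part of the study:

    import numpy as np

    def reciprocal_rate_derivative(t, q):
        """t: elapsed times since the borehole was opened; q: measured free-flow rates.
        Returns 1/q and its derivative with respect to ln(t) for the diagnostic plot."""
        t = np.asarray(t, dtype=float)
        recip = 1.0 / np.asarray(q, dtype=float)
        deriv = np.gradient(recip, np.log(t))   # d(1/Q)/d(ln t), inspected on log-log axes
        return recip, deriv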

  6. Quantifying Variation in Gait Features from Wearable Inertial Sensors Using Mixed Effects Models

    PubMed Central

    Cresswell, Kellen Garrison; Shin, Yongyun; Chen, Shanshan

    2017-01-01

    The emerging technology of wearable inertial sensors has shown its advantages in collecting continuous longitudinal gait data outside laboratories. This freedom also presents challenges in collecting high-fidelity gait data. In the free-living environment, without constant supervision from researchers, sensor-based gait features are susceptible to variation from confounding factors such as gait speed and mounting uncertainty, which are challenging to control or estimate. This paper is one of the first attempts in the field to tackle such challenges using statistical modeling. By accepting the uncertainties and variation associated with wearable sensor-based gait data, we shift our efforts from detecting and correcting those variations to modeling them statistically. From gait data collected on one healthy, non-elderly subject during 48 full-factorial trials, we identified four major sources of variation, and quantified their impact on one gait outcome—range per cycle—using a random effects model and a fixed effects model. The methodology developed in this paper lays the groundwork for a statistical framework to account for sources of variation in wearable gait data, thus facilitating informative statistical inference for free-living gait analysis. PMID:28245602
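
    A hedged sketch of how such a random/fixed effects model could be fitted, assuming a table with one row per gait cycle and hypothetical file and column names (not the authors' data):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("gait_features.csv")        # hypothetical file of per-cycle gait features
    model = smf.mixedlm("range_per_cycle ~ gait_speed + C(mount_location)",
                        data=df, groups=df["trial_id"])   # trial as the grouping (random) factor
    result = model.fit()
    print(result.summary())                      # variance components quantify each source of variation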

  7. When Habits Are Dangerous: Alcohol Expectancies and Habitual Decision Making Predict Relapse in Alcohol Dependence.

    PubMed

    Sebold, Miriam; Nebe, Stephan; Garbusow, Maria; Guggenmos, Matthias; Schad, Daniel J; Beck, Anne; Kuitunen-Paul, Soeren; Sommer, Christian; Frank, Robin; Neu, Peter; Zimmermann, Ulrich S; Rapp, Michael A; Smolka, Michael N; Huys, Quentin J M; Schlagenhauf, Florian; Heinz, Andreas

    2017-12-01

    Addiction is supposedly characterized by a shift from goal-directed to habitual decision making, thus facilitating automatic drug intake. The two-step task allows distinguishing between these mechanisms by computationally modeling goal-directed and habitual behavior as model-based and model-free control. In addicted patients, decision making may also strongly depend upon drug-associated expectations. Therefore, we investigated model-based versus model-free decision making and its neural correlates as well as alcohol expectancies in alcohol-dependent patients and healthy controls and assessed treatment outcome in patients. Ninety detoxified, medication-free, alcohol-dependent patients and 96 age- and gender-matched control subjects underwent functional magnetic resonance imaging during the two-step task. Alcohol expectancies were measured with the Alcohol Expectancy Questionnaire. Over a follow-up period of 48 weeks, 37 patients remained abstinent and 53 patients relapsed as indicated by the Alcohol Timeline Followback method. Patients who relapsed displayed reduced medial prefrontal cortex activation during model-based decision making. Furthermore, high alcohol expectancies were associated with low model-based control in relapsers, while the opposite was observed in abstainers and healthy control subjects. However, reduced model-based control per se was not associated with subsequent relapse. These findings suggest that poor treatment outcome in alcohol dependence does not simply result from a shift from model-based to model-free control but is instead dependent on the interaction between high drug expectancies and low model-based decision making. Reduced model-based medial prefrontal cortex signatures in those who relapse point to a neural correlate of relapse risk. These observations suggest that therapeutic interventions should target subjective alcohol expectancies. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  8. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    PubMed

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
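
    A simplified sketch of the hybrid valuation typically fit to two-step task data, in which first-stage values mix a model-free (temporal-difference) component with a model-based component planned over known transition probabilities; parameter names and update rules are generic placeholders, not the study's fitted model:

    import numpy as np

    alpha, w = 0.3, 0.5                      # learning rate, model-based weight
    T = np.array([[0.7, 0.3],                # P(second-stage state | first-stage action)
                  [0.3, 0.7]])
    q_mf = np.zeros(2)                       # model-free first-stage action values
    q_stage2 = np.zeros(2)                   # learned value of each second-stage state

    def stage1_values():
        q_mb = T @ q_stage2                  # model-based: expected second-stage value
        return w * q_mb + (1 - w) * q_mf     # hybrid value guiding first-stage choice

    def update(action, state2, reward):
        # simplified one-step updates after observing the second stage and its reward
        q_stage2[state2] += alpha * (reward - q_stage2[state2])
        q_mf[action] += alpha * (reward - q_mf[action])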

  9. Spread of Ebola disease with susceptible exposed infected isolated recovered (SEIIhR) model

    NASA Astrophysics Data System (ADS)

    Azizah, Afina; Widyaningsih, Purnami; Retno Sari Saputro, Dewi

    2017-06-01

    Ebola is a deadly infectious disease and has caused an epidemic in several countries in West Africa. Mathematical modeling to study the spread of Ebola disease has been developed, including the susceptible infected removed (SIR) and susceptible exposed infected removed (SEIR) models. Furthermore, a susceptible exposed infected isolated recovered (SEIIhR) model has been derived. The aims of this research are to derive the SEIIhR model for Ebola disease, to determine the patterns of its spread, to determine the equilibrium point and its stability using phase plane analysis, and to apply the SEIIhR model to the 2014 Ebola epidemic in Sierra Leone. The SEIIhR model is a system of differential equations, and the pattern of Ebola disease spread under the model is the solution of this system. The equilibrium point of the SEIIhR model is unique: it is a stable disease-free equilibrium point. Application of the model is based on data from the Ebola epidemic in Sierra Leone. The disease-free equilibrium point (Se, Ee, Ie, Ihe, Re) = (5743865, 0, 0, 0, 0) is stable.
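
    A minimal numerical sketch of an SEIIhR-type compartment model; the rate constants and initial conditions below are generic placeholders, not the values fitted for Sierra Leone, and isolated cases are assumed not to transmit:

    from scipy.integrate import solve_ivp

    beta, sigma, gamma, theta, delta = 0.3, 1/9, 1/7, 0.2, 1/5   # placeholder rates

    def seiihr(t, y):
        S, E, I, Ih, R = y
        N = S + E + I + Ih + R
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I - theta * I
        dIh = theta * I - delta * Ih          # isolation of infectives
        dR = gamma * I + delta * Ih
        return [dS, dE, dI, dIh, dR]

    sol = solve_ivp(seiihr, (0, 365), [5743860, 0, 5, 0, 0], dense_output=True)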

  10. Statistical analysis of life history calendar data.

    PubMed

    Eerola, Mervi; Helske, Satu

    2016-04-01

    The life history calendar is a data-collection tool for obtaining reliable retrospective data about life events. To illustrate the analysis of such data, we compare model-based probabilistic event history analysis and the model-free data mining method, sequence analysis. In event history analysis, we estimate the cumulative prediction probabilities of life events over the entire trajectory rather than transition hazards. In sequence analysis, we compare several dissimilarity metrics and contrast data-driven and user-defined substitution costs. As an example, we study young adults' transition to adulthood as a sequence of events in three life domains. The events define the multistate event history model and the parallel life domains in multidimensional sequence analysis. The relationship between life trajectories and excess depressive symptoms in middle age is further studied by their joint prediction in the multistate model and by regressing the symptom scores on individual-specific cluster indices. The two approaches complement each other in life course analysis; sequence analysis can effectively find typical and atypical life patterns, while event history analysis is needed for causal inquiries. © The Author(s) 2012.

  11. SWPhylo - A Novel Tool for Phylogenomic Inferences by Comparison of Oligonucleotide Patterns and Integration of Genome-Based and Gene-Based Phylogenetic Trees.

    PubMed

    Yu, Xiaoyu; Reva, Oleg N

    2018-01-01

    Modern phylogenetic studies may benefit from the analysis of complete genome sequences of various microorganisms. Evolutionary inferences based on genome-scale analysis are believed to be more accurate than the gene-based alternative. However, the computational complexity of current phylogenomic procedures, the inappropriateness of standard phylogenetic tools for processing genome-wide data, and the lack of reliable substitution models compatible with alignment-free phylogenomic approaches deter microbiologists from using these opportunities. For example, the super-matrix and super-tree approaches of phylogenomics use multiple integrated genomic loci or individual gene-based trees to infer an overall consensus tree. However, these approaches potentially multiply errors of gene annotation and sequence alignment, not to mention the computational complexity and laboriousness of the methods. In this article, we demonstrate that the annotation- and alignment-free comparison of genome-wide tetranucleotide frequencies, termed oligonucleotide usage patterns (OUPs), allowed a fast and reliable inference of phylogenetic trees. These were congruent with the corresponding whole genome super-matrix trees in terms of tree topology when compared with other known approaches including 16S ribosomal RNA and GyrA protein sequence comparison, complete genome-based MAUVE, and CVTree methods. A Web-based program to perform the alignment-free OUP-based phylogenomic inferences was implemented at http://swphylo.bi.up.ac.za/. Applicability of the tool was tested on different taxa from subspecies to intergeneric levels. Distinguishing between closely related taxonomic units may be enforced by providing the program with alignments of marker protein sequences, e.g., GyrA.
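
    A sketch of the core alignment-free idea: represent each genome by its tetranucleotide (4-mer) frequency vector and cluster genomes by the pairwise distances between those vectors; the example sequences are placeholders and this is not the SWPhylo implementation:

    from itertools import product
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import pdist

    KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
    INDEX = {k: i for i, k in enumerate(KMERS)}

    def oup(seq):
        counts = np.zeros(len(KMERS))
        for i in range(len(seq) - 3):
            j = INDEX.get(seq[i:i + 4])
            if j is not None:                 # skip windows containing ambiguous bases
                counts[j] += 1
        return counts / max(counts.sum(), 1)  # normalized tetranucleotide usage pattern

    genomes = ["ACGTACGGT" * 300, "ACGTTGCAA" * 300, "GGCCGGCCA" * 300]  # placeholder sequences
    profiles = np.array([oup(s) for s in genomes])
    tree = linkage(pdist(profiles, metric="euclidean"), method="average")  # OUP-based dendrogram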

  12. SWPhylo – A Novel Tool for Phylogenomic Inferences by Comparison of Oligonucleotide Patterns and Integration of Genome-Based and Gene-Based Phylogenetic Trees

    PubMed Central

    Yu, Xiaoyu; Reva, Oleg N

    2018-01-01

    Modern phylogenetic studies may benefit from the analysis of complete genome sequences of various microorganisms. Evolutionary inferences based on genome-scale analysis are believed to be more accurate than the gene-based alternative. However, the computational complexity of current phylogenomic procedures, the inappropriateness of standard phylogenetic tools for processing genome-wide data, and the lack of reliable substitution models compatible with alignment-free phylogenomic approaches deter microbiologists from using these opportunities. For example, the super-matrix and super-tree approaches of phylogenomics use multiple integrated genomic loci or individual gene-based trees to infer an overall consensus tree. However, these approaches potentially multiply errors of gene annotation and sequence alignment, not to mention the computational complexity and laboriousness of the methods. In this article, we demonstrate that the annotation- and alignment-free comparison of genome-wide tetranucleotide frequencies, termed oligonucleotide usage patterns (OUPs), allowed a fast and reliable inference of phylogenetic trees. These were congruent with the corresponding whole genome super-matrix trees in terms of tree topology when compared with other known approaches including 16S ribosomal RNA and GyrA protein sequence comparison, complete genome-based MAUVE, and CVTree methods. A Web-based program to perform the alignment-free OUP-based phylogenomic inferences was implemented at http://swphylo.bi.up.ac.za/. Applicability of the tool was tested on different taxa from subspecies to intergeneric levels. Distinguishing between closely related taxonomic units may be enforced by providing the program with alignments of marker protein sequences, e.g., GyrA. PMID:29511354

  13. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    NASA Astrophysics Data System (ADS)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit systems are typical mixed complex networks with dynamic flow, and their evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper uses the R-space representation to carry out a comparative empirical analysis of Beijing’s flow-weighted transit route network (TRN), and we find that Beijing’s TRNs in both 2011 and 2015 exhibit scale-free properties. We therefore propose an evolution model driven by flow to simulate the development of TRNs, taking into account the passengers’ dynamical behaviors triggered by topological change. The model treats the evolution of the TRN as an iterative process. At each time step, a certain number of new routes are generated driven by travel demands, which leads to dynamical evolution of the new routes’ flow and triggers perturbations in nearby routes that in turn affect the next round of route openings. We present a theoretical analysis based on mean-field theory, as well as numerical simulations of this model. The results obtained agree well with our empirical analysis, indicating that the model can reproduce TRN evolution with scale-free distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be used to develop planning and design strategies for real TRNs.
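
    A toy sketch of the flow-driven growth ingredient that produces scale-free strength distributions; this is a generic strength-preferential attachment process, not the paper's full evolution rules:

    import random
    import collections

    strength = collections.defaultdict(float)
    strength[0] = strength[1] = 1.0            # seed network: one route linking two stops

    def add_route(n_stops=5, flow=1.0):
        nodes = list(strength)
        weights = [strength[n] for n in nodes]
        chosen = random.choices(nodes, weights=weights, k=n_stops - 1)  # flow-weighted preference
        new_stop = max(nodes) + 1               # each new route also opens one new stop
        for stop in chosen + [new_stop]:
            strength[stop] += flow              # route flow adds to every stop it serves

    for _ in range(1000):
        add_route()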

  14. An analytical method for free vibration analysis of functionally graded beams with edge cracks

    NASA Astrophysics Data System (ADS)

    Wei, Dong; Liu, Yinghua; Xiang, Zhihai

    2012-03-01

    In this paper, an analytical method is proposed for solving the free vibration of cracked functionally graded material (FGM) beams with axial loading, rotary inertia and shear deformation. The governing differential equations of motion for an FGM beam are established and the corresponding solutions are found first. The discontinuity of rotation caused by the cracks is simulated by means of the rotational spring model. Then, based on the transfer matrix method, a recurrence formula is developed to obtain the eigenvalue equations of free vibration of FGM beams. The main advantage of the proposed method is that the eigenvalue equation for vibrating beams with an arbitrary number of cracks can be conveniently determined from a third-order determinant. Due to the decrease in the determinant order as compared with previous methods, the developed method is simpler and more convenient for analytically solving the free vibration problem of cracked FGM beams. Moreover, free vibration analyses of the Euler-Bernoulli and Timoshenko beams with any number of cracks can be conducted using the unified procedure based on the developed method. These advantages of the proposed procedure become more pronounced as the number of cracks increases. A comprehensive analysis is conducted to investigate the influences of the location and total number of cracks, material properties, axial load, inertia and end supports on the natural frequencies and vibration mode shapes of FGM beams. The present work may be useful for the design and control of damaged structures.

  15. Utilization Elementary Siphons of Petri Net to Solved Deadlocks in Flexible Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Abdul-Hussin, Mowafak Hassan

    2015-07-01

    This article presents an approach to the structural analysis of a class of Petri nets in which elementary siphons are used to develop a deadlock control policy for flexible manufacturing systems (FMSs); such siphon-based policies have been exploited successfully in the design of supervisors for supervisory control problems. Deadlock-free operation of FMSs is a principal objective of siphon analysis in Petri nets. Structural analysis of Petri net models is efficient for the control of FMSs, although different policies can be implemented for deadlock prevention. Petri net model based deadlock prevention for FMSs has gained considerable interest in the development of control theory and methods for the design, control, operation, and performance evaluation of a special class of Petri nets called S3PR. Both structural analysis and reachability tree analysis are used for the analysis, simulation, and control of Petri nets. Our experimental siphon-based approach is able to resolve the deadlocks occurring in Petri nets, and it is illustrated with an FMS example.

  16. A context-based theory of recency and contiguity in free recall

    PubMed Central

    Sederberg, Per B.; Howard, Marc W.; Kahana, Michael J.

    2008-01-01

    We present a new model of free recall based on Howard and Kahana’s (2002) temporal context model and Usher and McClelland’s (2001) leaky-accumulator decision model. In this model, contextual drift gives rise to both short-term and long-term recency effects, and contextual retrieval gives rise to short-term and long-term contiguity effects. Recall decisions are controlled by a race between competitive leaky accumulators. The model captures the dynamics of immediate, delayed, and continual distractor free recall, demonstrating that dissociations between short- and long-term recency can naturally arise from a model that uses an internal contextual state as the sole cue for retrieval across time scales. PMID:18954208
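
    A compact sketch of the kind of leaky, competing accumulator race that controls recall decisions in such models; parameter values and the stopping rule are generic, not the fitted ones:

    import numpy as np

    def race(inputs, leak=0.2, inhibition=0.1, noise=0.05, thresh=1.0, dt=0.1, seed=None):
        """inputs: contextual cue strength for each candidate item.
        Returns the index of the first accumulator to cross threshold and its latency."""
        rng = np.random.default_rng(seed)
        x = np.zeros(len(inputs))
        for step in range(1, 10_000):
            lateral = inhibition * (x.sum() - x)              # inhibition from competitors
            x += dt * (np.asarray(inputs) - leak * x - lateral)
            x += noise * np.sqrt(dt) * rng.standard_normal(len(x))
            x = np.maximum(x, 0.0)                            # activations stay non-negative
            if x.max() >= thresh:
                return int(np.argmax(x)), step * dt
        return None, None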

  17. Skill-Based and Planned Active Play Versus Free-Play Effects on Fundamental Movement Skills in Preschoolers.

    PubMed

    Roach, Lindsay; Keats, Melanie

    2018-01-01

    Fundamental movement skill interventions are important for promoting physical activity, but the optimal intervention model for preschool children remains unclear. We compared two 8-week interventions, a structured skill-station and a planned active play approach, to a free-play control condition on pre- and postintervention fundamental movement skills. We also collected data regarding program attendance and perceived enjoyment. We found a significant interaction effect between intervention type and time. A Tukey honest significant difference analysis supported a positive intervention effect showing a significant difference between both interventions and the free-play control condition. There was a significant between-group difference in group attendance such that mean attendance was higher for both the free-play and planned active play groups relative to the structured skill-based approach. There were no differences in attendance between free-play and planned active play groups, and there were no differences in enjoyment ratings between the two intervention groups. In sum, while both interventions led to improved fundamental movement skills, the active play approach offered several logistical advantages. Although these findings should be replicated, they can guide feasible and sustainable fundamental movement skill programs within day care settings.

  18. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

    Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo multi-channel matching filter to obtain a more accurate multiple model. Finally, we apply an improved FastICA algorithm, based on a maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no a priori information is needed to predict the multiples, and a better separation result can be achieved. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain satisfactory multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can not only effectively preserve the primary energy in the seismic records, but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
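
    A hedged sketch of the adaptive-subtraction step: the recorded data and the matched multiple prediction are treated as two mixtures, and FastICA (a higher-order-statistics method) separates primaries from multiples even when they are not orthogonal; the arrays here are random placeholders rather than real seismic traces:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    recorded = rng.standard_normal(2000)           # placeholder: trace = primaries + multiples
    matched_multiples = rng.standard_normal(2000)  # placeholder: output of the matching filter

    X = np.column_stack([recorded, matched_multiples])
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(X)                 # columns: estimated primaries / multiples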

  19. Virtual screening of integrase inhibitors by large scale binding free energy calculations: the SAMPL4 challenge

    PubMed Central

    Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.

    2014-01-01

    As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large scale binding energy distribution analysis method binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, calculations were distributed on multiple computing resources and were completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704

  20. Importance of partitioning membranes of the brain and the influence of the neck in head injury modelling.

    PubMed

    Kumaresan, S; Radhakrishnan, S

    1996-01-01

    A head injury model consisting of the skull, the CSF, the brain and its partitioning membranes and the neck region is simulated by considering its near actual geometry. Three-dimensional finite-element analysis is carried out to investigate the influence of the partitioning membranes of the brain and the neck in head injury analysis through free-vibration analysis and transient analysis. In free-vibration analysis, the first five modal frequencies are calculated, and in transient analysis intracranial pressure and maximum shear stress in the brain are determined for a given occipital impact load.

  1. Phase structure of completely asymptotically free SU(Nc) models with quarks and scalar quarks

    NASA Astrophysics Data System (ADS)

    Hansen, F. F.; Janowski, T.; Langæble, K.; Mann, R. B.; Sannino, F.; Steele, T. G.; Wang, Z. W.

    2018-03-01

    We determine the phase diagram of completely asymptotically free SU (Nc) gauge theories featuring Ns complex scalars and Nf Dirac quarks transforming according to the fundamental representation of the gauge group. The analysis is performed at the maximum known order in perturbation theory. We unveil a very rich dynamics and associated phase structure. Intriguingly, we discover that the completely asymptotically free conditions guarantee that the infrared dynamics displays long-distance conformality, and in a regime when perturbation theory is applicable. We conclude our analysis by determining the quantum corrected potential of the model and summarizing the possible patterns of radiative symmetry breaking. These models are of potential phenomenological interest as either elementary or composite ultraviolet finite extensions of the standard model.

  2. Variable-free exploration of stochastic models: a gene regulatory network example.

    PubMed

    Erban, Radek; Frewen, Thomas A; Wang, Xiao; Elston, Timothy C; Coifman, Ronald; Nadler, Boaz; Kevrekidis, Ioannis G

    2007-04-21

    Finding coarse-grained, low-dimensional descriptions is an important task in the analysis of complex, stochastic models of gene regulatory networks. This task involves (a) identifying observables that best describe the state of these complex systems and (b) characterizing the dynamics of the observables. In a previous paper [R. Erban et al., J. Chem. Phys. 124, 084106 (2006)] the authors assumed that good observables were known a priori, and presented an equation-free approach to approximate coarse-grained quantities (i.e., effective drift and diffusion coefficients) that characterize the long-time behavior of the observables. Here we use diffusion maps [R. Coifman et al., Proc. Natl. Acad. Sci. U.S.A. 102, 7426 (2005)] to extract appropriate observables ("reduction coordinates") in an automated fashion; these involve the leading eigenvectors of a weighted Laplacian on a graph constructed from network simulation data. We present lifting and restriction procedures for translating between physical variables and these data-based observables. These procedures allow us to perform equation-free, coarse-grained computations characterizing the long-term dynamics through the design and processing of short bursts of stochastic simulation initialized at appropriate values of the data-based observables.
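
    A short sketch of the diffusion-map construction used to extract data-based reduction coordinates; the kernel bandwidth and the snapshot array are assumed inputs, and the implementation is generic rather than the authors':

    import numpy as np
    from scipy.spatial.distance import cdist

    def diffusion_map(snapshots, epsilon, n_coords=2):
        """snapshots: (n_samples, n_variables) states from stochastic network simulations."""
        d2 = cdist(snapshots, snapshots, "sqeuclidean")
        K = np.exp(-d2 / epsilon)                      # Gaussian affinity on the data graph
        P = K / K.sum(axis=1, keepdims=True)           # row-stochastic Markov matrix
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        return vecs[:, order[1:n_coords + 1]].real     # skip the trivial constant eigenvector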

  3. Chemometric analysis of minerals in gluten-free products.

    PubMed

    Gliszczyńska-Świgło, Anna; Klimczak, Inga; Rybicka, Iga

    2018-06-01

    Numerous studies indicate mineral deficiencies in people on a gluten-free (GF) diet. These deficiencies may indicate that GF products are a less valuable source of minerals than gluten-containing products. In the study, the nutritional quality of 50 GF products is discussed taking into account the nutritional requirements for minerals expressed as percentage of recommended daily allowance (%RDA) or percentage of adequate intake (%AI) for a model celiac patient. Elements analyzed were calcium, potassium, magnesium, sodium, copper, iron, manganese, and zinc. Analysis of %RDA or %AI was performed using principal component analysis (PCA) and hierarchical cluster analysis (HCA). Using PCA, differentiation was possible between products based on rice, corn, potato, and GF wheat starch and those based on buckwheat, chickpea, millet, oats, amaranth, teff, quinoa, chestnut, and acorn. In the HCA, four clusters were created. The main criterion determining the assignment of a sample to a cluster was the content of all minerals included in the HCA (K, Mg, Cu, Fe, Mn); however, only the Mn content differentiated the four groups formed. GF products made of buckwheat, chickpea, millet, oats, amaranth, teff, quinoa, chestnut, and acorn are a better source of minerals than those based on other GF raw materials, which was confirmed by PCA and HCA. © 2017 Society of Chemical Industry.
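
    An illustrative sketch of the chemometric workflow, assuming a table whose rows are GF products and whose columns are %RDA/%AI values per mineral; the file and column names are hypothetical:

    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from scipy.cluster.hierarchy import linkage, fcluster

    rda = pd.read_csv("gf_products_rda.csv", index_col=0)      # hypothetical %RDA / %AI table
    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(rda))
    clusters = fcluster(linkage(rda[["K", "Mg", "Cu", "Fe", "Mn"]], method="ward"),
                        t=4, criterion="maxclust")             # four clusters, as in the study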

  4. Scale-free gravitational collapse as the origin of ρ ∼ r^-2 density profile - a possible role of turbulence in regulating gravitational collapse

    NASA Astrophysics Data System (ADS)

    Li, Guang-Xing

    2018-03-01

    Astrophysical systems, such as clumps that form star clusters, share a density profile that is close to ρ ∼ r^-2. We prove analytically that this density profile is the result of the scale-free nature of the gravitational collapse. Therefore, it should emerge in many different situations as long as gravity dominates the evolution for a period that is comparable to or longer than the free-fall time, and it does not necessarily imply an isothermal model, as many have previously believed. To describe the collapse process, we construct a model called the turbulence-regulated gravitational collapse model, where turbulence is sustained by accretion and dissipates in roughly a crossing time. We demonstrate that a ρ ∼ r^-2 profile emerges due to the scale-free nature of the system. In this particular case, the rate of gravitational collapse is regulated by the rate at which turbulence dissipates the kinetic energy, such that the infall speed can be 20-50% of the free-fall speed (which also depends on the interpretation of the crossing time based on simulations of driven turbulence). These predictions are consistent with existing observations, which suggests that these clumps are in the stage of turbulence-regulated gravitational collapse. Our analysis provides a unified description of gravitational collapse in different environments.

  5. The predictive value of 53BP1 and BRCA1 mRNA expression in advanced non-small-cell lung cancer patients treated with first-line platinum-based chemotherapy

    PubMed Central

    Bonanno, Laura; Costa, Carlota; Majem, Margarita; Sanchez, Jose Javier; Gimenez-Capitan, Ana; Rodriguez, Ignacio; Vergenegre, Alain; Massuti, Bartomeu; Favaretto, Adolfo; Rugge, Massimo; Pallares, Cinta; Taron, Miquel; Rosell, Rafael

    2013-01-01

    Platinum-based chemotherapy is the standard first-line treatment for non-oncogene-addicted non-small cell lung cancers (NSCLCs) and the analysis of multiple DNA repair genes could improve current models for predicting chemosensitivity. We investigated the potential predictive role of components of the 53BP1 pathway in conjunction with BRCA1. The mRNA expression of BRCA1, MDC1, CASPASE3, UBC13, RNF8, 53BP1, PIAS4, UBC9 and MMSET was analyzed by real-time PCR in 115 advanced NSCLC patients treated with first-line platinum-based chemotherapy. Patients expressing low levels of both BRCA1 and 53BP1 obtained a median progression-free survival of 10.3 months and overall survival of 19.3 months, while among those with low BRCA1 and high 53BP1 progression-free survival was 5.9 months (P <0.0001) and overall survival was 8.2 months (P=0.001). The expression of 53BP1 refines BRCA1-based predictive modeling to identify patients most likely to benefit from platinum-based chemotherapy. PMID:24197907

  6. Processing speed enhances model-based over model-free reinforcement learning in the presence of high working memory functioning

    PubMed Central

    Schad, Daniel J.; Jünger, Elisabeth; Sebold, Miriam; Garbusow, Maria; Bernhardt, Nadine; Javadi, Amir-Homayoun; Zimmermann, Ulrich S.; Smolka, Michael N.; Heinz, Andreas; Rapp, Michael A.; Huys, Quentin J. M.

    2014-01-01

    Theories of decision-making and its neural substrates have long assumed the existence of two distinct and competing valuation systems, variously described as goal-directed vs. habitual, or, more recently and based on statistical arguments, as model-free vs. model-based reinforcement-learning. Though both have been shown to control choices, the cognitive abilities associated with these systems are under ongoing investigation. Here we examine the link to cognitive abilities, and find that individual differences in processing speed covary with a shift from model-free to model-based choice control in the presence of above-average working memory function. This suggests shared cognitive and neural processes; provides a bridge between literatures on intelligence and valuation; and may guide the development of process models of different valuation components. Furthermore, it provides a rationale for individual differences in the tendency to deploy valuation systems, which may be important for understanding the manifold neuropsychiatric diseases associated with malfunctions of valuation. PMID:25566131

  7. Assessing the economic impacts of drought from the perspective of profit loss rate: a case study of the sugar industry in China

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Lin, L.; Chen, H.

    2015-02-01

    Natural disasters have enormous impacts on human society, especially on the development of the economy. To support decision making in mitigation of and adaptation to natural disasters, assessment of economic impacts is fundamental and of great significance. Based on a review of the literature on economic impact evaluation, this paper proposes a new model for assessing the economic impact of drought, using the sugar industry in China as a case study, which focuses on the generation and transfer of economic impacts along a simple value chain involving only sugarcane growers and a sugar producing company. A profit loss rate perspective is applied to scale economic impact with a model based on cost-and-benefit analysis. Using a "with-and-without" analysis, profit loss is defined as the difference in profits between disaster-hit and disaster-free scenarios. To calculate profit, a time series analysis of sugar price is applied. With the support of a linear regression model, an endogenous trend in sugar price is identified, and the time series of sugar price "without" disaster is obtained using an autoregressive error model to separate the impact of disasters from the internal trend in sugar price. Unlike the settings in other assessment models, representative sugar prices, which represent the price levels in disaster-free and disaster-hit conditions, are integrated from a long time series that covers the whole period of drought. As a result, it is found that under a rigid farming contract, sugarcane growers suffer far more than the sugar company when impacted by severe drought, which may prompt reflection on economic equality among the various economic actors affected by natural disasters.
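
    A hedged sketch of the "with-and-without" price comparison: a trend-plus-autoregressive-error model is fitted to the observed price series and extrapolated as the disaster-free price, and the impact is then expressed as a profit loss rate; the model order and the simple helper functions are illustrative, not the paper's exact specification:

    import statsmodels.api as sm

    def disaster_free_price(price, n_forecast):
        """price: observed sugar price series before the drought period."""
        model = sm.tsa.ARIMA(price, order=(1, 0, 0), trend="t")   # AR(1) errors around a linear trend
        return model.fit().forecast(steps=n_forecast)             # counterfactual "without" prices

    def profit_loss_rate(profit_without, profit_with):
        return (profit_without - profit_with) / profit_without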

  8. Integrating cortico-limbic-basal ganglia architectures for learning model-based and model-free navigation strategies

    PubMed Central

    Khamassi, Mehdi; Humphries, Mark D.

    2012-01-01

    Behavior in spatial navigation is often organized into map-based (place-driven) vs. map-free (cue-driven) strategies; behavior in operant conditioning research is often organized into goal-directed vs. habitual strategies. Here we attempt to unify the two. We review one powerful theory for distinct forms of learning during instrumental conditioning, namely model-based (maintaining a representation of the world) and model-free (reacting to immediate stimuli) learning algorithms. We extend these lines of argument to propose an alternative taxonomy for spatial navigation, showing how various previously identified strategies can be distinguished as “model-based” or “model-free” depending on the usage of information and not on the type of information (e.g., cue vs. place). We argue that identifying “model-free” learning with dorsolateral striatum and “model-based” learning with dorsomedial striatum could reconcile numerous conflicting results in the spatial navigation literature. From this perspective, we further propose that the ventral striatum plays key roles in the model-building process. We propose that the core of the ventral striatum is positioned to learn the probability of action selection for every transition between states of the world. We further review suggestions that the ventral striatal core and shell are positioned to act as “critics” contributing to the computation of a reward prediction error for model-free and model-based systems, respectively. PMID:23205006

  9. A study of stiffness, residual strength and fatigue life relationships for composite laminates

    NASA Technical Reports Server (NTRS)

    Ryder, J. T.; Crossman, F. W.

    1983-01-01

    Qualitative and quantitative exploration of the relationship between stiffness, strength, fatigue life, residual strength, and damage of unnotched graphite/epoxy laminates subjected to tension loading. Clarification of the mechanics of tension loading is intended to explain previous contradictory observations and hypotheses; to develop a simple procedure to anticipate strength, fatigue life, and stiffness changes; and to provide reasons for the study of more complex cases of compression, notches, and spectrum fatigue loading. Mathematical models are developed based upon analysis of the damage states, using laminate analysis, free-body-type modeling, or a strain energy release rate approach. Enough understanding of the tension-loaded case is developed to allow a proposed, simple procedure for calculating strain to failure, stiffness, strength, data scatter, and shape of the stress-life curve for unnotched laminates subjected to tension load.

  10. Fabrication of submicron proteinaceous structures by direct laser writing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serien, Daniela; Takeuchi, Shoji, E-mail: takeuchi@iis.u-tokyo.ac.jp; ERATO Takeuchi Biohybrid Innovation Project, Japan Science and Technology Agency, 4-6-1 Komaba, Meguro-ku, 153-8505 Tokyo

    In this paper, we provide a characterization, by model-based analysis, of truly free-standing proteinaceous structures whose submicron feature sizes depend on the fabrication conditions. Protein cross-linking of bovine serum albumin is performed by direct laser writing and two-photon excitation of flavin adenine dinucleotide. We analyze the obtainable fabrication resolution and the threshold energy required for polymerization. The applied polymerization model allows prediction of fabrication conditions and resulting feature size, facilitating the application of proteinaceous structure fabrication.

  11. Improving Power Density of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.; Prahl, Joseph M.; Loparo, Kenneth A.

    2016-01-01

    Analyses and experiments demonstrate the potential benefits of optimizing piston and displacer motion in a free-piston Stirling Engine. Isothermal analysis shows the theoretical limits of power density improvement due to ideal motion in ideal Stirling engines. More realistic models based on nodal analysis show that ideal piston and displacer waveforms are not optimal, often producing less power than engines that use sinusoidal piston and displacer motion. Constrained optimization using nodal analysis predicts that Stirling engine power density can be increased by as much as 58 percent using optimized higher harmonic piston and displacer motion. An experiment is conducted in which an engine designed for sinusoidal motion is forced to operate with both second and third harmonics, resulting in a piston power increase of as much as 14 percent. Analytical predictions are compared to experimental data and show close agreement with indirect thermodynamic power calculations, but poor agreement with direct electrical power measurements.

  12. Improving Power Density of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.; Prahl, Joseph; Loparo, Kenneth

    2016-01-01

    Analyses and experiments demonstrate the potential benefits of optimizing piston and displacer motion in a free piston Stirling Engine. Isothermal analysis shows the theoretical limits of power density improvement due to ideal motion in ideal Stirling engines. More realistic models based on nodal analysis show that ideal piston and displacer waveforms are not optimal, often producing less power than engines that use sinusoidal piston and displacer motion. Constrained optimization using nodal analysis predicts that Stirling engine power density can be increased by as much as 58 percent using optimized higher harmonic piston and displacer motion. An experiment is conducted in which an engine designed for sinusoidal motion is forced to operate with both second and third harmonics, resulting in a maximum piston power increase of 14 percent. Analytical predictions are compared to experimental data showing close agreement with indirect thermodynamic power calculations, but poor agreement with direct electrical power measurements.

  13. Improving Free-Piston Stirling Engine Power Density

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.

    2016-01-01

    Analyses and experiments demonstrate the potential benefits of optimizing piston and displacer motion in a free piston Stirling Engine. Isothermal analysis shows the theoretical limits of power density improvement due to ideal motion in ideal Stirling engines. More realistic models based on nodal analysis show that ideal piston and displacer waveforms are not optimal, often producing less power than engines that use sinusoidal piston and displacer motion. Constrained optimization using nodal analysis predicts that Stirling engine power density can be increased by as much as 58% using optimized higher harmonic piston and displacer motion. An experiment is conducted in which an engine designed for sinusoidal motion is forced to operate with both second and third harmonics, resulting in a maximum piston power increase of 14%. Analytical predictions are compared to experimental data showing close agreement with indirect thermodynamic power calculations, but poor agreement with direct electrical power measurements.

  14. Achieving moral, high quality, affordable medical care in America through a true free market

    PubMed Central

    McKalip, David

    2016-01-01

    The basis of a just and moral economic model for health care is examined in the context of Catholic social teaching. The performance of the current model of “central economic planning” in medicine is evaluated in terms of the core principles of the social doctrine of the Catholic Church and compared to freedom-based economic models. It is clear that the best way to respect and serve human dignity, the common good, subsidiarity, and solidarity in medicine is through the establishment of a true, free-market health economy. Lay Summary: This article reviews the impact of recent healthcare reforms as well as traditional “third party payment” models for healthcare financing in America (insurance). The impact on patients and doctors is evaluated in the context of Catholic social doctrine and the Catechism. The many shortcomings and negative consequences of an economy planned centrally by government are compared to the benefits of a true free-market medical economy with empowered individuals. The analysis shows that interference in the patient–physician relationship and the centrally planned medical economy itself violate Catholic teachings, harm patients and doctors, and create morally evil outcomes and economic structures. PMID:28392591

  15. Gurtin-Murdoch surface elasticity theory revisit: An orbital-free density functional theory perspective

    NASA Astrophysics Data System (ADS)

    Zhu, Yichao; Wei, Yihai; Guo, Xu

    2017-12-01

    In the present paper, the well-established Gurtin-Murdoch theory of surface elasticity (Gurtin and Murdoch, 1975, 1978) is revisited from an orbital-free density functional theory (OFDFT) perspective by taking the boundary layer into consideration. Our analysis indicates that, firstly, the quantities introduced in the Gurtin-Murdoch theory of surface elasticity can all find their explicit expressions in the derived OFDFT-based theoretical model. Secondly, the derived expression for surface energy density captures a competition between the surface normal derivatives of the electron density and the electrostatic potential, which well rationalises the onset of signed elastic constants that are observed both experimentally and computationally. Thirdly, the established model naturally yields an inversely linear relationship between the material's surface stiffness and its size, which conforms to relevant findings in the literature. Since the proposed OFDFT-based model is established under arbitrarily imposed boundary conditions of electron density, electrostatic potential and external load, it also has the potential to be used to investigate the electro-mechanical behaviour of nanoscale materials manifesting surface effects.

  16. A model of free-living gait: A factor analysis in Parkinson's disease.

    PubMed

    Morris, Rosie; Hickey, Aodhán; Del Din, Silvia; Godfrey, Alan; Lord, Sue; Rochester, Lynn

    2017-02-01

    Gait is a marker of global health, cognition and falls risk. Gait is complex, comprised of multiple characteristics sensitive to survival, age and pathology. Due to covariance amongst characteristics, conceptual gait models have been established to reduce redundancy and aid interpretation. Previous models have been derived from laboratory gait assessments, which are costly in equipment and time. Body-worn monitors (BWM) allow for free-living, low-cost and continuous gait measurement and produce similar covariant gait characteristics. A BWM gait model from both controlled and free-living measurement has not yet been established, limiting utility. 103 control and 67 PD participants completed a controlled laboratory assessment, walking for two minutes around a circuit wearing a BWM. 89 control and 58 PD participants were assessed in free-living conditions, completing normal activities for 7 days wearing a BWM. Fourteen gait characteristics were derived from the BWM, selected according to a previous model. Principal component analysis derived factor loadings of gait characteristics. Four gait domains were derived for both groups and conditions: pace, rhythm, variability and asymmetry. Domains totalled 84.84% and 88.43% of variance in the controlled environment and 90.00% and 93.03% of variance in the free-living environment for control and PD participants, respectively. Gait characteristic loading was unambiguous for all characteristics apart from gait variability, which demonstrated cross-loading for both groups and environments. The model was highly congruent with the original model. The conceptual gait models remained stable using a BWM in controlled and free-living environments. The model became more discrete, supporting utility of the gait model for free-living gait. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. In-plane free vibration analysis of cable arch structure

    NASA Astrophysics Data System (ADS)

    Zhao, Yueyu; Kang, Houjun

    2008-05-01

    Cable-stayed arch bridge is a new type of composite bridge, which utilizes the mechanical characters of cable and arch. Based on the supporting members of cable-stayed arch bridges and on the erection of arch bridges by the cantilever construction method with tiebacks, we propose a novel mechanical model of the cable-arch structure. In this model, the equations governing vibrations of the cable-arch are derived according to Hamilton's principle for dynamic problems of an elastic body in the equilibrium state. A program for solving the dynamic governing equations is then established using the transfer matrix method for free vibration of uniform and variable cross-sections, and the internal characteristics of the cable-arch are investigated. A step-by-step analysis confirms that the program is accurate; meanwhile, the mechanical model and method are valuable and significant not only in theoretical research and calculation but also in engineering design.

  18. Elasto-dynamic analysis of spinning nanodisks via a surface energy-based model

    NASA Astrophysics Data System (ADS)

    Kiani, Keivan

    2016-07-01

    Using the surface elasticity theory of Gurtin and Murdoch, in-plane vibrations of annular nanodisks due to their rotary motion are explored. By the imposition of non-classical boundary conditions on the innermost and outermost surfaces and employing Hamilton’s principle, the unknown elasto-dynamic fields of the bulk zone are determined via the finite element method. The roles of both nanodisk geometry and surface effect on the natural frequencies are addressed. Subsequently, forced vibrations of spinning nanodisks with fixed-free and free-free boundary conditions are comprehensively examined. The obtained results show that the maximum dynamic elastic fields grow in a parabolic manner as the steady angular velocity increases. By increasing the outermost radius, the maximum dynamic elastic field is magnified and the influence of the surface effect on the results reduced. This work can be considered as a pivotal step towards optimal design and dynamic analysis of nanorotors with disk-like parts, which are one of the basic building blocks of the upcoming advanced nanotechnologies.

  19. A LIGHT CURVE ANALYSIS OF CLASSICAL NOVAE: FREE-FREE EMISSION VERSUS PHOTOSPHERIC EMISSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hachisu, Izumi; Kato, Mariko, E-mail: hachisu@ea.c.u-tokyo.ac.jp, E-mail: mariko@educ.cc.keio.ac.jp

    2015-01-10

    We analyzed light curves of seven relatively slower novae, PW Vul, V705 Cas, GQ Mus, RR Pic, V5558 Sgr, HR Del, and V723 Cas, based on an optically thick wind theory of nova outbursts. For fast novae, free-free emission dominates the spectrum in optical bands rather than photospheric emission, and nova optical light curves follow the universal decline law. Faster novae blow stronger winds with larger mass-loss rates. Because the brightness of free-free emission depends directly on the wind mass-loss rate, faster novae show brighter optical maxima. In slower novae, however, we must take into account photospheric emission because of their lower wind mass-loss rates. We calculated three model light curves of free-free emission, photospheric emission, and their sum for various white dwarf (WD) masses with various chemical compositions of their envelopes and fitted them reasonably with observational data of optical, near-IR (NIR), and UV bands. From light curve fittings of the seven novae, we estimated their absolute magnitudes, distances, and WD masses. In PW Vul and V705 Cas, free-free emission still dominates the spectrum in the optical and NIR bands. In the very slow novae, RR Pic, V5558 Sgr, HR Del, and V723 Cas, photospheric emission dominates the spectrum rather than free-free emission, which makes a deviation from the universal decline law. We have confirmed that the absolute brightnesses of our model light curves are consistent with the distance moduli of four classical novae with known distances (GK Per, V603 Aql, RR Pic, and DQ Her). We also discussed the reason why the very slow novae are about ∼1 mag brighter than the proposed maximum magnitude versus rate of decline relation.

  20. Learning Based Bidding Strategy for HVAC Systems in Double Auction Retail Energy Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Somani, Abhishek; Carroll, Thomas E.

    In this paper, a bidding strategy is proposed using reinforcement learning for HVAC systems in a double auction market. The bidding strategy does not require a specific model-based representation of behavior, i.e., a functional form to translate indoor house temperatures into bid prices. The results from the reinforcement learning based approach are compared with the HVAC bidding approach used in the AEP gridSMART® smart grid demonstration project, and it is shown that the model-free (learning based) approach tracks well the results from the model-based behavior. Successful use of model-free approaches to represent device-level economic behavior may help develop similar approaches to represent the behavior of more complex devices or groups of diverse devices, such as in a building. Distributed control requires an understanding of the decision making processes of intelligent agents so that appropriate mechanisms may be developed to control and coordinate their responses, and model-free approaches to represent behavior will be extremely useful in that quest.
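
    An illustrative sketch of a model-free, tabular Q-learning bidder that maps a discretized indoor-temperature deviation to a bid-price level; this is a generic reinforcement-learning sketch, not the project's actual controller:

    import numpy as np

    n_states, n_actions = 10, 5              # temperature-deviation bins, bid-price levels
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1
    rng = np.random.default_rng(0)

    def choose_bid(state):
        if rng.random() < eps:               # epsilon-greedy exploration
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[state]))

    def learn(state, action, reward, next_state):
        # reward would trade off occupant comfort against the cleared energy price
        target = reward + gamma * Q[next_state].max()
        Q[state, action] += alpha * (target - Q[state, action])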

  1. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  2. Thermodynamic free energy methods to investigate shape transitions in bilayer membranes.

    PubMed

    Ramakrishnan, N; Tourdot, Richard W; Radhakrishnan, Ravi

    2016-06-01

    The conformational free energy landscape of a system is a fundamental thermodynamic quantity of importance particularly in the study of soft matter and biological systems, in which the entropic contributions play a dominant role. While computational methods to delineate the free energy landscape are routinely used to analyze the relative stability of conformational states, to determine phase boundaries, and to compute ligand-receptor binding energies, their use in problems involving the cell membrane is limited. Here, we present an overview of four different free energy methods to study morphological transitions in bilayer membranes, induced either by the action of curvature remodeling proteins or by the application of external forces. Using a triangulated surface as a model for the cell membrane and using the framework of dynamical triangulation Monte Carlo, we have focused on the methods of Widom insertion, thermodynamic integration, the Bennett acceptance scheme, and umbrella sampling with weighted histogram analysis. We have demonstrated how these methods can be employed in a variety of problems involving the cell membrane. Specifically, we have shown that the chemical potential, computed using Widom insertion, and the relative free energies, computed using thermodynamic integration and the Bennett acceptance method, are excellent measures to study the transition from curvature sensing to curvature inducing behavior of membrane associated proteins. Umbrella sampling and WHAM analysis have been used to study the thermodynamics of tether formation in cell membranes, and the quantitative predictions of the computational model are in excellent agreement with experimental measurements. Furthermore, we also present a method based on WHAM and thermodynamic integration to handle problems related to the end-point catastrophe that are common in most free energy methods.
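
    As a minimal worked example of one of these methods, thermodynamic integration reduces to a quadrature of the ensemble average of dU/d(lambda) over the coupling parameter; the sampled values below are placeholders standing in for simulation output:

    import numpy as np

    lambdas = np.linspace(0.0, 1.0, 11)
    mean_dudl = 2.0 - 3.0 * lambdas               # placeholder <dU/dlambda> at each lambda
    delta_F = np.trapz(mean_dudl, lambdas)        # free energy difference between the end states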

  3. QPROT: Statistical method for testing differential expression using protein-level intensity data in label-free quantitative proteomics.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I

    2015-11-03

    We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for the analysis of continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Determination of differential expression for each protein is based on a standardized Z-statistic computed from the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known Empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the clinical proteomic technology assessment for cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework with an accompanying computational software tool for comparative quantitative proteomics analysis. It features various extensions of the QSPEC method originally built for spectral count data analysis, including probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics data, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. CAFE: aCcelerated Alignment-FrEe sequence analysis.

    PubMed

    Lu, Yang Young; Tang, Kujin; Ren, Jie; Fuhrman, Jed A; Waterman, Michael S; Sun, Fengzhu

    2017-07-03

    Alignment-free genome and metagenome comparisons are increasingly important with the development of next generation sequencing (NGS) technologies. Recently developed state-of-the-art k-mer based alignment-free dissimilarity measures including CVTree, $d_2^*$ and $d_2^S$ are more computationally expensive than measures based solely on the k-mer frequencies. Here, we report a standalone software, aCcelerated Alignment-FrEe sequence analysis (CAFE), for efficient calculation of 28 alignment-free dissimilarity measures. CAFE allows for both assembled genome sequences and unassembled NGS shotgun reads as input, and wraps the output in a standard PHYLIP format. In downstream analyses, CAFE can also be used to visualize the pairwise dissimilarity measures, including dendrograms, heatmap, principal coordinate analysis and network display. CAFE serves as a general k-mer based alignment-free analysis platform for studying the relationships among genomes and metagenomes, and is freely available at https://github.com/younglululu/CAFE. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
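
    The sketch below computes a simple frequency-only, D2-style dissimilarity from k-mer counts; it omits the Markov background correction used by d2* and d2S and is unrelated to the CAFE code itself.

```python
from collections import Counter
from itertools import product
import math

# Frequency-only, D2-style k-mer dissimilarity between two DNA sequences
# (illustration only; no background-model correction as in d2* or d2S).
def kmer_counts(seq, k=4):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2_dissimilarity(seq_a, seq_b, k=4):
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    va = [ca.get(w, 0) for w in kmers]
    vb = [cb.get(w, 0) for w in kmers]
    dot = sum(a * b for a, b in zip(va, vb))
    norm = math.sqrt(sum(a * a for a in va)) * math.sqrt(sum(b * b for b in vb))
    return 0.5 * (1.0 - dot / norm) if norm else 1.0

print(d2_dissimilarity("ACGTACGTACGTACGGT" * 5, "ACGTTGCAACGTACGGA" * 5))
```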

  5. Eigenmode analysis of a high-gain free-electron laser based on a transverse gradient undulator

    DOE PAGES

    Baxevanis, Panagiotis; Huang, Zhirong; Ruth, Ronald; ...

    2015-01-27

    Here, the use of a transverse gradient undulator (TGU) is viewed as an attractive option for free-electron lasers (FELs) driven by beams with a large energy spread. By suitably dispersing the electron beam and tilting the undulator poles, the energy spread effect can be substantially mitigated. However, adding the dispersion typically leads to electron beams with large aspect ratios. As a result, the presence of higher-order modes in the FEL radiation can become significant. To investigate this effect, we study the eigenmode properties of a TGU-based, high-gain FEL, using both an analytically-solvable model and a variational technique. Our analysis, which includes the fundamental and the higher-order FEL eigenmodes, can provide an estimate of the mode content for the output radiation. This formalism also enables us to study the trade-off between FEL gain and transverse coherence. Numerical results are presented for a representative soft X-ray, TGU FEL example.

  6. Eigenmode analysis of a high-gain free-electron laser based on a transverse gradient undulator

    NASA Astrophysics Data System (ADS)

    Baxevanis, Panagiotis; Huang, Zhirong; Ruth, Ronald; Schroeder, Carl B.

    2015-01-01

    The use of a transverse gradient undulator (TGU) is viewed as an attractive option for free-electron lasers (FELs) driven by beams with a large energy spread. By suitably dispersing the electron beam and tilting the undulator poles, the energy spread effect can be substantially mitigated. However, adding the dispersion typically leads to electron beams with large aspect ratios. As a result, the presence of higher-order modes in the FEL radiation can become significant. To investigate this effect, we study the eigenmode properties of a TGU-based, high-gain FEL, using both an analytically-solvable model and a variational technique. Our analysis, which includes the fundamental and the higher-order FEL eigenmodes, can provide an estimate of the mode content for the output radiation. This formalism also enables us to study the trade-off between FEL gain and transverse coherence. Numerical results are presented for a representative soft X-ray, TGU FEL example.

  7. Impact of rheology on probabilistic forecasts of sea ice trajectories: application for search and rescue operations in the Arctic

    NASA Astrophysics Data System (ADS)

    Rabatel, Matthias; Rampal, Pierre; Carrassi, Alberto; Bertino, Laurent; Jones, Christopher K. R. T.

    2018-03-01

    We present a sensitivity analysis and discuss the probabilistic forecast capabilities of the novel sea ice model neXtSIM used in hindcast mode. The study pertains to the response of the model to the uncertainty in winds using probabilistic forecasts of ice trajectories. neXtSIM is a continuous Lagrangian numerical model that uses an elasto-brittle rheology to simulate the ice response to external forces. The sensitivity analysis is based on a Monte Carlo sampling of 12 members. The response of the model to the uncertainties is evaluated in terms of simulated ice drift distances from their initial positions, and from the mean position of the ensemble, over the mid-term forecast horizon of 10 days. The simulated ice drift is decomposed into advective and diffusive parts that are characterised separately both spatially and temporally and compared to what is obtained with a free-drift model, that is, when the ice rheology does not play any role in the modelled physics of the ice. The seasonal variability of the model sensitivity is presented and shows the role of the ice compactness and rheology in the ice drift response at both local and regional scales in the Arctic. Indeed, the ice drift simulated by neXtSIM in summer is close to the one obtained with the free-drift model, while the more compact and solid ice pack shows a significantly different mechanical and drift behaviour in winter. For the winter period analysed in this study, we also show that, in contrast to the free-drift model, neXtSIM reproduces the sea ice Lagrangian diffusion regimes as found from observed trajectories. The forecast capability of neXtSIM is also evaluated using a large set of real buoy trajectories and compared to the capability of the free-drift model. We found that neXtSIM performs significantly better in simulating sea ice drift, both in terms of forecast error and as a tool to assist search and rescue operations, although the sources of uncertainties assumed for the present experiment are not sufficient for complete coverage of the observed IABP positions.

  8. To label or not to label: applications of quantitative proteomics in neuroscience research.

    PubMed

    Filiou, Michaela D; Martins-de-Souza, Daniel; Guest, Paul C; Bahn, Sabine; Turck, Christoph W

    2012-02-01

    Proteomics has provided researchers with a sophisticated toolbox of labeling-based and label-free quantitative methods. These are now being applied in neuroscience research where they have already contributed to the elucidation of fundamental mechanisms and the discovery of candidate biomarkers. In this review, we evaluate and compare labeling-based and label-free quantitative proteomic techniques for applications in neuroscience research. We discuss the considerations required for the analysis of brain and central nervous system specimens, the experimental design of quantitative proteomic workflows as well as the feasibility, advantages, and disadvantages of the available techniques for neuroscience-oriented questions. Furthermore, we assess the use of labeled standards as internal controls for comparative studies in humans and review applications of labeling-based and label-free mass spectrometry approaches in relevant model organisms and human subjects. Providing a comprehensive guide of feasible and meaningful quantitative proteomic methodologies for neuroscience research is crucial not only for overcoming current limitations but also for gaining useful insights into brain function and translating proteomics from bench to bedside. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Better Resolved Low Frequency Dispersions by the Apt Use of Kramers-Kronig Relations, Differential Operators, and All-In-1 Modeling

    PubMed Central

    van Turnhout, J.

    2016-01-01

    The dielectric spectra of colloidal systems often contain a typical low frequency dispersion, which usually remains unnoticed because of the presence of strong conduction losses. The KK relations offer a means for converting ε′ into ε″ data. This allows us to calculate conduction-free ε″ spectra in which the l.f. dispersion will show up undisturbed. This interconversion can be done on-line with a moving frame of logarithmically spaced ε′ data. The coefficients of the conversion frames were obtained by kernel matching and by using symbolic differential operators. Logarithmic derivatives and differences of ε′ and ε″ provide another option for conduction-free data analysis. These difference-based functions, actually derived from approximations to the distribution function, have the additional advantage of improving the resolution power of dielectric studies. A high resolution is important because of the rich relaxation structure of colloidal suspensions. The development of all-in-1 modeling facilitates the conduction-free and high-resolution data analysis. This mathematical tool allows the apart-together fitting of multiple data and multiple model functions. It also proved useful to bypass the KK conversion altogether, which was achieved by jointly approximating the ε′ and ε″ data with a complex rational fractional power function. The all-in-1 minimization also turned out to be highly useful for the dielectric modeling of a suspension with the complex dipolar coefficient. It guarantees a secure correction for the electrode polarization, so that the modeling with the help of the differences of ε′ and ε″ can zoom in on the genuine colloidal relaxations. PMID:27242997
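
    The logarithmic-derivative route to a conduction-free loss mentioned above can be illustrated compactly: the approximation ε″_der(ω) ≈ -(π/2) ∂ε′/∂ln ω suppresses dc conduction because conduction contributes to ε″ but not to ε′. The sketch below demonstrates this on a synthetic Debye relaxation buried under conduction; the material parameters are made up, and this is not the author's all-in-1 tool.

```python
import numpy as np

# Conduction-free "logarithmic derivative" loss, eps''_der = -(pi/2) d(eps')/d(ln w),
# shown on a synthetic Debye relaxation plus dc conduction (illustrative numbers).
eps_inf, delta_eps, tau = 3.0, 5.0, 1e-3      # hypothetical material parameters
sigma, eps0 = 1e-9, 8.854e-12
omega = np.logspace(0, 7, 400)

eps_real = eps_inf + delta_eps / (1 + (omega * tau) ** 2)
eps_imag = delta_eps * omega * tau / (1 + (omega * tau) ** 2) + sigma / (eps0 * omega)

eps_der = -0.5 * np.pi * np.gradient(eps_real, np.log(omega))   # derivative loss

i = int(eps_der.argmax())
print("derivative loss peaks near omega*tau =", omega[i] * tau)  # ~1, free of conduction
```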

  10. Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models

    NASA Astrophysics Data System (ADS)

    Liu, Haiying

    This dissertation describes three aspects of the comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element method based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with a CSD program, DYMORE, to compute the airloads of different flight conditions for Sikorsky UH-60 aircraft. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. In addition, the tight coupling interface between the CFD program, OVERFLOW-2, and the CSD program, DYMORE, is also established. The ability to accurately capture the wake structure around a helicopter rotor is crucial for rotorcraft performance analysis. In the third part of this thesis, a new representation of the wake vortex structure based on Non-Uniform Rational B-Spline (NURBS) curves and surfaces is proposed to develop an efficient model for prescribed and free wakes. NURBS curves and surfaces are able to represent complex shapes with remarkably little data. The proposed formulation has the potential to reduce the computational cost associated with the use of Helmholtz's law and the Biot-Savart law when calculating the induced flow field around the rotor. An efficient free-wake analysis will considerably decrease the computational cost of comprehensive rotorcraft analysis, making the approach more attractive to routine use in industrial settings.

  11. Cost Effectiveness of Free Access to Smoking Cessation Treatment in France Considering the Economic Burden of Smoking-Related Diseases.

    PubMed

    Cadier, Benjamin; Durand-Zaleski, Isabelle; Thomas, Daniel; Chevreul, Karine

    2016-01-01

    In France more than 70,000 deaths from diseases related to smoking are recorded each year, and since 2005 prevalence of tobacco has increased. Providing free access to smoking cessation treatment would reduce this burden. The aim of our study was to estimate the incremental cost-effectiveness ratios (ICER) of providing free access to cessation treatment taking into account the cost offsets associated with the reduction of the three main diseases related to smoking: lung cancer, chronic obstructive pulmonary disease (COPD) and cardiovascular disease (CVD). To measure the financial impact of such a measure we also conducted a probabilistic budget impact analysis. We performed a cost-effectiveness analysis using a Markov state-transition model that compared free access to cessation treatment to the existing coverage of €50 provided by the French statutory health insurance, taking into account the cost offsets among current French smokers aged 15-75 years. Our results were expressed by the incremental cost-effectiveness ratio in 2009 Euros per life year gained (LYG) at the lifetime horizon. We estimated a base case scenario and carried out a Monte Carlo sensitivity analysis to account for uncertainty. Assuming a participation rate of 7.3%, the ICER value for free access to cessation treatment was €3,868 per LYG in the base case. The variation of parameters provided a range of ICER values from -€736 to €15,715 per LYG. In 99% of cases, the ICER for full coverage was lower than €11,187 per LYG. The probabilistic budget impact analysis showed that the potential cost saving for lung cancer, COPD and CVD ranges from €15 million to €215 million at the five-year horizon for an initial cessation treatment cost of €125 million to €421 million. The results suggest that providing medical support to smokers in their attempts to quit is very cost-effective and may even result in cost savings.
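
    The core ICER arithmetic behind such an analysis is simple; the sketch below computes a point ICER and a crude probabilistic acceptability estimate from hypothetical incremental cost and life-year distributions. The numbers are illustrative only and are not taken from the study's Markov model.

```python
import numpy as np

# Toy ICER sketch with hypothetical inputs: incremental cost per smoker and
# incremental life-years gained, plus a simple probabilistic re-sampling.
rng = np.random.default_rng(2)
n = 10_000
d_cost = rng.normal(300.0, 80.0, n)     # incremental cost per smoker (EUR), assumed
d_lyg = rng.normal(0.08, 0.03, n)       # incremental life-years gained, assumed

icer_point = d_cost.mean() / d_lyg.mean()
threshold = 11_187.0                    # EUR per LYG, the value cited in the abstract
prob_cost_effective = np.mean(d_cost < threshold * d_lyg)
print(f"ICER ~ {icer_point:.0f} EUR/LYG, P(cost-effective) = {prob_cost_effective:.2f}")
```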

  12. Secure free-space optical communication system based on data fragmentation multipath transmission technology.

    PubMed

    Huang, Qingchao; Liu, Dachang; Chen, Yinfang; Wang, Yuehui; Tan, Jun; Chen, Wei; Liu, Jianguo; Zhu, Ninghua

    2018-05-14

    A secure free-space optical (S-FSO) communication system based on a data fragmentation multipath transmission (DFMT) scheme is proposed and demonstrated for enhancing the security of FSO communications. By fragmenting the transmitted data and simultaneously distributing the data fragments into different atmospheric channels, the S-FSO communication system can effectively protect confidential messages from being eavesdropped. A field experiment of S-FSO communication between two buildings has been successfully undertaken, and the experimental results demonstrate the feasibility of the scheme. The transmission distance is 50 m and the maximum throughput is 1 Gb/s. We also established a theoretical model to analyze the security performance of the S-FSO communication system. To the best of our knowledge, this is the first application of the DFMT scheme in an FSO communication system.

  13. Whole-proteome phylogeny of large dsDNA viruses and parvoviruses through a composition vector method related to dynamical language model

    PubMed Central

    2010-01-01

    Background The vast sequence divergence among different virus groups has presented a great challenge to alignment-based analysis of virus phylogeny. Due to the problems caused by the uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment could not be directly applied to the whole-genome comparison and phylogenomic studies of viruses. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has successfully been applied to the phylogenetic analysis of bacteria and chloroplast genomes. Results In this paper, the DL method is used to analyze the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with a large difference in genome size. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV). Conclusions The present method provides a new way for recovering the phylogeny of large dsDNA viruses and parvoviruses, and also some insights on the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used for recovering the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses with a much smaller genome size. PMID:20565983

  14. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  15. Production of G protein-coupled receptors in an insect-based cell-free system.

    PubMed

    Sonnabend, Andrei; Spahn, Viola; Stech, Marlitt; Zemella, Anne; Stein, Christoph; Kubick, Stefan

    2017-10-01

    The biochemical analysis of human cell membrane proteins remains a challenging task due to the difficulties in producing sufficient quantities of functional protein. G protein-coupled receptors (GPCRs) represent a main class of membrane proteins and drug targets, which are responsible for a huge number of signaling processes regulating various physiological functions in living cells. To circumvent the current bottlenecks in GPCR studies, we propose the synthesis of GPCRs in eukaryotic cell-free systems based on extracts generated from insect (Sf21) cells. Insect cell lysates harbor the fully active translational and translocational machinery allowing posttranslational modifications, such as glycosylation and phosphorylation of de novo synthesized proteins. Here, we demonstrate the production of several GPCRs in a eukaryotic cell-free system, performed within a short time and in a cost-effective manner. We were able to synthesize a variety of GPCRs ranging from 40 to 133 kDa in an insect-based cell-free system. Moreover, we have chosen the μ opioid receptor (MOR) as a model protein to analyze the ligand binding affinities of cell-free synthesized MOR in comparison to MOR expressed in a human cell line by "one-point" radioligand binding experiments. Biotechnol. Bioeng. 2017;114: 2328-2338. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.

  16. Identification of Biomarkers Associated with the Healing of Chronic Wounds

    DTIC Science & Technology

    2015-11-01

    The analysis of the wound fluid began with a broad survey tool, the Kinex™ Antibody Microarray (KAM), a single-dye, non-competitive sample binding...signaling proteins. Lysate protein from each sample was covalently labeled with a fluorescent dye combination. Free dye molecules were then...patterned structures is controlled by varying their pattern geometry. The biodegradation of micro-patterned structures is modeled geometrically based on

  17. Anterior Aortic Plane Systolic Excursion: A Novel Indicator of Transplant-Free Survival in Systemic Light-Chain Amyloidosis.

    PubMed

    Ochs, Marco M; Riffel, Johannes; Kristen, Arnt V; Hegenbart, Ute; Schönland, Stefan; Hardt, Stefan E; Katus, Hugo A; Mereles, Derliz; Buss, Sebastian J

    2016-12-01

    Anterior aortic plane systolic excursion (AAPSE) was evaluated in the present pilot study as a novel echocardiographic indicator of transplant-free survival in patients with systemic light-chain amyloidosis. Eighty-nine patients with light-chain amyloidosis were included in the post-hoc analysis. A subgroup of 54 patients with biopsy-proven cardiac amyloid infiltration were compared with 41 healthy individuals to evaluate the discriminative ability of echocardiographic findings. AAPSE is defined as the systolic excursion of the anterior aortic margin. To quantify AAPSE, the M-mode cursor was placed on the aortic valve plane in parasternal long-axis view at end-diastole. Index echocardiography had been performed before chemotherapy. Median follow-up duration was 2.4 years. The primary combined end point was heart transplantation or overall death. Mean AAPSE was 14 ± 2 mm in healthy individuals (mean age=57 ± 10 years; 56% men; BMI=25 ± 4 kg/m²). AAPSE < 11 mm separated patients from age-, gender-, and BMI-matched control subjects with 93% sensitivity and 97% specificity. Median transplant-free survival of patients with AAPSE < 5 mm was 0.7 versus 4.8 years (P = .0001). AAPSE was an independent indicator of transplant-free survival in multivariate Cox regression (echocardiographic model: hazard ratio=0.72 [P = .03]; biomarker model: hazard ratio=0.62 [P = .0001]). Sequential regression analysis suggested incremental power of AAPSE as a marker of transplant-free survival. An ejection fraction-based model with an overall χ² value of 22.8 was improved by the addition of log NT-proBNP (χ² = 32.6, P < .005), troponin-T (χ² = 39.6, P < .01), and AAPSE (χ² = 54.0, P < .0001). AAPSE is suggested as an indicator of transplant-free survival in patients with systemic light-chain amyloidosis. AAPSE provided significant incremental value to established staging models. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  18. Thermodynamic integration from classical to quantum mechanics.

    PubMed

    Habershon, Scott; Manolopoulos, David E

    2011-12-14

    We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the method and its relationship to a well-established method with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable. © 2011 American Institute of Physics

  19. Belief Propagation Algorithm for Portfolio Optimization Problems

    PubMed Central

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462

  20. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.

  1. Magneto-mechanical modeling of electrical steel sheets

    NASA Astrophysics Data System (ADS)

    Aydin, U.; Rasilo, P.; Martin, F.; Singh, D.; Daniel, L.; Belahcen, A.; Rekik, M.; Hubert, O.; Kouhia, R.; Arkkio, A.

    2017-10-01

    A simplified multiscale approach and a Helmholtz free energy based approach for modeling the magneto-mechanical behavior of electrical steel sheets are compared. The models are identified from uniaxial magneto-mechanical measurements of two different electrical steel sheets which show different magneto-elastic behavior. Comparison with the available measurement data of the materials shows that both models successfully model the magneto-mechanical behavior of one of the studied materials, whereas for the second material only the Helmholtz free energy based approach is successful.

  2. Cost-effectiveness of on-site versus off-site collaborative care for depression in rural FQHCs.

    PubMed

    Pyne, Jeffrey M; Fortney, John C; Mouden, Sip; Lu, Liya; Hudson, Teresa J; Mittal, Dinesh

    2015-05-01

    Collaborative care for depression in primary care settings is effective and cost-effective. However, there is minimal evidence to support the choice of on-site versus off-site models. This study examined the cost-effectiveness of on-site practice-based collaborative care (PBCC) versus off-site telemedicine-based collaborative care (TBCC) for depression in federally qualified health centers (FQHCs). In a multisite, randomized, pragmatic comparative cost-effectiveness trial, 19,285 patients were screened for depression, 2,863 (14.8%) screened positive, and 364 were enrolled. Telephone interview data were collected at baseline and at six, 12, and 18 months. Base case analysis used Arkansas FQHC health care costs, and secondary analysis used national cost estimates. Effectiveness measures were depression-free days and quality-adjusted life years (QALYs) derived from depression-free days, the 12-Item Short-Form Survey, and the Quality of Well-Being (QWB) Scale. Nonparametric bootstrap with replacement methods were used to generate an empirical joint distribution of incremental costs and QALYs and acceptability curves. The TBCC intervention resulted in more depression-free days and QALYs but at a greater cost than the PBCC intervention. The disease-specific (depression-free day) and generic (QALY) incremental cost-effectiveness ratios (ICERs) were below their respective ICER thresholds for implementation, suggesting that the TBCC intervention was more cost effective than the PBCC intervention. These results support the cost-effectiveness of TBCC in medically underserved primary care settings. Information about whether to insource (make) or outsource (buy) depression care management is important, given the current interest in patient-centered medical homes, value-based purchasing, and bundled payments for depression care.

  3. Biological intuition in alignment-free methods: response to Posada.

    PubMed

    Ragan, Mark A; Chan, Cheong Xin

    2013-08-01

    A recent editorial in Journal of Molecular Evolution highlights opportunities and challenges facing molecular evolution in the era of next-generation sequencing. Abundant sequence data should allow more-complex models to be fit at higher confidence, making phylogenetic inference more reliable and improving our understanding of evolution at the molecular level. However, concern that approaches based on multiple sequence alignment may be computationally infeasible for large datasets is driving the development of so-called alignment-free methods for sequence comparison and phylogenetic inference. The recent editorial characterized these approaches as model-free, not based on the concept of homology, and lacking in biological intuition. We argue here that alignment-free methods have not abandoned models or homology, and can be biologically intuitive.

  4. Hybrid Data Assimilation without Ensemble Filtering

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Akkraoui, Amal El

    2014-01-01

    The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations, to compensate for sampling and modeling errors, respectively, and to maintain the small-member ensemble solution close to the variational solution; we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we have found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is run at higher resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. Comparisons of this so-called filter-free hybrid procedure with an EnKF-based hybrid procedure and a control non-hybrid, traditional scheme show both hybrid strategies to provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.
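
    A minimal sketch of the filter-free idea, under obvious simplifications: members are built by coarsening a stand-in variational analysis to the ensemble grid, adding additive-inflation noise, and re-centering, with no EnKF step. The grid sizes and inflation amplitude are assumptions, and the coarsening is a crude 1-D interpolation rather than a real resolution change.

```python
import numpy as np

# Filter-free hybrid ensemble generation (schematic, not the GMAO code):
# coarsen the variational analysis, add additive inflation, re-center.
rng = np.random.default_rng(3)
n_hi, n_lo, n_members = 200, 50, 32

x_analysis_hi = np.sin(np.linspace(0, 2 * np.pi, n_hi))     # stand-in variational analysis

def to_ensemble_grid(x_hi, n_lo):
    """Crude 1-D coarsening in place of a real resolution change."""
    return np.interp(np.linspace(0, 1, n_lo), np.linspace(0, 1, len(x_hi)), x_hi)

x_analysis_lo = to_ensemble_grid(x_analysis_hi, n_lo)
additive_std = 0.1                                           # assumed inflation amplitude
members = x_analysis_lo + additive_std * rng.standard_normal((n_members, n_lo))

# Re-center so the ensemble mean equals the (coarsened) variational analysis.
members += x_analysis_lo - members.mean(axis=0)
print(np.allclose(members.mean(axis=0), x_analysis_lo))
```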

  5. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2006-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis - Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  6. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis-Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  7. Theoretical analysis of microring resonator-based biosensor with high resolution and free of temperature influence

    NASA Astrophysics Data System (ADS)

    Jian, Aoqun; Zou, Lu; Tang, Haiquan; Duan, Qianqian; Ji, Jianlong; Zhang, Qianwu; Zhang, Xuming; Sang, Shengbo

    2017-06-01

    The issue of thermal effects is unavoidable in ultrahigh refractive index (RI) measurement. A biosensor with a parallel-coupled dual-microring resonator configuration is proposed to achieve high-resolution measurement free of thermal effects. Based on the coupled-resonator-induced transparency effect, the design and principle of the biosensor are introduced in detail, and the performance of the sensor is assessed by simulations. Compared to a biosensor based on a single-ring configuration, the designed biosensor has a 10-fold increased Q value according to the simulation results, so the sensor is expected to achieve a particularly high resolution. In addition, by adopting an appropriate algorithm, the output signal of the sensor's mathematical model can be made free of thermal influence. This work is expected to have great application potential in areas requiring high-resolution RI measurement, such as biomedical discovery, virus screening, and drinking water safety.

  8. A new methodology for free wake analysis using curved vortex elements

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Teske, Milton E.; Quackenbush, Todd R.

    1987-01-01

    A method using curved vortex elements was developed for helicopter rotor free wake calculations. The Basic Curve Vortex Element (BCVE) is derived from the approximate Biot-Savart integration for a parabolic arc filament. When used in conjunction with a scheme to fit the elements along a vortex filament contour, this method has a significant advantage in overall accuracy and efficiency when compared to the traditional straight-line element approach. A theoretical and numerical analysis shows that free wake flows involving close interactions between filaments should utilize curved vortex elements in order to guarantee a consistent level of accuracy. The curved element method was implemented into a forward flight free wake analysis, featuring an adaptive far wake model that utilizes free wake information to extend the vortex filaments beyond the free wake regions. The curved vortex element free wake, coupled with this far wake model, exhibited rapid convergence, even in regions where the free wake and far wake turns are interlaced. Sample calculations are presented for tip vortex motion at various advance ratios for single and multiple blade rotors. Cross-flow plots reveal that the overall downstream wake flow resembles a trailing vortex pair. A preliminary assessment shows that the rotor downwash field is insensitive to element size, even for relatively large curved elements.

  9. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    PubMed

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft-tissue is best described by nonlinear continuum mechanics-based models, which then can be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit (GPU)-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make it particularly suitable for implementation on a parallel computing platform such as a graphics processing unit. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
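
    A small CPU-side sketch of the matrix-free preconditioned conjugate gradient idea discussed above: the operator is applied as a stencil so the system matrix is never assembled, which is what makes the method attractive on a GPU. The 1-D Laplacian stand-in and Jacobi preconditioner are illustrative choices, not the paper's FEM system.

```python
import numpy as np

def apply_A(x):
    """1-D Laplacian with Dirichlet ends, standing in for an FEM stiffness action."""
    y = 2.0 * x
    y[1:] -= x[:-1]
    y[:-1] -= x[1:]
    return y

def matrix_free_pcg(apply_A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned CG where A is only available as a function (matrix-free)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100
b = np.ones(n)
x = matrix_free_pcg(apply_A, b, M_inv_diag=np.full(n, 0.5))   # Jacobi: 1/diag = 1/2
print(np.linalg.norm(apply_A(x) - b))                          # residual ~0
```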

  10. Dual RBFNNs-Based Model-Free Adaptive Control With Aspen HYSYS Simulation.

    PubMed

    Zhu, Yuanming; Hou, Zhongsheng; Qian, Feng; Du, Wenli

    2017-03-01

    In this brief, we propose a new data-driven model-free adaptive control (MFAC) method with dual radial basis function neural networks (RBFNNs) for a class of discrete-time nonlinear systems. The main novelty is that it provides a systematic design method for the controller structure through the direct usage of I/O data, rather than using a first-principles model or an offline identified plant model. The controller structure is determined by an equivalent-dynamic-linearization representation of the ideal nonlinear controller, and the controller parameters are tuned by the pseudogradient information extracted from the I/O data of the plant, which can deal with the unknown nonlinear system. The stability of the closed-loop control system and the stability of the training process for the RBFNNs are guaranteed by rigorous theoretical analysis. Meanwhile, the effectiveness and the applicability of the proposed method are further demonstrated by a numerical example and an Aspen HYSYS simulation of a distillation column in a crude styrene production process.
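
    For reference, the compact-form MFAC update that such methods build on can be sketched in a few lines: a pseudo-partial-derivative is estimated from I/O increments only and used to drive the control update. The dual-RBFNN controller-structure design from the brief is not reproduced here, and the plant, gains, and setpoint below are arbitrary illustrations.

```python
import numpy as np

# Compact-form MFAC sketch: pseudo-partial-derivative (phi) estimation plus
# an incremental control law, using only I/O data of an "unknown" toy plant.
def plant(y_prev, u_prev):
    return 0.6 * np.sin(y_prev) + 1.2 * u_prev / (1 + u_prev ** 2) + 0.5 * u_prev

eta, mu, rho, lam = 0.8, 1.0, 0.6, 1.0   # illustrative tuning parameters
phi = 1.0                                # pseudo-partial-derivative estimate
y, u, u_prev = 0.0, 0.0, 0.0
y_ref = 1.0                              # constant setpoint

for k in range(200):
    y_new = plant(y, u)
    du, dy = u - u_prev, y_new - y
    # Update the pseudo-partial-derivative from I/O increments only.
    if abs(du) > 1e-6:
        phi += eta * du / (mu + du ** 2) * (dy - phi * du)
    if abs(phi) < 1e-4:
        phi = 1.0                        # reset to keep the estimate usable
    u_prev, y = u, y_new
    # Incremental control update driven by the tracking error.
    u = u + rho * phi / (lam + phi ** 2) * (y_ref - y)

print("final output vs. setpoint:", y, y_ref)
```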

  11. Mass-velocity and size-velocity distributions of ejecta cloud from shock-loaded tin surface using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Durand, Olivier; Soulard, Laurent

    2015-06-01

    The mass (volume and areal densities) versus velocity as well as the size versus velocity distributions of a shock-induced cloud of particles are investigated using large scale molecular dynamics (MD) simulations. A generic 3D tin crystal with a sinusoidal free surface roughness is set in contact with vacuum and shock-loaded so that it melts directly on shock. At the reflection of the shock wave onto the perturbations of the free surface, 2D sheets/jets of liquid metal are ejected. The simulations show that the distributions may be described by an analytical model based on the propagation of a fragmentation zone, from the tip of the sheets to the free surface, within which the kinetic energy of the atoms decreases as this zone comes closer to the free surface at late times. As this kinetic energy drives (i) the (self-similar) expansion of the zone once it has broken away from the sheet and (ii) the average size of the particles which result from fragmentation in the zone, the ejected mass and the average size of the particles progressively increase in the cloud as fragmentation occurs closer to the free surface. Though derived at nanometric scales, our model quantitatively reproduces experimental profiles and may help in their analysis.

  12. The Cost of an Additional Disability-Free Life Year for Older Americans: 1992–2005

    PubMed Central

    Cai, Liming

    2013-01-01

    Objective To estimate the cost of an additional disability-free life year for older Americans in 1992–2005. Data Source This study used the 1992–2005 Medicare Current Beneficiary Survey, a longitudinal survey of Medicare beneficiaries with a rotating panel design. Study Design This analysis used a multistate life table model to estimate probabilities of transition among a discrete set of health states (nondisabled, disabled, and dead) for two panels of older Americans in 1992 and 2002. Health spending incurred between annual health interviews was estimated by a generalized linear mixed model. Health status, including death, was simulated for each member of the panel using these transition probabilities; the associated health spending was cross-walked to the simulated health changes. Principal Findings Disability-free life expectancy (DFLE) increased significantly more than life expectancy during the study period. Assuming that 50 percent of the gains in DFLE between 1992 and 2005 were attributable to increases in spending, the average discounted cost per additional disability-free life year was $71,000. There were small differences between gender and racial/ethnic groups. Conclusions The cost of an additional disability-free life year was substantially below previous estimates based on mortality trends alone. PMID:22670874
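
    A toy version of the multistate simulation described above: a three-state (nondisabled/disabled/dead) Markov chain is simulated for a synthetic cohort and disability-free years are averaged. The transition probabilities are invented for illustration, not estimated from MCBS data, and no spending model is attached.

```python
import numpy as np

# Toy multistate microsimulation. States: 0 = nondisabled, 1 = disabled, 2 = dead.
# DFLE is approximated by the expected number of years spent in state 0.
rng = np.random.default_rng(4)
P = np.array([[0.90, 0.07, 0.03],   # annual transition probabilities (assumed)
              [0.15, 0.75, 0.10],
              [0.00, 0.00, 1.00]])

def simulate_dfle(n_people=5_000, horizon=40):
    years_free = 0
    for _ in range(n_people):
        state = 0
        for _ in range(horizon):
            if state == 2:
                break
            if state == 0:
                years_free += 1
            state = rng.choice(3, p=P[state])
    return years_free / n_people

print("simulated DFLE (years):", simulate_dfle())
```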

  13. Mathematical analysis of the boundary-integral based electrostatics estimation approximation for molecular solvation: exact results for spherical inclusions.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G

    2011-09-28

    We analyze the mathematically rigorous BIBEE (boundary-integral based electrostatics estimation) approximation of the mixed-dielectric continuum model of molecular electrostatics, using the analytically solvable case of a spherical solute containing an arbitrary charge distribution. Our analysis, which builds on Kirkwood's solution using spherical harmonics, clarifies important aspects of the approximation and its relationship to generalized Born models. First, our results suggest a new perspective for analyzing fast electrostatic models: the separation of variables between material properties (the dielectric constants) and geometry (the solute dielectric boundary and charge distribution). Second, we find that the eigenfunctions of the reaction-potential operator are exactly preserved in the BIBEE model for the sphere, which supports the use of this approximation for analyzing charge-charge interactions in molecular binding. Third, a comparison of BIBEE to the recent GBε theory suggests a modified BIBEE model capable of predicting electrostatic solvation free energies to within 4% of a full numerical Poisson calculation. This modified model leads to a projection-framework understanding of BIBEE and suggests opportunities for future improvements. © 2011 American Institute of Physics

  14. Modeling and Model Identification of Autonomous Underwater Vehicles

    DTIC Science & Technology

    2015-06-01

    setup, based on a quadrifilar pendulum, is developed to measure the moments of inertia of the vehicle. System identification techniques, based on...parametric models of the platforms: an individual channel excitation approach and a free decay pendulum test. The former is applied to THAUS, which can...excite the system in individual channels in four degrees of freedom. These results are verified in the free decay pendulum setup, which has the

  15. Nomograms Predicting Progression-Free Survival, Overall Survival, and Pelvic Recurrence in Locally Advanced Cervical Cancer Developed From an Analysis of Identifiable Prognostic Factors in Patients From NRG Oncology/Gynecologic Oncology Group Randomized Trials of Chemoradiotherapy

    PubMed Central

    Rose, Peter G.; Java, James; Whitney, Charles W.; Stehman, Frederick B.; Lanciano, Rachelle; Thomas, Gillian M.; DiSilvestro, Paul A.

    2015-01-01

    Purpose To evaluate the prognostic factors in locally advanced cervical cancer limited to the pelvis and develop nomograms for 2-year progression-free survival (PFS), 5-year overall survival (OS), and pelvic recurrence. Patients and Methods We retrospectively reviewed 2,042 patients with locally advanced cervical carcinoma enrolled onto Gynecologic Oncology Group clinical trials of concurrent cisplatin-based chemotherapy and radiotherapy. Nomograms for 2-year PFS, 5-year OS, and pelvic recurrence were created as visualizations of Cox proportional hazards regression models. The models were validated by bootstrap-corrected, relatively unbiased estimates of discrimination and calibration. Results Multivariable analysis identified prognostic factors including histology, race/ethnicity, performance status, tumor size, International Federation of Gynecology and Obstetrics stage, tumor grade, pelvic node status, and treatment with concurrent cisplatin-based chemotherapy. PFS, OS, and pelvic recurrence nomograms had bootstrap-corrected concordance indices of 0.62, 0.64, and 0.73, respectively, and were well calibrated. Conclusion Prognostic factors were used to develop nomograms for 2-year PFS, 5-year OS, and pelvic recurrence for locally advanced cervical cancer clinically limited to the pelvis treated with concurrent cisplatin-based chemotherapy and radiotherapy. These nomograms can be used to better estimate individual and collective outcomes. PMID:25732170

  16. Stabilizing effect of resistivity towards ELM-free H-mode discharge in lithium-conditioned NSTX

    NASA Astrophysics Data System (ADS)

    Banerjee, Debabrata; Zhu, Ping; Maingi, Rajesh

    2017-07-01

    Linear stability analysis of the National Spherical Torus Experiment (NSTX) Li-conditioned ELM-free H-mode equilibria is carried out in the context of the extended magneto-hydrodynamic (MHD) model in NIMROD. The purpose is to investigate the physical cause behind edge localized mode (ELM) suppression in experiment after the Li-coating of the divertor and the first wall of the NSTX tokamak. Besides ideal MHD modeling, including finite-Larmor radius effect and two-fluid Hall and electron diamagnetic drift contributions, a non-ideal resistivity model is employed, taking into account the increase of Z_eff after Li-conditioning in ELM-free H-mode. Unlike an earlier conclusion from an eigenvalue code analysis of these equilibria, NIMROD results find that after reduced recycling from divertor plates, profile modification is necessary but insufficient to explain the mechanism behind complete ELM suppression in ideal two-fluid MHD. After considering the higher plasma resistivity due to higher Z_eff, the complete stabilization could be explained. A thorough analysis of both pre-lithium ELMy and with-lithium ELM-free cases using ideal and non-ideal MHD models is presented, after accurately including a vacuum-like cold halo region in NIMROD to investigate ELMs.

  17. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even more than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.

  18. A Method of Relating General Circulation Model Simulated Climate to the Observed Local Climate. Part I: Seasonal Statistics.

    NASA Astrophysics Data System (ADS)

    Karl, Thomas R.; Wang, Wei-Chyung; Schlesinger, Michael E.; Knight, Richard W.; Portman, David

    1990-10-01

    Important surface observations such as the daily maximum and minimum temperature, daily precipitation, and cloud ceilings often have localized characteristics that are difficult to reproduce with the current resolution and the physical parameterizations in state-of-the-art General Circulation climate Models (GCMs). Many of the difficulties can be partially attributed to mismatches in scale, local topography, regional geography, and boundary conditions between models and surface-based observations. Here, we present a method, called climatological projection by model statistics (CPMS), to relate GCM grid-point free-atmosphere statistics, the predictors, to these important local surface observations. The method can be viewed as a generalization of the model output statistics (MOS) and perfect prog (PP) procedures used in numerical weather prediction (NWP) models. It consists of the application of three statistical methods: 1) principal component analysis (PCA), 2) canonical correlation, and 3) inflated regression analysis. The PCA reduces the redundancy of the predictors. The canonical correlation is used to develop simultaneous relationships between linear combinations of the predictors, the canonical variables, and the surface-based observations. Finally, inflated regression is used to relate the important canonical variables to each of the surface-based observed variables. We demonstrate that even an early version of the Oregon State University two-level atmospheric GCM (with prescribed sea surface temperature) produces free-atmosphere statistics that can, when standardized using the model's internal means and variances (the MOS-like version of CPMS), closely approximate the observed local climate. When the model data are standardized by the observed free-atmosphere means and variances (the PP version of CPMS), however, the model does not reproduce the observed surface climate as well. Our results indicate that in the MOS-like version of CPMS the differences between the output of a ten-year GCM control run and the surface-based observations are often smaller than the differences between the observations of two ten-year periods. Such positive results suggest that GCMs may already contain important climatological information that can be used to infer the local climate.
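
    A schematic CPMS-like pipeline on synthetic data is sketched below: PCA on the free-atmosphere predictors, canonical correlation between the leading components and the station observations, then a regression back to each station variable. Ordinary least squares is substituted for the inflated regression step, and all data are synthetic, so this only illustrates the flow of the three statistical steps.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LinearRegression

# Schematic three-step downscaling pipeline on synthetic data (OLS stands in
# for inflated regression; nothing here comes from the original GCM study).
rng = np.random.default_rng(5)
n_days, n_gridpoints, n_stations = 1000, 40, 5
X = rng.standard_normal((n_days, n_gridpoints))               # GCM free-atmosphere predictors
Y = (X[:, :n_stations] @ rng.standard_normal((n_stations, n_stations))
     + 0.5 * rng.standard_normal((n_days, n_stations)))       # synthetic local observations

X_pc = PCA(n_components=10).fit_transform(X)                   # step 1: reduce redundancy
cca = CCA(n_components=3).fit(X_pc, Y)                         # step 2: canonical variables
X_cv, _ = cca.transform(X_pc, Y)
model = LinearRegression().fit(X_cv, Y)                        # step 3: regression to stations
print("R^2 of downscaled fit:", model.score(X_cv, Y))
```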

  19. Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning.

    PubMed

    Doll, Bradley B; Bath, Kevin G; Daw, Nathaniel D; Frank, Michael J

    2016-01-27

    Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by "model-free" learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by "model-based" learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes. Research implicates a dopamine-dependent striatal learning mechanism in the former type of choice. Although recent work has indicated that dopamine is also involved in flexible, goal-directed decision-making, it remains unclear whether it also contributes via striatum or via the dopamine-dependent working memory function of prefrontal cortex. We examined genetic indices of dopamine function in these regions and their relation to the two choice strategies. We found that striatal dopamine function related most clearly to the reflexive strategy, as previously shown, and that prefrontal dopamine related most clearly to the flexible strategy. These findings suggest that dissociable brain regions support dissociable choice strategies. Copyright © 2016 the authors 0270-6474/16/361211-12$15.00/0.

  20. Hierarchical folding free energy landscape of HP35 revealed by most probable path clustering.

    PubMed

    Jain, Abhinav; Stock, Gerhard

    2014-07-17

    Adopting extensive molecular dynamics simulations of villin headpiece protein (HP35) by Shaw and co-workers, a detailed theoretical analysis of the folding of HP35 is presented. The approach is based on the recently proposed most probable path algorithm which identifies the metastable states of the system, combined with dynamical coring of these states in order to obtain a consistent Markov state model. The method facilitates the construction of a dendrogram associated with the folding free-energy landscape of HP35, which reveals a hierarchical funnel structure and shows that the native state is rather a kinetic trap than a network hub. The energy landscape of HP35 consists of the entropic unfolded basin U, where the prestructuring of the protein takes place, the intermediate basin I, which is connected to U via the rate-limiting U → I transition state reflecting the formation of helix-1, and the native basin N, containing a state close to the NMR structure and a native-like state that exhibits enhanced fluctuations of helix-3. The model is in line with recent experimental observations that the intermediate and native states differ mostly in their dynamics (locked vs unlocked states). Employing dihedral angle principal component analysis, subdiffusive motion on a multidimensional free-energy surface is found.

  1. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.

  2. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality of life data and recurrence free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined to a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.

  3. Conformational Transitions and Convergence of Absolute Binding Free Energy Calculations

    PubMed Central

    Lapelosa, Mauro; Gallicchio, Emilio; Levy, Ronald M.

    2011-01-01

    The Binding Energy Distribution Analysis Method (BEDAM) is employed to compute the standard binding free energies of a series of ligands to a FK506 binding protein (FKBP12) with implicit solvation. Binding free energy estimates are in reasonably good agreement with experimental affinities. The conformations of the complexes identified by the simulations are in good agreement with crystallographic data, which was not used to restrain ligand orientations. The BEDAM method is based on λ-hopping Hamiltonian parallel Replica Exchange (HREM) molecular dynamics conformational sampling, the OPLS-AA/AGBNP2 effective potential, and multi-state free energy estimators (MBAR). Achieving converged and accurate results depends on all of these elements of the calculation. Convergence of the binding free energy is tied to the level of convergence of binding energy distributions at critical intermediate states where bound and unbound states are at equilibrium, and where the rate of binding/unbinding conformational transitions is maximal. This finding mirrors similar observations in the context of order/disorder transitions as for example in protein folding. Insights concerning the physical mechanism of ligand binding and unbinding are obtained. Convergence for the largest FK506 ligand is achieved only after imposing strict conformational restraints, which, however, require accurate prior knowledge of the structure of the complex. The analytical AGBNP2 model is found to underestimate the magnitude of the hydrophobic driving force towards binding in these systems characterized by loosely packed protein-ligand binding interfaces. Rescoring of the binding energies using a numerical surface area model corrects this deficiency. This study illustrates the complex interplay between energy models, exploration of conformational space, and free energy estimators needed to obtain robust estimates from binding free energy calculations. PMID:22368530

  4. A Model-Free Diagnostic for Single-Peakedness of Item Responses Using Ordered Conditional Means

    ERIC Educational Resources Information Center

    Polak, Marike; De Rooij, Mark; Heiser, Willem J.

    2012-01-01

    In this article we propose a model-free diagnostic for single-peakedness (unimodality) of item responses. Presuming a unidimensional unfolding scale and a given item ordering, we approximate item response functions of all items based on ordered conditional means (OCM). The proposed OCM methodology is based on Thurstone & Chave's (1929) "criterion…

  5. The accuracy comparison between ARFIMA and singular spectrum analysis for forecasting the sales volume of motorcycle in Indonesia

    NASA Astrophysics Data System (ADS)

    Sitohang, Yosep Oktavianus; Darmawan, Gumgum

    2017-08-01

    This research compares two time series forecasting models for predicting the sales volume of motorcycles in Indonesia. The first forecasting model used in this paper is the Autoregressive Fractionally Integrated Moving Average (ARFIMA). ARFIMA can handle non-stationary data and outperforms ARIMA in forecasting accuracy on long-memory data. This is because the fractional difference parameter can explain the correlation structure in data that has short memory, long memory, or even both structures simultaneously. The second forecasting model is singular spectrum analysis (SSA). The advantage of this technique is that it is able to decompose time series data into the classic components, i.e. trend, cyclical, seasonal, and noise components, which makes its forecasting accuracy significantly better. Furthermore, SSA is a model-free technique, so it is likely to have a very wide range of applications. Selection of the best model is based on the lowest MAPE value. Based on the calculation, the best ARFIMA model is ARFIMA(3, d = 0.63, 0), with a MAPE of 22.95 percent. For SSA, with a window length of 53 and 4 groups of reconstructed data, the resulting MAPE is 13.57 percent. Based on these results it is concluded that SSA produces better forecasting accuracy.
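
    The model-selection criterion quoted above, MAPE, is simple enough to state as a one-line formula. The sketch below computes it on placeholder numbers; the series shown is not the motorcycle-sales data.

```python
# Minimal sketch of the MAPE criterion used to rank the ARFIMA and SSA forecasts.
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual   = [110, 125, 140, 150]   # placeholder observations
forecast = [100, 130, 135, 160]   # placeholder forecasts
print(f"MAPE = {mape(actual, forecast):.2f}%")   # lower is better
```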

  6. Free vibration analysis of single-walled boron nitride nanotubes based on a computational mechanics framework

    NASA Astrophysics Data System (ADS)

    Yan, J. W.; Tong, L. H.; Xiang, Ping

    2017-12-01

    Free vibration behaviors of single-walled boron nitride nanotubes are investigated using a computational mechanics approach. The Tersoff-Brenner potential is used to reflect the atomic interaction between boron and nitrogen atoms. The higher-order Cauchy-Born rule is employed to establish the constitutive relationship for single-walled boron nitride nanotubes on the basis of higher-order gradient continuum theory. It bridges the gap between the nanoscale lattice structures and a continuum body. A mesh-free modeling framework is constructed, using the moving Kriging interpolation which automatically satisfies the higher-order continuity, to implement numerical simulation in order to match the higher-order constitutive model. In comparison with conventional atomistic simulation methods, the established atomistic-continuum multi-scale approach possesses advantages in tackling atomic structures with high accuracy and high efficiency. Free vibration characteristics of single-walled boron nitride nanotubes with different boundary conditions, tube chiralities, lengths and radii are examined in case studies. In this research, it is pointed out that a critical radius exists for the evaluation of fundamental vibration frequencies of boron nitride nanotubes; opposite trends can be observed prior to and beyond the critical radius. Simulation results are presented and discussed.

  7. Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data

    PubMed Central

    Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-01-01

    Because solder joint interconnections are the weak points of microelectronic packaging, their reliability has a great influence on the reliability of the entire packaging structure. Based on an accelerated life test, the reliability assessment and life prediction of lead-free solder joints using the Weibull distribution are investigated. The type-I interval censored lifetime data were collected from a thermal cycling test, which was implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of lead-free solder joints is predicted by using a modified Engelmaier fatigue life model and a type-I censored data processing method. Then, the Pan model is employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and the ones calculated directly by Matlab and Minitab is conducted to demonstrate the practicability and effectiveness of the proposed method. Finally, failure analysis and microstructure evolution of lead-free solders are carried out to provide useful guidance for the regular maintenance, replacement of substructure, and subsequent processing of electronic products. PMID:25121138
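
    As a rough illustration of how a Weibull life distribution can be fitted to type-I interval-censored cycle counts, the sketch below maximizes the censored likelihood with SciPy. The inspection intervals, starting values, and clipping constant are assumptions made for the example, not values from the paper.

```python
# Hedged sketch: Weibull MLE from type-I interval-censored thermal-cycling data.
import numpy as np
from scipy.optimize import minimize

# (lower, upper) inspection interval in which each joint failed; np.inf = survived the test
intervals = np.array([(500, 1000), (1000, 1500), (1500, 2000),
                      (1000, 1500), (2000, np.inf), (2000, np.inf)], dtype=float)

def weibull_cdf(t, beta, eta):
    return 1.0 - np.exp(-(t / eta) ** beta)

def neg_log_lik(params):
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf
    lo, hi = intervals[:, 0], intervals[:, 1]
    p = np.where(np.isinf(hi),
                 1.0 - weibull_cdf(lo, beta, eta),                       # right-censored survivors
                 weibull_cdf(hi, beta, eta) - weibull_cdf(lo, beta, eta))  # failed inside interval
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(neg_log_lik, x0=[2.0, 1500.0], method="Nelder-Mead")
beta_hat, eta_hat = res.x
print(f"shape = {beta_hat:.2f}, scale = {eta_hat:.0f} cycles")
```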

  8. Cost-effectiveness analysis in the Spanish setting of the PEAK trial of panitumumab plus mFOLFOX6 compared with bevacizumab plus mFOLFOX6 for first-line treatment of patients with wild-type RAS metastatic colorectal cancer.

    PubMed

    Rivera, Fernando; Valladares, Manuel; Gea, Salvador; López-Martínez, Noemí

    2017-06-01

    To assess the cost-effectiveness of panitumumab in combination with mFOLFOX6 (oxaliplatin, 5-fluorouracil, and leucovorin) vs bevacizumab in combination with mFOLFOX6 as first-line treatment of patients with wild-type RAS metastatic colorectal cancer (mCRC) in Spain. A semi-Markov model was developed including the following health states: Progression free; Progressive disease: Treat with best supportive care; Progressive disease: Treat with subsequent active therapy; Attempted resection of metastases; Disease free after metastases resection; Progressive disease: after resection and relapse; and Death. Parametric survival analyses of patient-level progression free survival and overall survival data from the PEAK Phase II clinical trial were used to estimate health state transitions. Additional data from the PEAK trial were considered for the dose and duration of therapy, the use of subsequent therapy, the occurrence of adverse events, and the incidence and probability of time to metastasis resection. Utility weightings were calculated from patient-level data from panitumumab trials evaluating first-, second-, and third-line treatments. The study was performed from the Spanish National Health System (NHS) perspective including only direct costs. A life-time horizon was applied. Probabilistic sensitivity analyses and scenario sensitivity analyses were performed to assess the robustness of the model. Based on the PEAK trial, which demonstrated greater efficacy of panitumumab vs bevacizumab, both in combination with mFOLFOX6 first-line in wild-type RAS mCRC patients, the estimated incremental cost per life-year gained was €16,567 and the estimated incremental cost per quality-adjusted life year gained was €22,794. The sensitivity analyses showed the model was robust to alternative parameters and assumptions. The analysis was based on a simulation model and, therefore, the results should be interpreted cautiously. Based on the PEAK Phase II clinical trial and taking into account Spanish costs, the results of the analysis showed that first-line treatment of mCRC with panitumumab + mFOLFOX6 could be considered a cost-effective option compared with bevacizumab + mFOLFOX6 for the Spanish NHS.

  9. Free Energy-Based Virtual Screening and Optimization of RNase H Inhibitors of HIV-1 Reverse Transcriptase.

    PubMed

    Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio

    2016-09-30

    We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.

  10. Development of a Leader Training Model and System

    DTIC Science & Technology

    1980-01-01

    recognition--are (a) the free-play, two-sided engagement which incorporates the crucial elements of complexity and uncertainty, (b) objective and real-time...changing conditions, many created by opposition action. In such a dynamic free-play environment, further complicated by the confounding conditions...the ISD model with its emphasis on task analysis was considered less than adequate for the combat arms. The free-play character of the combat setting

  11. Community structure and scale-free collections of Erdős-Rényi graphs.

    PubMed

    Seshadhri, C; Kolda, Tamara G; Pinar, Ali

    2012-05-01

    Community structure plays a significant role in the analysis of social networks and similar graphs, yet this structure is little understood and not well captured by most models. We formally define a community to be a subgraph that is internally highly connected and has no deeper substructure. We use tools of combinatorics to show that any such community must contain a dense Erdős-Rényi (ER) subgraph. Based on mathematical arguments, we hypothesize that any graph with a heavy-tailed degree distribution and community structure must contain a scale-free collection of dense ER subgraphs. These theoretical observations corroborate well with empirical evidence. From this, we propose the Block Two-Level Erdős-Rényi (BTER) model, and demonstrate that it accurately captures the observable properties of many real-world social networks.

  12. Emergence of Scale-Free Leadership Structure in Social Recommender Systems

    PubMed Central

    Zhou, Tao; Medo, Matúš; Cimini, Giulio; Zhang, Zi-Ke; Zhang, Yi-Cheng

    2011-01-01

    The study of the organization of social networks is important for the understanding of opinion formation, rumor spreading, and the emergence of trends and fashion. This paper reports empirical analysis of networks extracted from four leading sites with social functionality (Delicious, Flickr, Twitter and YouTube) and shows that they all display a scale-free leadership structure. To reproduce this feature, we propose an adaptive network model driven by social recommending. Artificial agent-based simulations of this model highlight a “good get richer” mechanism where users with broad interests and good judgments are likely to become popular leaders for the others. Simulations also indicate that the studied social recommendation mechanism can gradually improve the user experience by adapting to tastes of its users. Finally we outline implications for real online resource-sharing systems. PMID:21857891

  13. Stress enhances model-free reinforcement learning only after negative outcome

    PubMed Central

    Lee, Daeyeol

    2017-01-01

    Previous studies found that stress shifts behavioral control by promoting habits while decreasing goal-directed behaviors during reward-based decision-making. It is, however, unclear how stress disrupts the relative contribution of the two systems controlling reward-seeking behavior, i.e. model-free (or habit) and model-based (or goal-directed). Here, we investigated whether stress biases the contribution of model-free and model-based reinforcement learning processes differently depending on the valence of outcome, and whether stress alters the learning rate, i.e., how quickly information from the new environment is incorporated into choices. Participants were randomly assigned to either a stress or a control condition, and performed a two-stage Markov decision-making task in which the reward probabilities underwent periodic reversals without notice. We found that stress increased the contribution of model-free reinforcement learning only after negative outcome. Furthermore, stress decreased the learning rate. The results suggest that stress diminishes one’s ability to make adaptive choices in multiple aspects of reinforcement learning. This finding has implications for understanding how stress facilitates maladaptive habits, such as addictive behavior, and other dysfunctional behaviors associated with stress in clinical and educational contexts. PMID:28723943

  14. Stress enhances model-free reinforcement learning only after negative outcome.

    PubMed

    Park, Heyeon; Lee, Daeyeol; Chey, Jeanyung

    2017-01-01

    Previous studies found that stress shifts behavioral control by promoting habits while decreasing goal-directed behaviors during reward-based decision-making. It is, however, unclear how stress disrupts the relative contribution of the two systems controlling reward-seeking behavior, i.e. model-free (or habit) and model-based (or goal-directed). Here, we investigated whether stress biases the contribution of model-free and model-based reinforcement learning processes differently depending on the valence of outcome, and whether stress alters the learning rate, i.e., how quickly information from the new environment is incorporated into choices. Participants were randomly assigned to either a stress or a control condition, and performed a two-stage Markov decision-making task in which the reward probabilities underwent periodic reversals without notice. We found that stress increased the contribution of model-free reinforcement learning only after negative outcome. Furthermore, stress decreased the learning rate. The results suggest that stress diminishes one's ability to make adaptive choices in multiple aspects of reinforcement learning. This finding has implications for understanding how stress facilitates maladaptive habits, such as addictive behavior, and other dysfunctional behaviors associated with stress in clinical and educational contexts.

  15. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    PubMed

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SD(SE)): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SD(SE): 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice.
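
    The heart of such a marker-free approach is a rigid registration between the two segmented scans, from which the translation and rotation parameters are read off. The sketch below illustrates that step with the Kabsch algorithm on a synthetic point cloud; the migration magnitudes and point counts are assumptions for the example, and the actual method registers CT volumes rather than point lists.

```python
# Illustrative sketch of the rigid-registration idea underlying CTSA (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
P = rng.normal(size=(100, 3))                      # "stem" points in scan 1

angle = np.deg2rad(0.15)                           # small, loosening-scale rotation about z
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
t_true = np.array([0.05, -0.02, 0.1])              # mm-scale migration (assumed)
Q = P @ Rz.T + t_true                              # same points in scan 2

# Kabsch: centre both clouds, SVD of the cross-covariance, rebuild R and t
Pc, Qc = P - P.mean(0), Q - Q.mean(0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
R_est = Vt.T @ D @ U.T
t_est = Q.mean(0) - P.mean(0) @ R_est.T
print("recovered translation (mm):", np.round(t_est, 3))
```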

  16. Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.

    PubMed

    Borbély, Bence J; Szolgay, Péter

    2017-01-17

    Model-based analysis of human upper limb movements is of key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive, and therefore off-line, solutions to calculate the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases when a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.
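
    Conceptually, a joint angle follows from the relative orientation of two adjacent segments, each supplied by an inertial sensor. The sketch below shows that step with SciPy's rotation utilities; the sensor orientations, axis conventions, and Euler sequence are assumptions for illustration and do not reproduce the paper's OpenSim-specific algorithm.

```python
# Illustrative sketch: recover an elbow-like joint angle from two segment orientations.
import numpy as np
from scipy.spatial.transform import Rotation as R

# World-frame orientations of upper arm and forearm
# (built from Euler angles here; IMUs would supply quaternions).
q_upper   = R.from_euler("xyz", [0, 0, 30], degrees=True)
q_forearm = R.from_euler("xyz", [0, 0, 110], degrees=True)

# Relative rotation of the forearm w.r.t. the upper arm, then an Euler decomposition
q_rel = q_upper.inv() * q_forearm
flexion, *_ = q_rel.as_euler("zyx", degrees=True)
print(f"elbow flexion ~ {flexion:.1f} deg")   # ~80 deg for these assumed orientations
```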

  17. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems based on the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.

  18. Physics-based statistical learning approach to mesoscopic model selection.

    PubMed

    Taverniers, Søren; Haut, Terry S; Barros, Kipton; Alexander, Francis J; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
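
    The cross-validation idea used above, scoring each candidate model complexity on data held out from fitting, can be illustrated generically. The sketch below does so for polynomial regression with scikit-learn; the data and the set of complexities are placeholders, not the Ising/sGLE setting.

```python
# Generic cross-validation sketch for selecting model complexity.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x[:, 0]) + 0.2 * rng.normal(size=200)   # stand-in "training" trajectories

for degree in (1, 3, 5, 9):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, x, y, cv=5).mean()   # out-of-sample R^2
    print(f"degree {degree}: CV R^2 = {score:.3f}")
```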

  19. When Does Model-Based Control Pay Off?

    PubMed Central

    2016-01-01

    Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to “model-free” and “model-based” strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values are simply read from a look-up table constructed through trial and error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, does not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand. PMID:27564094
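
    The look-up-table versus planning distinction can be made concrete with a toy two-step task: model-free values are cached by temporal-difference updates, while model-based values are recomputed from a transition model. The sketch below is purely illustrative; the learning rate, transition matrix, and weighting parameter are assumed values, not fits to behavioral data.

```python
# Toy sketch of model-free (cached) vs model-based (planned) action values.
import numpy as np

alpha = 0.3                      # learning rate (assumed)
q_mf  = np.zeros(2)              # model-free values of the two first-stage actions
T     = np.array([[0.7, 0.3],    # P(second-stage state | first-stage action), assumed known
                  [0.3, 0.7]])
r_hat = np.zeros(2)              # learned mean reward of each second-stage state

def trial(action, state, reward):
    """One trial: TD update for the model-free cache, reward-model update for planning."""
    q_mf[action] += alpha * (reward - q_mf[action])   # cheap, cached
    r_hat[state] += alpha * (reward - r_hat[state])   # feeds the model-based planner

# example trials: (first-stage action, second-stage state, reward)
for a, s, r in [(0, 0, 1.0), (0, 1, 0.0), (1, 1, 1.0), (1, 1, 1.0)]:
    trial(a, s, r)

q_mb = T @ r_hat                 # model-based values, recomputed by planning
w = 0.5                          # weighting parameter, often fit to behaviour
q_hybrid = w * q_mb + (1 - w) * q_mf
print("model-free:", q_mf, "model-based:", q_mb, "hybrid:", q_hybrid)
```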

  20. Coupling compositional liquid gas Darcy and free gas flows at porous and free-flow domains interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masson, R., E-mail: roland.masson@unice.fr; Team COFFEE INRIA Sophia Antipolis Méditerranée; Trenty, L., E-mail: laurent.trenty@andra.fr

    This paper proposes an efficient splitting algorithm to solve coupled liquid gas Darcy and free gas flows at the interface between a porous medium and a free-flow domain. This model is compared to the reduced model introduced in [6] using a 1D approximation of the gas free flow. For that purpose, the gas molar fraction diffusive flux at the interface in the free-flow domain is approximated by a two point flux approximation based on a low-frequency diagonal approximation of a Steklov–Poincaré type operator. The splitting algorithm and the reduced model are applied in particular to the modelling of the mass exchanges at the interface between the storage and the ventilation galleries in radioactive waste deposits.

  1. Structural-change localization and monitoring through a perturbation-based inverse problem.

    PubMed

    Roux, Philippe; Guéguen, Philippe; Baillet, Laurent; Hamze, Alaa

    2014-11-01

    Structural-change detection and characterization, or structural-health monitoring, is generally based on modal analysis, for detection, localization, and quantification of changes in structure. Classical methods combine both variations in frequencies and mode shapes, which require accurate and spatially distributed measurements. In this study, the detection and localization of a local perturbation are assessed by analysis of frequency changes (in the fundamental mode and overtones) that are combined with a perturbation-based linear inverse method and a deconvolution process. This perturbation method is applied first to a bending beam with the change considered as a local perturbation of the Young's modulus, using a one-dimensional finite-element model for modal analysis. Localization is successful, even for extended and multiple changes. In a second step, the method is numerically tested under ambient-noise vibration from the beam support with local changes that are shifted step by step along the beam. The frequency values are revealed using the random decrement technique that is applied to the time-evolving vibrations recorded by one sensor at the free extremity of the beam. Finally, the inversion method is experimentally demonstrated at the laboratory scale with data recorded at the free end of a Plexiglas beam attached to a metallic support.

  2. Transverse Vibration of Tapered Single-Walled Carbon Nanotubes Embedded in Viscoelastic Medium

    NASA Astrophysics Data System (ADS)

    Lei, Y. J.; Zhang, D. P.; Shen, Z. B.

    2017-12-01

    Based on the nonlocal theory, Euler-Bernoulli beam theory and Kelvin viscoelastic foundation model, free transverse vibration is studied for a tapered viscoelastic single-walled carbon nanotube (visco-SWCNT) embedded in a viscoelastic medium. Firstly, the governing equations for vibration analysis are established. And then, we derive the natural frequencies in closed form for SWCNTs with arbitrary boundary conditions by applying transfer function method and perturbation method. Numerical results are also presented to discuss the effects of nonlocal parameter, relaxation time and taper parameter of SWCNTs, and material property parameters of the medium. This study demonstrates that the proposed model is available for vibration analysis of the tapered SWCNTs-viscoelastic medium coupling system.

  3. Theoretical foundations for finite-time transient stability and sensitivity analysis of power systems

    NASA Astrophysics Data System (ADS)

    Dasgupta, Sambarta

    Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of the advancement in sensor technology in the form of phasor measurement units (PMUs). The advancement in sensor technology has provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently a problem of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence the analysis tools for transient stability, are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite-time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite-time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rates are identified with the stability boundaries. These stability boundaries are used for the computation of the stability margin. We have used this theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high-resolution time series data from the PMUs for stability prediction. The problem of sensitivity analysis of a power system, subjected to changes or uncertainty in load parameters and network topology, is also studied using the theory of normally hyperbolic manifolds. The sensitivity analysis is used for the identification and rank ordering of the critical interactions and parameters in the power network. The sensitivity analysis is carried out both in finite time and asymptotically. One of the distinguishing features of the asymptotic sensitivity analysis is that the asymptotic dynamics of the system is assumed to be a periodic orbit. For the asymptotic sensitivity analysis we employ a combination of tools from ergodic theory and the geometric theory of dynamical systems.
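
    The finite-time expansion rate underlying these certificates is closely related to the finite-time Lyapunov exponent, which can be estimated by differentiating the flow map numerically. The sketch below does this for a damped pendulum as a stand-in for a power-system model; the horizon, perturbation size, and initial conditions are assumptions chosen for illustration.

```python
# Hedged sketch of a finite-time Lyapunov exponent for a toy dynamical system.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, x):
    theta, omega = x
    return [omega, -np.sin(theta) - 0.1 * omega]

def flow(x0, T=2.0):
    sol = solve_ivp(pendulum, (0.0, T), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

def ftle(x0, T=2.0, eps=1e-6):
    # finite-difference approximation of the flow-map Jacobian
    J = np.column_stack([(flow(x0 + eps * e, T) - flow(x0 - eps * e, T)) / (2 * eps)
                         for e in np.eye(2)])
    sigma_max = np.linalg.svd(J, compute_uv=False)[0]
    return np.log(sigma_max) / T

print(f"FTLE near the stable equilibrium: {ftle([0.1, 0.0]):.3f}")
print(f"FTLE near the separatrix:         {ftle([3.0, 0.0]):.3f}")
```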

  4. A metabolomics-driven approach to predict cocoa product consumption by designing a multimetabolite biomarker model in free-living subjects from the PREDIMED study.

    PubMed

    Garcia-Aloy, Mar; Llorach, Rafael; Urpi-Sarda, Mireia; Jáuregui, Olga; Corella, Dolores; Ruiz-Canela, Miguel; Salas-Salvadó, Jordi; Fitó, Montserrat; Ros, Emilio; Estruch, Ramon; Andres-Lacueva, Cristina

    2015-02-01

    The aim of the current study was to apply an untargeted metabolomics strategy to characterize a model of cocoa intake biomarkers in a free-living population. An untargeted HPLC-q-ToF-MS based metabolomics approach was applied to human urine from 32 consumers of cocoa or derived products (CC) and 32 matched control subjects with no consumption of cocoa products (NC). The multivariate statistical analysis (OSC-PLS-DA) showed clear differences between CC and NC groups. The discriminant biomarkers identified were mainly related to the metabolic pathways of theobromine and polyphenols, as well as to cocoa processing. Consumption of cocoa products was also associated with reduced urinary excretions of methylglutarylcarnitine, which could be related to effects of cocoa exposure on insulin resistance. To improve the prediction of cocoa consumption, a combined urinary metabolite model was constructed. ROC curves were performed to evaluate the model and individual metabolites. The AUC values (95% CI) for the model were 95.7% (89.8-100%) and 92.6% (81.9-100%) in training and validation sets, respectively, whereas the AUCs for individual metabolites were <90%. The metabolic signature of cocoa consumption in free-living subjects reveals that combining different metabolites as biomarker models improves prediction of dietary exposure to cocoa. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Sybil--efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
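
    At its core, flux-balance analysis of the kind sybil performs is a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The sketch below states that program for a three-reaction toy network with SciPy rather than sybil's R interface; the network, bounds, and objective are invented for illustration.

```python
# Flux-balance analysis written as a small linear program (toy network, not E. coli).
import numpy as np
from scipy.optimize import linprog

# stoichiometric matrix S (metabolites x reactions): A_in -> A, A -> B, B -> out
S = np.array([[ 1, -1,  0],    # metabolite A
              [ 0,  1, -1]])   # metabolite B
bounds = [(0, 10), (0, 10), (0, 10)]      # flux bounds for each reaction (assumed)
c = np.array([0, 0, -1])                  # maximise flux through reaction 3 (minimise -v3)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)           # steady state S v = 0, objective flux = 10
```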

  6. Using Tracker to understand ‘toss up’ and free fall motion: a case study

    NASA Astrophysics Data System (ADS)

    Wee, Loo Kang; Kia Tan, Kim; Leong, Tze Kwang; Tan, Ching

    2015-07-01

    This paper reports the use of Tracker as a computer-based learning tool to support effective learning and teaching of ‘toss up’ and free fall motion for beginning secondary three (15 year-old) students. The case study involved (N = 123) students from express pure physics classes at a mainstream school in Singapore. We used eight multiple-choice questions pre- and post-test to gauge the impact on learning. The experimental group showed learning gains of d = 0.79 ± 0.23 (large effect) for Cohen’s d effect size analysis, and gains with a gradient of g_total = 0.42 ± 0.08 (medium gain) above the traditional baseline value of g_non-interactive = 0.23 for Hake’s normalized gain regression analysis. This applied to all of the teachers and students who participated in this study. Our initial research findings suggest that allowing learners to relate abstract physics concepts to real life through coupling traditional video analysis with video modelling might be an innovative and effective method for teaching and learning about free fall motion.
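
    The two effect-size measures cited above are short calculations. The sketch below works them through on made-up pre/post scores for an eight-point test; the numbers are placeholders, not the study's data.

```python
# Worked sketch of Cohen's d and Hake's normalized gain on placeholder scores.
import numpy as np

pre  = np.array([3, 4, 2, 5, 3, 4, 3, 2], float)
post = np.array([6, 7, 5, 7, 6, 6, 5, 5], float)

# Cohen's d: mean gain divided by the pooled standard deviation
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
d = (post.mean() - pre.mean()) / pooled_sd

# Hake's normalized gain: actual gain as a fraction of the maximum possible gain
max_score = 8                      # assumed: one point per multiple-choice item
g = (post.mean() - pre.mean()) / (max_score - pre.mean())

print(f"Cohen's d = {d:.2f}, normalized gain g = {g:.2f}")
```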

  7. Refined hierarchical kinematics quasi-3D Ritz models for free vibration analysis of doubly curved FGM shells and sandwich shells with FGM core

    NASA Astrophysics Data System (ADS)

    Fazzolari, Fiorenzo A.; Carrera, Erasmo

    2014-02-01

    In this paper, the Ritz minimum energy method, based on the use of the Principle of Virtual Displacements (PVD), is combined with refined Equivalent Single Layer (ESL) and Zig Zag (ZZ) shell models hierarchically generated by exploiting the use of Carrera's Unified Formulation (CUF), in order to engender the Hierarchical Trigonometric Ritz Formulation (HTRF). The HTRF is then employed to carry out the free vibration analysis of doubly curved shallow and deep functionally graded material (FGM) shells. The PVD is further used in conjunction with the Gauss theorem to derive the governing differential equations and related natural boundary conditions. Donnell-Mushtari's shallow shell-type equations are given as a particular case. Doubly curved FGM shells and doubly curved sandwich shells made up of isotropic face sheets and FGM core are investigated. The proposed shell models are widely assessed by comparison with the literature results. Two benchmarks are provided and the effects of significant parameters such as stacking sequence, boundary conditions, length-to-thickness ratio, radius-to-length ratio and volume fraction index on the circular frequency parameters and modal displacements are discussed.

  8. Of goals and habits: age-related and individual differences in goal-directed decision-making.

    PubMed

    Eppinger, Ben; Walter, Maik; Heekeren, Hauke R; Li, Shu-Chen

    2013-01-01

    In this study we investigated age-related and individual differences in habitual (model-free) and goal-directed (model-based) decision-making. Specifically, we were interested in three questions. First, does age affect the balance between model-based and model-free decision mechanisms? Second, are these age-related changes due to age differences in working memory (WM) capacity? Third, can model-based behavior be affected by manipulating the distinctiveness of the reward value of choice options? To answer these questions we used a two-stage Markov decision task in combination with computational modeling to dissociate model-based and model-free decision mechanisms. To affect model-based behavior in this task we manipulated the distinctiveness of reward probabilities of choice options. The results show age-related deficits in model-based decision-making, which are particularly pronounced if unexpected reward indicates the need for a shift in decision strategy. In this situation younger adults explore the task structure, whereas older adults show perseverative behavior. Consistent with previous findings, these results indicate that older adults have deficits in the representation and updating of expected reward value. We also observed substantial individual differences in model-based behavior. In younger adults high WM capacity is associated with greater model-based behavior and this effect is further elevated when reward probabilities are more distinct. However, in older adults we found no effect of WM capacity. Moreover, age differences in model-based behavior remained statistically significant, even after controlling for WM capacity. Thus, factors other than decline in WM, such as deficits in the integration of expected reward value into strategic decisions, may contribute to the observed impairments in model-based behavior in older adults.

  9. Of goals and habits: age-related and individual differences in goal-directed decision-making

    PubMed Central

    Eppinger, Ben; Walter, Maik; Heekeren, Hauke R.; Li, Shu-Chen

    2013-01-01

    In this study we investigated age-related and individual differences in habitual (model-free) and goal-directed (model-based) decision-making. Specifically, we were interested in three questions. First, does age affect the balance between model-based and model-free decision mechanisms? Second, are these age-related changes due to age differences in working memory (WM) capacity? Third, can model-based behavior be affected by manipulating the distinctiveness of the reward value of choice options? To answer these questions we used a two-stage Markov decision task in combination with computational modeling to dissociate model-based and model-free decision mechanisms. To affect model-based behavior in this task we manipulated the distinctiveness of reward probabilities of choice options. The results show age-related deficits in model-based decision-making, which are particularly pronounced if unexpected reward indicates the need for a shift in decision strategy. In this situation younger adults explore the task structure, whereas older adults show perseverative behavior. Consistent with previous findings, these results indicate that older adults have deficits in the representation and updating of expected reward value. We also observed substantial individual differences in model-based behavior. In younger adults high WM capacity is associated with greater model-based behavior and this effect is further elevated when reward probabilities are more distinct. However, in older adults we found no effect of WM capacity. Moreover, age differences in model-based behavior remained statistically significant, even after controlling for WM capacity. Thus, factors other than decline in WM, such as deficits in the integration of expected reward value into strategic decisions, may contribute to the observed impairments in model-based behavior in older adults. PMID:24399925

  10. Generating clustered scale-free networks using Poisson based localization of edges

    NASA Astrophysics Data System (ADS)

    Türker, İlker

    2018-05-01

    We introduce a variety of network models using a Poisson-based edge localization strategy, which result in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited capacity of scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapted the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we used a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieved clustered scale-free networks in a computational fashion, diverging from the recent studies by following a simple but efficient approach.
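
    The motivation for such generators can be seen by checking a plain Barabasi-Albert graph, the baseline that the rewiring strategy above starts from: its degree distribution is heavy-tailed but its clustering is very low. A quick check with NetworkX (parameters chosen arbitrarily for illustration):

```python
# Baseline check: a plain Barabasi-Albert graph is scale-free but poorly clustered.
import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)
print("max degree:", max(dict(G.degree()).values()))
print("average clustering:", round(nx.average_clustering(G), 4))   # low, on the order of 0.01
```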

  11. The ABC (in any D) of logarithmic CFT

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; Paulos, Miguel; Vichi, Alessandro

    2017-10-01

    Logarithmic conformal field theories have a vast range of applications, from critical percolation to systems with quenched disorder. In this paper we thoroughly examine the structure of these theories based on their symmetry properties. Our analysis is model-independent and holds for any spacetime dimension. Our results include a determination of the general form of correlation functions and conformal block decompositions, clearing the path for future bootstrap applications. Several examples are discussed in detail, including logarithmic generalized free fields, holographic models, self-avoiding random walks and critical percolation.

  12. Coherence-Based Modeling of Cultural Change and Political Violence

    DTIC Science & Technology

    2010-08-31

    the classic sociologist Emile Durkheim. The grid/group concept was introduced to the risk analysis community in 1982 by a book Douglas wrote with...has been compared at times to the pioneering concepts of integration and regulation by 19th century sociologist Emile Durkheim (1997 [1897]), for... Durkheim, Emile, Suicide. (New York: Free Press, 1897. Reissue edition. 1997). Dufwenberg, Martin, and Georg Kirchsteiger, 1998. A Theory of

  13. Metabolic profiling and predicting the free radical scavenging activity of guava (Psidium guajava L.) leaves according to harvest time by 1H-nuclear magnetic resonance spectroscopy.

    PubMed

    Kim, So-Hyun; Cho, Somi K; Hyun, Sun-Hee; Park, Hae-Eun; Kim, Young-Suk; Choi, Hyung-Kyoon

    2011-01-01

    Guava leaves were classified and the free radical scavenging activity (FRSA) evaluated according to different harvest times by using the (1)H-NMR-based metabolomic technique. A principal component analysis (PCA) of (1)H-NMR data from the guava leaves provided clear clusters according to the harvesting time. A partial least squares (PLS) analysis indicated a correlation between the metabolic profile and FRSA. FRSA levels of the guava leaves harvested during May and August were high, and those leaves contained higher amounts of 3-hydroxybutyric acid, acetic acid, glutamic acid, asparagine, citric acid, malonic acid, trans-aconitic acid, ascorbic acid, maleic acid, cis-aconitic acid, epicatechin, protocatechuic acid, and xanthine than the leaves harvested during October and December. Epicatechin and protocatechuic acid among those compounds seem to have enhanced FRSA of the guava leaf samples harvested in May and August. A PLS regression model was established to predict guava leaf FRSA at different harvesting times by using a (1)H-NMR data set. The predictability of the PLS model was then tested by internal and external validation. The results of this study indicate that (1)H-NMR-based metabolomic data could usefully characterize guava leaves according to their time of harvesting.

  14. Free vibration of an embedded single-walled carbon nanotube with various boundary conditions using the RMVT-based nonlocal Timoshenko beam theory and DQ method

    NASA Astrophysics Data System (ADS)

    Wu, Chih-Ping; Lai, Wei-Wen

    2015-04-01

    The nonlocal Timoshenko beam theories (TBTs), based on the Reissner mixed variational theorem (RMVT) and the principle of virtual displacement (PVD), are derived for the free vibration analysis of a single-walled carbon nanotube (SWCNT) embedded in an elastic medium and with various boundary conditions. The strong formulations of the nonlocal TBTs are derived using Hamilton's principle, in which Eringen's nonlocal constitutive relations are used to account for the small-scale effect. The interaction between the SWCNT and its surrounding elastic medium is simulated using the Winkler and Pasternak foundation models. The frequency parameters of the embedded SWCNT are obtained using the differential quadrature (DQ) method. In the cases of the SWCNT without foundations, the results of RMVT- and PVD-based nonlocal TBTs converge rapidly, and their convergent solutions closely agree with the exact ones available in the literature. Because the highest order with regard to the derivatives of the field variables used in the RMVT-based nonlocal TBT is lower than that used in its PVD-based counterpart, the former is more efficient than the latter with regard to the execution time. The former is thus both faster and obtains more accurate solutions than the latter for the numerical analysis of the embedded SWCNT.

  15. Factors Influencing Early Feeding of Foods and Drinks Containing Free Sugars—A Birth Cohort Study

    PubMed Central

    Ha, Diep H.; Do, Loc G.; Spencer, Andrew John; Golley, Rebecca K.; Rugg-Gunn, Andrew J.; Levy, Steven M.

    2017-01-01

    Early feeding of free sugars to young children can increase the preference for sweetness and the risk of consuming a cariogenic diet high in free sugars later in life. This study aimed to investigate early life factors influencing early introduction of foods/drinks containing free sugars. Data from an ongoing population-based birth cohort study in Australia were used. Mothers of newborn children completed questionnaires at birth and subsequently at ages 3, 6, 12, and 24 months. The outcome was reported feeding (Yes/No) at age 6–9 months of common food/drink sources of free sugars (hereafter referred to as foods/drinks with free sugars). Household income quartiles, mother’s sugar-sweetened beverage (SSB) consumption, and other maternal factors were exposure variables. Analysis was conducted progressively from bivariate to multivariable log-binomial regression with robust standard error estimation to calculate prevalence ratios (PR) of being fed foods/drinks with free sugars at an early age (by 6–9 months). Models for both complete cases and with multiple imputations (MI) for missing data were generated. Of 1479 mother/child dyads, 21% of children had been fed foods/drinks with free sugars. There was a strong income gradient and a significant positive association with maternal SSB consumption. In the complete-case model, income Q1 and Q2 had PRs of 1.9 (1.2–3.1) and 1.8 (1.2–2.6) against Q4, respectively. The PR for mothers ingesting SSB every day was 1.6 (1.2–2.3). The PR for children who had been breastfed to at least three months was 0.6 (0.5–0.8). Similar findings were observed in the MI model. Household income at birth and maternal behaviours were significant determinants of early feeding of foods/drinks with free sugars. PMID:29065527

  16. Free vibration Analysis of Sandwich Plates with cutout

    NASA Astrophysics Data System (ADS)

    Mishra, N.; Basa, B.; Sarangi, S. K.

    2016-09-01

    This paper presents the free vibration analysis of sandwich plates with cutouts. Cutouts are inevitable in structural applications and the presence of these cutouts in the structures greatly influences their dynamic characteristics. A finite element model has been developed here using the ANSYS 15.0 software to study the free vibration characteristics of sandwich plates in the presence of cutouts. The Shell 281 element, an 8-noded element with six degrees of freedom suited for analyzing thin to moderately thick structures, is considered in the development of the model. The Block Lanczos method is adopted to extract the mode shapes and obtain the natural frequencies corresponding to free vibration of the plate. The effects of parametric variation on the natural frequency of the sandwich plates with cutout are studied and results are presented.

  17. Sentinel model for influenza A virus monitoring in free-grazing ducks in Thailand.

    PubMed

    Boonyapisitsopa, Supanat; Chaiyawong, Supassama; Nonthabenjawan, Nutthawan; Jairak, Waleemas; Prakairungnamthip, Duangduean; Bunpapong, Napawan; Amonsin, Alongkorn

    2016-01-01

    Influenza A virus (IAV) can cause influenza in birds and mammals. In Thailand, free-grazing ducks are known IAV reservoirs and can spread viruses through frequent movements in habitats they share with wild birds. In this study, a sentinel model for IAV monitoring was conducted over 4 months in two free-grazing duck flocks. IAV subtypes H4N6 (n=1) and H3N8 (n=5) were isolated from sentinel ducks at the ages of 13 and 15 weeks. Clinical signs of depression and ocular discharge were observed in the infected ducks. Phylogenetic analysis and genetic characterization of the isolated IAVs indicated that all Thai IAVs clustered in the Eurasian lineage and possess low pathogenic avian influenza characteristics. Serological analysis found that antibodies against IAVs could be detected in the ducks from 9 weeks of age. In summary, our results indicate that the sentinel model can be used for IAV monitoring in free-grazing duck flocks. Since free-grazing ducks are potential reservoirs and transmitters of IAVs, routine IAV surveillance in free-grazing duck flocks can be beneficial for influenza prevention and control strategies. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Development of a GC/MS method for the qualitative and quantitative analysis of mixtures of free fatty acids and metal soaps in paint samples.

    PubMed

    La Nasa, Jacopo; Modugno, Francesca; Aloisi, Matteo; Lluveras-Tenorio, Anna; Bonaduce, Ilaria

    2018-02-25

    In this paper we present a new analytical GC/MS method for the analysis of mixtures of free fatty acids and metal soaps in paint samples. This approach is based on the use of two different silylating agents: N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) and 1,1,1,3,3,3-hexamethyldisilazane (HMDS). Our experimentation demonstrated that HMDS does not silylate fatty acid carboxylates, so it can be used for the selective derivatization and GC/MS quantitative analysis of free fatty acids. On the other hand, BSTFA is able to silylate both free fatty acids and fatty acid carboxylates. The reaction conditions for the derivatization of carboxylates with BSTFA were thus optimized with a full factorial 3^2 experimental design using lead stearate and lead palmitate as model systems. The analytical method was validated following the ICH guidelines. The method allows the qualitative and quantitative analysis of fatty acid carboxylates of sodium, calcium, magnesium, aluminium, manganese, cobalt, copper, zinc, cadmium, and lead, and of lead azelate. To demonstrate the performance of the new analytical method, samples collected from two reference paint layers, from a gilded 16th century marble sculpture, and from a paint tube belonging to the atelier of Edvard Munch, used in the last period of his life (1916-1944), were characterized. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Low Dimensional Analysis of Wing Surface Morphology in Hummingbird Free Flight

    NASA Astrophysics Data System (ADS)

    Shallcross, Gregory; Ren, Yan; Liu, Geng; Dong, Haibo; Tobalske, Bret

    2015-11-01

    Surface morphing in flapping wings is a hallmark of bird flight. In the current work, the role of dynamic wing morphing in a free-flying hummingbird is studied in detail. A 3D image-based surface reconstruction method is used to obtain the kinematics and deformation of hummingbird wings from high-quality high-speed videos. The observed wing surface morphing is highly complex, and a number of modeling methods, including singular value decomposition (SVD), are used to obtain the fundamental kinematic modes with distinct motion features. Their aerodynamic roles are investigated by conducting immersed-boundary-method based flow simulations. The results show that the chord-wise deformation modes play key roles in the attachment of the leading-edge vortex, thus improving the performance of the flapping wings. This work is supported by NSF CBET-1313217 and AFOSR FA9550-12-1-0071.
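
    A minimal sketch of the SVD step described above, applied to a synthetic marker-displacement matrix (markers x frames); the data layout and interpretation of the modes are assumptions for illustration, not the authors' reconstruction pipeline.

```python
# Minimal sketch: extracting dominant deformation modes via SVD from a synthetic
# wing-surface time series. Data layout and mode labels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_markers, n_frames = 200, 120
t = np.linspace(0, 2 * np.pi, n_frames)

# Synthetic surface deflections: a flapping-like mode + a chordwise-bending-like mode + noise
phi1 = rng.standard_normal(n_markers)
phi2 = rng.standard_normal(n_markers)
X = np.outer(phi1, np.sin(t)) + 0.3 * np.outer(phi2, np.sin(2 * t + 0.5))
X += 0.02 * rng.standard_normal(X.shape)

Xc = X - X.mean(axis=1, keepdims=True)        # remove the mean shape per marker
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

energy = s**2 / np.sum(s**2)                  # fraction of variance captured per mode
print("leading-mode energy fractions:", np.round(energy[:4], 3))
# U[:, k] is the k-th spatial deformation mode; Vt[k] is its temporal coefficient
```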

  20. Prototype Conflict Alerting Logic for Free Flight

    NASA Technical Reports Server (NTRS)

    Yang, Lee C.; Kuchar, James K.

    1997-01-01

    This paper discusses the development of a prototype alerting system for a conceptual Free Flight environment. The concept assumes that datalink between aircraft is available and that conflicts are primarily resolved on the flight deck. Four alert stages are generated depending on the likelihood of a conflict. If the conflict is not resolved by the flight crews, Air Traffic Control is notified to take over separation authority. The alerting logic is based on probabilistic analysis through modeling of aircraft sensor and trajectory uncertainties. Monte Carlo simulations were used over a range of encounter situations to determine conflict probability. The four alert stages were then defined based on probability of conflict and on the number of avoidance maneuvers available to the flight crew. Preliminary results from numerical evaluations and from a piloted simulator study at NASA Ames Research Center are summarized.
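
    The probabilistic core of such an alerting logic can be illustrated with a simple Monte Carlo estimate of conflict probability; the encounter geometry, uncertainty levels, and the 5 nmi separation threshold below are illustrative assumptions, not the study's models.

```python
# Hedged sketch: Monte Carlo estimate of horizontal-conflict probability for two aircraft
# on nominal straight tracks with Gaussian position/velocity uncertainty. Geometry, noise
# levels, and the 5 nmi separation threshold are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
n, dt, horizon = 20000, 5.0, 300.0                    # samples, step (s), look-ahead (s)
t = np.arange(0.0, horizon + dt, dt) / 3600.0         # hours

p_own, v_own = np.array([0.0, 0.0]), np.array([480.0, 0.0])        # nmi, kt
p_int, v_int = np.array([40.0, 20.0]), np.array([-420.0, -240.0])

def tracks(p, v, pos_sd=0.5, vel_sd=10.0):
    """Straight-line tracks perturbed by Gaussian position and velocity errors."""
    p_s = p + pos_sd * rng.standard_normal((n, 2))
    v_s = v + vel_sd * rng.standard_normal((n, 2))
    return p_s[:, None, :] + v_s[:, None, :] * t[None, :, None]    # (samples, time, 2)

sep = np.linalg.norm(tracks(p_own, v_own) - tracks(p_int, v_int), axis=2)
p_conflict = np.mean(np.any(sep < 5.0, axis=1))       # loss of 5 nmi separation anywhere
print(f"estimated conflict probability: {p_conflict:.3f}")
```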

  1. Visualization of RNA structure models within the Integrative Genomics Viewer.

    PubMed

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  2. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies focus on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method achieves results very similar to those of the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences, and it exhibits a smooth decrease in performance, at every taxonomic level, as the sequence length is decreased. PMID:25916734
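
    A compact sketch of the k-mer/LDA idea using scikit-learn; the toy sequences, k-mer length, topic count, and downstream classifier are illustrative assumptions rather than the paper's actual pipeline.

```python
# Illustrative sketch, not the paper's pipeline: represent DNA sequences by k-mer counts,
# fit an LDA topic model, and use per-sequence topic mixtures as features for classification.
# Sequences, labels, k, and the topic count are tiny synthetic stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

seqs = ["ACGTACGTGGCCAAT", "ACGTACGAGGCCTAT", "TTTTGGGGCCCCAAA", "TTTAGGGGCCCTAAA"]
labels = ["taxonA", "taxonA", "taxonB", "taxonB"]

k = 4
vec = CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False)
X = vec.fit_transform(seqs)                       # sequence-by-k-mer count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                      # per-sequence topic mixtures

clf = LogisticRegression().fit(theta, labels)     # classify in topic space
print(clf.predict(lda.transform(vec.transform(["ACGTACGTGGCCAAT"]))))
```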

  3. Bifurcation analysis in SIR epidemic model with treatment

    NASA Astrophysics Data System (ADS)

    Balamuralitharan, S.; Radha, M.

    2018-04-01

    We investigated the bifurcation behaviour of a nonlinear SIR epidemic model with treatment. The treatment rate is assumed to be proportional to the number of infectives when this number is below a capacity limit, and to remain constant once the number of infectives reaches that limit. We analyze the transcritical bifurcation that occurs at the disease-free equilibrium point and the Hopf bifurcation that occurs at the endemic equilibrium point. Using MATLAB, we illustrate the bifurcation at the disease-free equilibrium point.
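
    One common way to write such a treatment term is a rate proportional to the number of infectives below a capacity and constant above it; the sketch below simulates that variant and evaluates R0 at the disease-free equilibrium. The parameter values and the specific incidence form are assumptions for illustration, not the paper's exact model.

```python
# A minimal sketch (one common parameterisation, not necessarily the paper's exact model):
# SIR dynamics with a treatment rate proportional to I below a capacity I0 and constant
# above it, integrated with SciPy; R0 is evaluated for the proportional-treatment regime.
import numpy as np
from scipy.integrate import solve_ivp

Lam, mu, beta, gamma, r, I0 = 2.0, 0.02, 0.004, 0.1, 0.05, 50.0   # illustrative values

def treatment(I):
    return r * I if I < I0 else r * I0

def sir_treat(t, y):
    S, I, R = y
    dS = Lam - beta * S * I - mu * S
    dI = beta * S * I - (mu + gamma) * I - treatment(I)
    dR = gamma * I + treatment(I) - mu * R
    return [dS, dI, dR]

R0 = beta * (Lam / mu) / (mu + gamma + r)      # linearisation at the disease-free equilibrium
print("R0 =", round(R0, 2))

sol = solve_ivp(sir_treat, (0, 400), [Lam / mu - 1, 1.0, 0.0], dense_output=True)
print("final infectives:", round(sol.y[1, -1], 3))   # dies out when R0 < 1 (absent backward bifurcation)
```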

  4. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil and a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  5. A study of reacting free and ducted hydrogen/air jets

    NASA Technical Reports Server (NTRS)

    Beach, H. L., Jr.

    1975-01-01

    The mixing and reaction of a supersonic jet of hydrogen in coaxial free and ducted high temperature test gases were investigated. The importance of chemical kinetics on computed results, and the utilization of free-jet theoretical approaches to compute enclosed flow fields were studied. Measured pitot pressure profiles were correlated by use of a parabolic mixing analysis employing an eddy viscosity model. All computations, including free, ducted, reacting, and nonreacting cases, use the same value of the empirical constant in the viscosity model. Equilibrium and finite rate chemistry models were utilized. The finite rate assumption allowed prediction of observed ignition delay, but the equilibrium model gave the best correlations downstream from the ignition location. Ducted calculations were made with finite rate chemistry; correlations were, in general, as good as the free-jet results until problems with the boundary conditions were encountered.

  6. Cost-effectiveness of drug-eluting stents versus bare-metal stents in patients undergoing percutaneous coronary intervention.

    PubMed

    Baschet, Louise; Bourguignon, Sandrine; Marque, Sébastien; Durand-Zaleski, Isabelle; Teiger, Emmanuel; Wilquin, Fanny; Levesque, Karine

    2016-01-01

    To determine the cost-effectiveness of drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients requiring a percutaneous coronary intervention in France, using a recent meta-analysis including second-generation DES. A cost-effectiveness analysis was performed in the French National Health Insurance setting. Effectiveness estimates were taken from a meta-analysis of 117 762 patient-years across 76 randomised trials. The main effectiveness criterion was major cardiac event-free survival. Effectiveness and costs were modelled over a 5-year horizon using a three-state Markov model. Incremental cost-effectiveness ratios and a cost-effectiveness acceptability curve were calculated for a range of thresholds for willingness to pay per year without major cardiac event gained. Deterministic and probabilistic sensitivity analyses were performed. Base case results demonstrated that DES are dominant over BMS, with an increase in event-free survival and a cost reduction of €184, primarily due to a reduction in repeat revascularisations and an absence of myocardial infarction and stent thrombosis. These results are robust to uncertainty in one-way deterministic and probabilistic sensitivity analyses. Using a cost-effectiveness threshold of €7000 per major cardiac event-free year gained, DES have a >95% probability of being cost-effective versus BMS. Following the DES price decrease, the development of new-generation DES, and recent meta-analysis results, DES can now be considered cost-effective in France regardless of selective indication, in accordance with European recommendations.
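
    The structure of a three-state Markov cost-effectiveness comparison can be sketched as follows; all transition probabilities, costs, and the annual cycle length are hypothetical placeholders, not the study's inputs, and discounting is omitted for brevity.

```python
# Hypothetical sketch of a three-state Markov cohort model (event-free / post-event / dead)
# comparing two stent strategies; every number below is an illustrative assumption.
import numpy as np

def run_markov(p_event, p_death, cost_stent, cost_event, years=5):
    """Return (total cost, event-free person-years) per patient over the horizon."""
    # States: 0 = event-free, 1 = post-event, 2 = dead (annual cycles)
    P = np.array([[1 - p_event - p_death, p_event,     p_death],
                  [0.0,                   1 - p_death, p_death],
                  [0.0,                   0.0,         1.0]])
    state = np.array([1.0, 0.0, 0.0])
    cost, efy = cost_stent, 0.0
    for _ in range(years):
        efy += state[0]                       # person-years spent event-free this cycle
        cost += state[0] * p_event * cost_event   # cost of managing new events
        state = state @ P
    return cost, efy

cost_bms, efy_bms = run_markov(p_event=0.10, p_death=0.02, cost_stent=1200, cost_event=8000)
cost_des, efy_des = run_markov(p_event=0.06, p_death=0.02, cost_stent=1900, cost_event=8000)

d_cost, d_eff = cost_des - cost_bms, efy_des - efy_bms
print("incremental cost:", round(d_cost, 1), "incremental event-free years:", round(d_eff, 3))
print("DES dominant" if d_cost < 0 and d_eff > 0 else f"ICER: {d_cost / d_eff:.0f} per event-free year")
```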

  7. Non-Euclidean stress-free configuration of arteries accounting for curl of axial strips sectioned from vessels.

    PubMed

    Takamizawa, Keiichi; Nakayama, Yasuhide

    2013-11-01

    It is well known that arteries are subject to residual stress. In earlier studies, the residual stress in the arterial ring relieved by a radial cut was considered in stress analysis. However, it has been found that axial strips sectioned from arteries also curl into arcs, showing that axial residual stresses are also relieved from the arterial walls. The combined relief of circumferential and axial residual stresses must be considered to accurately analyze stress and strain distributions under physiological loading conditions. In the present study, a mathematical model of the stress-free configuration of an artery was proposed using Riemannian geometry. Stress analysis for arterial walls under unloaded and physiologically loaded conditions was performed using exponential strain energy functions for porcine and human common carotid arteries. In the porcine artery, the circumferential stress distribution under physiological loading became more uniform than that obtained without axial residual strain, whereas the gradient of the axial stress distribution through the wall thickness increased. This behavior follows almost the same pattern observed in a recent study in which an approximate analysis accounting for circumferential and axial residual strains was performed. In contrast, in a two-layer model of the human common carotid artery based on data from other recent studies, the circumferential and axial stresses increased from the inner surface to the outer surface under physiological conditions. In both analyses, Riemannian geometry was appropriate for defining the stress-free configurations of arterial walls with both circumferential and axial residual strains.

  8. Continuous protein concentration via free-flow moving reaction boundary electrophoresis.

    PubMed

    Kong, Fanzhi; Zhang, Min; Chen, Jingjing; Fan, Liuyin; Xiao, Hua; Liu, Shaorong; Cao, Chengxi

    2017-07-28

    In this work, we developed the model and theory of free-flow moving reaction boundary electrophoresis (FFMRB) for continuous protein concentration for the first time. The theoretical results indicated that (i) the moving reaction boundary (MRB) can be quantitatively designed in a free-flow electrophoresis (FFE) system; (ii) charge-to-mass ratio (Z/M) analysis can provide guidance for protein concentration optimization; and (iii) the maximum processing capacity can be predicted. To demonstrate the model and theory, three model proteins, hemoglobin (Hb), cytochrome C (Cyt C) and C-phycocyanin (C-PC), were chosen for the experiments. The experimental results verified that (i) stable MRBs with different velocities could be established in the FFE apparatus with a weak acid/weak base neutralization reaction system; (ii) the proteins Hb, Cyt C and C-PC were well concentrated with FFMRB; and (iii) the maximum processing capacity and recovery ratio of Cyt C enrichment were 126 mL/h and 95.5%, respectively, and a maximum enrichment factor of 12.6 was achieved for Hb. All of the experiments demonstrated the protein concentration model and theory. In contrast to other methods, the continuous processing ability enables FFMRB to efficiently enrich dilute proteins or peptides in large-volume solutions. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Temporal unfolding of declining episodic memory on the Free and Cued Selective Reminding Test in the predementia phase of Alzheimer's disease: Implications for clinical trials.

    PubMed

    Grober, Ellen; Veroff, Amy E; Lipton, Richard B

    2018-01-01

    Free and Cued Selective Reminding Test (FCSRT) performance identifies patients with preclinical disease at elevated risk for developing Alzheimer's dementia, predicting diagnosis better than other memory tests. Based on literature mapping FCSRT performance to clinical outcomes and biological markers, and on longitudinal preclinical data from the Baltimore Longitudinal Study of Aging, we developed the Stages of Objective Memory Impairment (SOMI) model. Five sequential stages of episodic memory decline are defined by Free Recall (FR) and Total Recall (TR) score ranges and years prior to dementia diagnosis. We sought to replicate the SOMI model using longitudinal assessments of 142 Einstein Aging Study participants who developed AD over 10 years. Time to diagnosis was at least seven years if FR was intact, at least four years if TR was intact, and two years if TR was impaired, consistent with SOMI model predictions. The SOMI identified incipient dementia with excellent sensitivity and specificity. The SOMI model provides an efficient approach for clinical trial cognitive screening in advance of more costly biomarker studies and ultimately in clinical practice, and provides a vocabulary for understanding AD biomarker patterns and for re-analysis of existing clinical trial data.

  10. Comparison of NASTRAN analysis with ground vibration results of UH-60A NASA/AEFA test configuration

    NASA Technical Reports Server (NTRS)

    Idosor, Florentino; Seible, Frieder

    1990-01-01

    Preceding program flight tests, a ground vibration test and modal test analysis of a UH-60A Black Hawk helicopter were conducted by Sikorsky Aircraft to complement the UH-60A test plan and the NASA/Army Modern Technology Rotor Airloads Program. The 'NASA/AEFA' shake test configuration was tested for modal frequencies and shapes and compared with its NASTRAN finite element model counterpart to give correlative results. Based upon previous findings, significant differences in modal data existed and were attributed to assumptions regarding the influence of secondary structure contributions in the preliminary NASTRAN modeling. An analysis of an updated finite element model including several secondary structural additions has confirmed that the inclusion of specific secondary components produces a significant effect on modal frequency and free-response shapes and improves correlations with shake test data at lower frequencies.

  11. Numerical and experimental investigation of the 3D free surface flow in a model Pelton turbine

    NASA Astrophysics Data System (ADS)

    Fiereder, R.; Riemann, S.; Schilling, R.

    2010-08-01

    This investigation focuses on the numerical and experimental analysis of the 3D free surface flow in a Pelton turbine. In particular, two typical flow conditions occurring in a full scale Pelton turbine - a configuration with a straight inlet as well as a configuration with a 90 degree elbow upstream of the nozzle - are considered. Thereby, the effect of secondary flow due to the 90 degree bending of the upstream pipe on the characteristics of the jet is explored. The hybrid flow field consists of pure liquid flow within the conduit and free surface two component flow of the liquid jet emerging out of the nozzle into air. The numerical results are validated against experimental investigations performed in the laboratory of the Institute of Fluid Mechanics (FLM). For the numerical simulation of the flow the in-house unstructured fully parallelized finite volume solver solver3D is utilized. An advanced interface capturing model based on the classic Volume of Fluid method is applied. In order to ensure sharp interface resolution an additional convection term is added to the transport equation of the volume fraction. A collocated variable arrangement is used and the set of non-linear equations, containing fluid conservation equations and model equations for turbulence and volume fraction, are solved in a segregated manner. For pressure-velocity coupling the SIMPLE and PISO algorithms are implemented. Detailed analyses of the observed flow patterns in the jet and of the jet geometry are presented.

  12. A Process Approach to Community-Based Education: The People's Free University of Saskatchewan

    ERIC Educational Resources Information Center

    Woodhouse, Howard

    2005-01-01

    On the basis of insights provided by Whitehead and John Cobb, I show how the People's Free University of Saskatchewan (PFU) is a working model of free, open, community-based education that embodies several characteristics of Whitehead's philosophy of education. Formed in opposition to the growing commercialization at the original "people's…

  13. Using Bayes' theorem for free energy calculations

    NASA Astrophysics Data System (ADS)

    Rogers, David M.

    Statistical mechanics is fundamentally based on calculating the probabilities of molecular-scale events. Although Bayes' theorem has generally been recognized as providing key guiding principles for setup and analysis of statistical experiments [83], classical frequentist models still predominate in the world of computational experimentation. As a starting point for widespread application of Bayesian methods in statistical mechanics, we investigate the central quantity of free energies from this perspective. This dissertation thus reviews the basics of Bayes' view of probability theory, and the maximum entropy formulation of statistical mechanics before providing examples of its application to several advanced research areas. We first apply Bayes' theorem to a multinomial counting problem in order to determine inner shell and hard sphere solvation free energy components of Quasi-Chemical Theory [140]. We proceed to consider the general problem of free energy calculations from samples of interaction energy distributions. From there, we turn to spline-based estimation of the potential of mean force [142], and empirical modeling of observed dynamics using integrator matching. The results of this research are expected to advance the state of the art in coarse-graining methods, as they allow a systematic connection from high-resolution (atomic) to low-resolution (coarse) structure and dynamics. In total, our work on these problems constitutes a critical starting point for further application of Bayes' theorem in all areas of statistical mechanics. It is hoped that the understanding so gained will allow for improvements in comparisons between theory and experiment.
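
    As a point of reference for the "free energy from sampled interaction energy distributions" problem mentioned above, the sketch below applies the classical Zwanzig exponential-averaging estimator to synthetic Gaussian perturbation energies; it does not reproduce the Bayesian estimators developed in the dissertation.

```python
# A minimal, non-Bayesian sketch: the Zwanzig (exponential averaging) estimator for a free
# energy difference from samples of perturbation energies, in reduced units. Illustrates the
# setting only; the dissertation's Bayesian machinery is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                                      # 1 / (kB * T), reduced units

# Pretend dU = U_target - U_reference, sampled in the reference ensemble, is Gaussian
mu, sigma = 2.0, 1.0
dU = rng.normal(mu, sigma, size=50000)

# Zwanzig: Delta F = -(1/beta) ln < exp(-beta dU) >_reference (log-sum-exp for stability)
dF = -(np.logaddexp.reduce(-beta * dU) - np.log(len(dU))) / beta
print("estimated dF:", round(dF, 3))
print("analytic (Gaussian) dF:", round(mu - 0.5 * beta * sigma**2, 3))   # mu - beta*sigma^2/2
```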

  14. Large scale affinity calculations of cyclodextrin host-guest complexes: Understanding the role of reorganization in the molecular recognition process

    PubMed Central

    Wickstrom, Lauren; He, Peng; Gallicchio, Emilio; Levy, Ronald M.

    2013-01-01

    Host-guest inclusion complexes are useful models for understanding the structural and energetic aspects of molecular recognition. Due to their small size relative to much larger protein-ligand complexes, converged results can be obtained rapidly for these systems thus offering the opportunity to more reliably study fundamental aspects of the thermodynamics of binding. In this work, we have performed a large scale binding affinity survey of 57 β-cyclodextrin (CD) host-guest systems using the binding energy distribution analysis method (BEDAM) with implicit solvation (OPLS-AA/AGBNP2). Converged estimates of the standard binding free energies are obtained for these systems by employing techniques such as parallel Hamiltonian replica exchange molecular dynamics, conformational reservoirs and multistate free energy estimators. Good agreement with experimental measurements is obtained in terms of both numerical accuracy and affinity rankings. Overall, average effective binding energies reproduce affinity rank ordering better than the calculated binding affinities, even though calculated binding free energies, which account for effects such as conformational strain and entropy loss upon binding, provide lower root mean square errors when compared to measurements. Interestingly, we find that binding free energies are superior rank order predictors for a large subset containing the most flexible guests. The results indicate that, while challenging, accurate modeling of reorganization effects can lead to ligand design models of superior predictive power for rank ordering relative to models based only on ligand-receptor interaction energies. PMID:25147485

  15. Mastitomics, the integrated omics of bovine milk in an experimental model of Streptococcus uberis mastitis: 2. Label-free relative quantitative proteomics.

    PubMed

    Mudaliar, Manikhandan; Tassi, Riccardo; Thomas, Funmilola C; McNeilly, Tom N; Weidt, Stefan K; McLaughlin, Mark; Wilson, David; Burchmore, Richard; Herzyk, Pawel; Eckersall, P David; Zadoks, Ruth N

    2016-08-16

    Mastitis, inflammation of the mammary gland, is the most common and costly disease of dairy cattle in the western world. It is primarily caused by bacteria, with Streptococcus uberis as one of the most prevalent causative agents. To characterize the proteome during Streptococcus uberis mastitis, an experimentally induced model of intramammary infection was used. Milk whey samples obtained from 6 cows at 6 time points were processed using label-free relative quantitative proteomics. This proteomic analysis complements clinical, bacteriological and immunological studies as well as peptidomic and metabolomic analysis of the same challenge model. A total of 2552 non-redundant bovine peptides were identified, and from these, 570 bovine proteins were quantified. Hierarchical cluster analysis and principal component analysis showed clear clustering of results by stage of infection, with similarities between pre-infection and resolution stages (0 and 312 h post challenge), early infection stages (36 and 42 h post challenge) and late infection stages (57 and 81 h post challenge). Ingenuity pathway analysis identified upregulation of acute phase protein pathways over the course of infection, with dominance of different acute phase proteins at different time points based on differential expression analysis. Antimicrobial peptides, notably cathelicidins and peptidoglycan recognition protein, were upregulated at all time points post challenge and peaked at 57 h, which coincided with a 10 000-fold decrease in average bacterial counts. The integration of clinical, bacteriological, immunological and quantitative proteomic and other omics data provides a more detailed systems level view of the host response to mastitis than has been achieved previously.

  16. Optical modeling based on mean free path calculations for quantum dot phosphors applied to optoelectronic devices.

    PubMed

    Shin, Min-Ho; Kim, Hyo-Jun; Kim, Young-Joo

    2017-02-20

    We proposed an optical simulation model for the quantum dot (QD) nanophosphor based on the mean free path concept to understand precisely the optical performance of optoelectronic devices. A measurement methodology was also developed to obtain the desired optical characteristics, such as the mean free path and absorption spectra, for QD nanophosphors which are to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with the experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinate, CCT, and CRI. The proposed simulation model and measurement methodology can be readily applied to the design of many optoelectronic devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.

  17. BP artificial neural network based wave front correction for sensor-less free space optics communication

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Zhao, Xiaohui

    2017-02-01

    The sensor-less adaptive optics (AO) approach is one of the most promising methods to compensate for strong wave front disturbance in free space optics communication (FSO). In this study, a back propagation (BP) artificial neural network is applied to the sensor-less AO system to design a distortion correction scheme. This method needs only one or a few online measurements to correct the wave front distortion, compared with other model-based approaches, by which the real-time capacity of the system is enhanced and the Strehl Ratio (SR) is largely improved. Comparisons in numerical simulation with other model-based and model-free correction methods proposed in Refs. [6,8,9,10] are given to show the validity and advantage of the proposed method.

  18. Disease-Free Survival after Hepatic Resection in Hepatocellular Carcinoma Patients: A Prediction Approach Using Artificial Neural Network

    PubMed Central

    Ho, Wen-Hsien; Lee, King-Teh; Chen, Hong-Yaw; Ho, Te-Wei; Chiu, Herng-Chia

    2012-01-01

    Background A database for hepatocellular carcinoma (HCC) patients who had received hepatic resection was used to develop prediction models for 1-, 3- and 5-year disease-free survival based on a set of clinical parameters for this patient group. Methods The three prediction models included an artificial neural network (ANN) model, a logistic regression (LR) model, and a decision tree (DT) model. Data for 427, 354 and 297 HCC patients with histories of 1-, 3- and 5-year disease-free survival after hepatic resection, respectively, were extracted from the HCC patient database. From each of the three groups, 80% of the cases (342, 283 and 238 cases of 1-, 3- and 5-year disease-free survival, respectively) were selected to provide training data for the prediction models. The remaining 20% of cases in each group (85, 71 and 59 cases in the three respective groups) were assigned to validation groups for performance comparisons of the three models. Area under receiver operating characteristics curve (AUROC) was used as the performance index for evaluating the three models. Conclusions The ANN model outperformed the LR and DT models in terms of prediction accuracy. This study demonstrated the feasibility of using ANNs in medical decision support systems for predicting disease-free survival based on clinical databases in HCC patients who have received hepatic resection. PMID:22235270
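
    The model-comparison design (80/20 split, AUROC as the performance index) can be mirrored on synthetic data as follows; the feature set and model hyperparameters are illustrative assumptions, not the HCC cohort or the study's tuned models.

```python
# Illustrative comparison on synthetic data (not the HCC cohort): an MLP "ANN", logistic
# regression, and a decision tree trained on an 80/20 split and compared by AUROC.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=427, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "LR":  LogisticRegression(max_iter=1000),
    "DT":  DecisionTreeClassifier(max_depth=4, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```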

  19. Ventral striatal dopamine reflects behavioral and neural signatures of model-based control during sequential decision making.

    PubMed

    Deserno, Lorenz; Huys, Quentin J M; Boehme, Rebecca; Buchert, Ralph; Heinze, Hans-Jochen; Grace, Anthony A; Dolan, Raymond J; Heinz, Andreas; Schlagenhauf, Florian

    2015-02-03

    Dual system theories suggest that behavioral control is parsed between a deliberative "model-based" and a more reflexive "model-free" system. A balance of control exerted by these systems is thought to be related to dopamine neurotransmission. However, in the absence of direct measures of human dopamine, it remains unknown whether this reflects a quantitative relation with dopamine either in the striatum or other brain areas. Using a sequential decision task performed during functional magnetic resonance imaging, combined with striatal measures of dopamine using [(18)F]DOPA positron emission tomography, we show that higher presynaptic ventral striatal dopamine levels were associated with a behavioral bias toward more model-based control. Higher presynaptic dopamine in ventral striatum was associated with greater coding of model-based signatures in lateral prefrontal cortex and diminished coding of model-free prediction errors in ventral striatum. Thus, interindividual variability in ventral striatal presynaptic dopamine reflects a balance in the behavioral expression and the neural signatures of model-free and model-based control. Our data provide a novel perspective on how alterations in presynaptic dopamine levels might be accompanied by a disruption of behavioral control as observed in aging or neuropsychiatric diseases such as schizophrenia and addiction.

  20. Generalizing Gillespie’s Direct Method to Enable Network-Free Simulations

    DOE PAGES

    Suderman, Ryan T.; Mitra, Eshan David; Lin, Yen Ting; ...

    2018-03-28

    Gillespie’s direct method for stochastic simulation of chemical kinetics is a staple of computational systems biology research. However, the algorithm requires explicit enumeration of all reactions and all chemical species that may arise in the system. In many cases, this is not feasible due to the combinatorial explosion of reactions and species in biological networks. Rule-based modeling frameworks provide a way to exactly represent networks containing such combinatorial complexity, and generalizations of Gillespie’s direct method have been developed as simulation engines for rule-based modeling languages. Here, we provide both a high-level description of the algorithms underlying the simulation engines, termed network-free simulation algorithms, and how they have been applied in systems biology research. We also define a generic rule-based modeling framework and describe a number of technical details required for adapting Gillespie’s direct method for network-free simulation. Lastly, we briefly discuss potential avenues for advancing network-free simulation and the role they continue to play in modeling dynamical systems in biology.
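
    For reference, below is a minimal sketch of the species-enumerated direct method that network-free simulation generalizes, applied to a toy reversible dimerisation; the reaction network, rate constants, and copy numbers are arbitrary illustrative choices.

```python
# Minimal sketch of Gillespie's direct method for a toy reversible dimerisation (A + A <-> B),
# i.e. the species-enumerated baseline that network-free methods generalise. All numbers are
# arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
k_f, k_r = 0.002, 0.5                 # forward / reverse stochastic rate constants
x = np.array([1000, 0])               # copy numbers of [A, B]
t, t_end = 0.0, 10.0

while t < t_end:
    a = np.array([k_f * x[0] * (x[0] - 1) / 2.0,   # propensity of A + A -> B
                  k_r * x[1]])                     # propensity of B -> A + A
    a0 = a.sum()
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)                 # time to the next reaction
    if rng.random() < a[0] / a0:                   # choose which reaction fires
        x += np.array([-2, 1])
    else:
        x += np.array([2, -1])

print("t =", round(t, 3), "A =", x[0], "B =", x[1])
```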

  1. Generalizing Gillespie’s Direct Method to Enable Network-Free Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suderman, Ryan T.; Mitra, Eshan David; Lin, Yen Ting

    Gillespie’s direct method for stochastic simulation of chemical kinetics is a staple of computational systems biology research. However, the algorithm requires explicit enumeration of all reactions and all chemical species that may arise in the system. In many cases, this is not feasible due to the combinatorial explosion of reactions and species in biological networks. Rule-based modeling frameworks provide a way to exactly represent networks containing such combinatorial complexity, and generalizations of Gillespie’s direct method have been developed as simulation engines for rule-based modeling languages. Here, we provide both a high-level description of the algorithms underlying the simulation engines, termed network-free simulation algorithms, and how they have been applied in systems biology research. We also define a generic rule-based modeling framework and describe a number of technical details required for adapting Gillespie’s direct method for network-free simulation. Lastly, we briefly discuss potential avenues for advancing network-free simulation and the role they continue to play in modeling dynamical systems in biology.

  2. Mathematical analysis of tuberculosis transmission model with delay

    NASA Astrophysics Data System (ADS)

    Lapaan, R. D.; Collera, J. A.; Addawe, J. M.

    2016-11-01

    In this paper, a delayed tuberculosis infection model is formulated and investigated. We show the existence of the disease-free equilibrium and endemic equilibrium points. We use the LaSalle-Lyapunov Invariance Principle to show that if the reproductive number R0 < 1, the disease-free equilibrium of the model is globally asymptotically stable. Numerical simulations are then performed to illustrate the existence of the disease-free equilibrium and the endemic equilibrium point for a given value of R0. Thus, when R0 < 1, the disease dies out in the population.

  3. Predicting the Activity Coefficients of Free-Solvent for Concentrated Globular Protein Solutions Using Independently Determined Physical Parameters

    PubMed Central

    McBride, Devin W.; Rodgers, Victor G. J.

    2013-01-01

    The activity coefficient is largely considered an empirical parameter that was traditionally introduced to correct the non-ideality observed in thermodynamic systems such as osmotic pressure. Here, the activity coefficient of free-solvent is related to physically realistic parameters and a mathematical expression is developed to directly predict the activity coefficients of free-solvent, for aqueous protein solutions up to near-saturation concentrations. The model is based on the free-solvent model, which has previously been shown to provide excellent prediction of the osmotic pressure of concentrated and crowded globular proteins in aqueous solutions up to near-saturation concentrations. Thus, this model uses only the independently determined, physically realizable quantities: mole fraction, solvent accessible surface area, and ion binding, in its prediction. Predictions are presented for the activity coefficients of free-solvent for near-saturated protein solutions containing either bovine serum albumin or hemoglobin. As a verification step, the predictability of the model for the activity coefficient of sucrose solutions was evaluated. The predicted activity coefficients of free-solvent are compared to the calculated activity coefficients of free-solvent based on osmotic pressure data. It is observed that the predicted activity coefficients are increasingly dependent on the solute-solvent parameters as the protein concentration increases to near-saturation concentrations. PMID:24324733

  4. Calculation of free turbulent mixing by interaction approach.

    NASA Technical Reports Server (NTRS)

    Morel, T.; Torda, T. P.

    1973-01-01

    The applicability of Bradshaw's interaction hypothesis to two-dimensional free shear flows was investigated. According to it, flows with velocity extrema may be considered to consist of several interacting layers. The hypothesis leads to a new expression for the shear stress which removes the usual restriction that shear stress vanishes at the velocity extremum. The approach is based on kinetic energy and the length scale equations. The compressible flow equations are simplified by restriction to low Mach numbers, and the range of their applicability is discussed. The empirical functions of the turbulence model are found here to be correlated with the spreading rate of the shear layer. The analysis demonstrates that the interaction hypothesis is a workable concept.

  5. Removal of free fatty acid in Azadirachta indica (Neem) seed oil using phosphoric acid modified mordenite for biodiesel production.

    PubMed

    SathyaSelvabala, Vasanthakumar; Varathachary, Thiruvengadaravi Kadathur; Selvaraj, Dinesh Kirupha; Ponnusamy, Vijayalakshmi; Subramanian, Sivanesan

    2010-08-01

    In this study, the free fatty acids present in Azadirachta indica (Neem) oil were esterified with our synthesized phosphoric acid modified catalyst. During the esterification, the acid value was reduced from 24.4 to 1.8 mg KOH/g oil. The synthesized catalyst was characterized by NH3-TPD, XRD, SEM, FTIR and TGA analysis. During phosphoric acid modification, the hydrophobic character and the weak acid sites of the mordenite were increased, which led to better esterification when compared to H-mordenite. A kinetic study demonstrated that the esterification reaction followed pseudo-first order kinetics. Thermodynamic studies were also carried out based on the Arrhenius model. (c) 2010 Elsevier Ltd. All rights reserved.
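
    The pseudo-first-order and Arrhenius treatment can be sketched as follows on synthetic conversion data; the temperatures, rate constants, and sampling times are invented for illustration and are not the paper's measurements.

```python
# Illustrative sketch (synthetic data, not the paper's measurements): estimate pseudo-first-order
# rate constants from conversion-vs-time data at several temperatures, then fit an Arrhenius
# line ln k = ln A - Ea/(R T) to recover an apparent activation energy.
import numpy as np

R = 8.314                                        # J/(mol K)
t = np.array([0.0, 15.0, 30.0, 60.0, 90.0])      # minutes

# Synthetic conversions X(t) = 1 - exp(-k t) at three temperatures (k in 1/min)
true_k = {323.0: 0.010, 333.0: 0.018, 343.0: 0.031}
rate_constants = {}
for T, k in true_k.items():
    X = 1.0 - np.exp(-k * t)
    # Pseudo-first-order: -ln(1 - X) is linear in t with slope k (fit through the origin)
    y = -np.log(1.0 - X)
    rate_constants[T] = np.sum(y * t) / np.sum(t * t)

T_arr = np.array(list(rate_constants))
k_arr = np.array([rate_constants[T] for T in T_arr])
slope, intercept = np.polyfit(1.0 / T_arr, np.log(k_arr), 1)   # ln k vs 1/T
print("apparent Ea (kJ/mol):", round(-slope * R / 1000.0, 1))
```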

  6. Microscale heat transfer in fusion welding of glass by ultra-short pulse laser using dual phase lag effects

    NASA Astrophysics Data System (ADS)

    Bag, Swarup

    2018-04-01

    Heat transfer at the microscale has a very different physical basis from that at the macroscale: energy transport depends on collisions among energy carriers (electrons and phonons), the mean free path of the lattice (~10-100 nm), and the mean free time between energy-carrier collisions. Heat transport is described in terms of the different types of energy carriers, averaging over the grain scale in space and over carrier collisions in time. The physical basis of heat transfer is developed from phonon-electron interaction for metals and alloys and from phonon scattering for insulators and dielectrics. Non-Fourier effects in heating become increasingly predominant as the duration of the heating pulse becomes so small that it is comparable with the mean free time of the energy carriers. The mean free times for electron-phonon and phonon-phonon interactions are of the order of 1 and 10 picoseconds, respectively. In the present study, the mathematical formulation of the problem is defined considering dual phase lag, i.e. two relaxation times in heat transport, assuming volumetric heat generation for ultra-short pulse laser interaction with dielectrics. The relaxation times are estimated based on a phonon scattering model. A three-dimensional finite element model is developed to find the transient temperature distribution using a quadruple ellipsoidal heat source model. The analysis is performed for single and multiple pulses to generate the time-temperature history at different locations and at different instants of time. The simulated results are validated with experiments reported in the independent literature. The effect of the two relaxation times and the pulse width on the temperature profile is studied through numerical simulation.

  7. Atomistic simulation of solid-liquid coexistence for molecular systems: application to triazole and benzene.

    PubMed

    Eike, David M; Maginn, Edward J

    2006-04-28

    A method recently developed to rigorously determine solid-liquid equilibrium using a free-energy-based analysis has been extended to analyze multiatom molecular systems. This method is based on using a pseudosupercritical transformation path to reversibly transform between solid and liquid phases. Integration along this path yields the free energy difference at a single state point, which can then be used to determine the free energy difference as a function of temperature and therefore locate the coexistence temperature at a fixed pressure. The primary extension reported here is the introduction of an external potential field capable of inducing center of mass order along with secondary orientational order for molecules. The method is used to calculate the melting point of 1-H-1,2,4-triazole and benzene. Despite the fact that the triazole model gives accurate bulk densities for the liquid and crystal phases, it is found to do a poor job of reproducing the experimental crystal structure and heat of fusion. Consequently, it yields a melting point that is 100 K lower than the experimental value. On the other hand, the benzene model has been parametrized extensively to match a wide range of properties and yields a melting point that is only 20 K lower than the experimental value. Previous work in which a simple "direct heating" method was used actually found that the melting point of the benzene model was 50 K higher than the experimental value. This demonstrates the importance of using proper free energy methods to compute phase behavior. It also shows that the melting point is a very sensitive measure of force field quality that should be considered in parametrization efforts. The method described here provides a relatively simple approach for computing melting points of molecular systems.

  8. An extensive assessment of network alignment algorithms for comparison of brain connectomes.

    PubMed

    Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario

    2017-06-06

    Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neuroscience. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - this process is referred to as parcellation. Atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and to align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling the brain using graph theory and the subsequent comparison of the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures. We also evaluated the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes. The analysis shows that MAGNA++ is the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation brain connectomes. The methodology was tested on several brain datasets.

  9. Fracture mechanics analysis for various fiber/matrix interface loadings

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1991-01-01

    Fiber/matrix (F/M) cracking was analyzed to provide better understanding and guidance in developing F/M interface fracture toughness tests. Two configurations, corresponding to F/M cracking at a broken fiber and at the free edge, were investigated. The effects of mechanical loading, thermal cooldown, and friction were investigated. Each configuration was analyzed for two loadings: longitudinal and normal to the fiber. A nonlinear finite element analysis was performed to model friction and slip at the F/M interface. A new procedure for fitting a square-root singularity to calculated stresses was developed to determine stress intensity factors (K sub I and K sub II) for a bimaterial interface crack. For the case of F/M cracking at a broken fiber with longitudinal loading, crack tip conditions were strongly influenced by interface friction. As a result, an F/M interface toughness test based on this case was not recommended because nonlinear data analysis methods would be required. For the free edge crack configuration, both mechanical and thermal loading caused crack opening, thereby avoiding frictional effects. A F/M interface toughness test based on this configuration would provide data for K(sub I)/K(sub II) ratios of about 0.7 and 1.6 for fiber and radial normal loading, respectively. However, thermal effects must be accounted for in the data analysis.
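
    The singularity-fitting idea can be illustrated with a simplified least-squares fit of near-tip stresses; note that a real bimaterial interface crack exhibits an oscillatory singularity, so the homogeneous K/sqrt(2*pi*r) form and all numbers below are simplifying assumptions.

```python
# Simplified sketch: least-squares fit of a square-root singularity to near-tip stresses to
# recover a stress intensity factor. A real bimaterial interface crack has an oscillatory
# singularity, so this homogeneous form and the numbers below are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(3)
K_true, sigma_0 = 5.0, 12.0                      # MPa*sqrt(mm), MPa (arbitrary values)

r = np.linspace(0.05, 1.0, 40)                   # distance ahead of the crack tip (mm)
sigma = K_true / np.sqrt(2.0 * np.pi * r) + sigma_0
sigma += 0.05 * rng.standard_normal(r.size)      # "numerical noise" from the FE solution

# Fit sigma(r) = K / sqrt(2*pi*r) + c with linear least squares
A = np.column_stack([1.0 / np.sqrt(2.0 * np.pi * r), np.ones_like(r)])
(K_fit, c_fit), *_ = np.linalg.lstsq(A, sigma, rcond=None)
print("fitted K:", round(K_fit, 3), " constant term:", round(c_fit, 3))
```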

  10. Fracture mechanics analysis for various fiber/matrix interface loadings

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

    Fiber/matrix (F/M) cracking was analyzed to provide better understanding and guidance in developing F/M interface fracture toughness tests. Two configurations, corresponding to F/M cracking at a broken fiber and at the free edge, were investigated. The effects of mechanical loading, thermal cooldown, and friction were investigated. Each configuration was analyzed for two loadings: longitudinal and normal to the fiber. A nonlinear finite element analysis was performed to model friction and slip at the F/M interface. A new procedure for fitting a square-root singularity to calculated stresses was developed to determine stress intensity factors (K sub I and K sub II) for a bimaterial interface crack. For the case of F/M cracking at a broken fiber with longitudinal loading, crack tip conditions were strongly influenced by interface friction. As a result, an F/M interface toughness test based on this case was not recommended because nonlinear data analysis methods would be required. For the free edge crack configuration, both mechanical and thermal loading caused crack opening, thereby avoiding frictional effects. An F/M interface toughness test based on this configuration would provide data for K(sub I)/K(sub II) ratios of about 0.7 and 1.6 for fiber and radial normal loading, respectively. However, thermal effects must be accounted for in the data analysis.

  12. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    PubMed Central

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820
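
    A tiny Boolean network illustrates the parameter-free flavour of logic-based models discussed above; the three-node regulatory logic below is entirely hypothetical.

```python
# A tiny, hypothetical Boolean (logic-based) network: three nodes with synchronous updates,
# illustrating how qualitative dynamics and attractors can be explored without kinetic parameters.
from itertools import product

def step(state):
    """Synchronous update of a toy 3-node regulatory logic (A, B, C)."""
    A, B, C = state
    return (
        A or B,          # A is self-sustaining and activated by B
        A and not C,     # B requires A and is inhibited by C
        B,               # C follows B
    )

# Enumerate the full state space to report fixed points (here only the all-off state)
for s in product([False, True], repeat=3):
    if step(s) == s:
        print("fixed point:", s)

# Trajectory from one start: the toy logic also has a 4-state limit cycle
state = (True, False, False)
for _ in range(6):
    print(state)
    state = step(state)
```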

  13. Analysis of plasma-mediated ablation in aqueous tissue

    NASA Astrophysics Data System (ADS)

    Jiao, Jian; Guo, Zhixiong

    2012-06-01

    Plasma-mediated ablation using ultrafast lasers in transparent media such as aqueous tissues is studied. It is postulated that a critical seed free-electron density, produced by multiphoton ionization, is required to trigger the avalanche ionization that causes ablation, and that during the avalanche ionization process the contribution of laser-induced photoionization is negligible. Based on this assumption, the ablation process can be treated as two separate processes - multiphoton ionization and avalanche ionization - occurring at different time stages, so that an analytical solution for the evolution of plasma formation is obtained for the first time. The analysis is applied to plasma-mediated ablation in corneal epithelium and validated by comparison with experimental data available in the literature. The critical seed free-electron density and the time to initiate the avalanche ionization for sub-picosecond laser pulses are analyzed. It is found that the critical seed free-electron density decreases as the pulse width tp increases, obeying a tp^(-5.65) rule. The model is further extended to the estimation of crater size in the ablation of tissue-mimicking polydimethylsiloxane (PDMS). The results match well with the available experimental measurements.

  14. Ares I-X In-Flight Modal Identification

    NASA Technical Reports Server (NTRS)

    Bartkowicz, Theodore J.; James, George H., III

    2011-01-01

    Operational modal analysis is a procedure that allows the extraction of modal parameters of a structure in its operating environment. It is based on the idealized premise that the input to the structure is white noise. In some cases, when free decay responses are corrupted by unmeasured random disturbances, the response data can be processed into cross-correlation functions that approximate free decay responses. Because these cross-correlation functions have the same characteristics as impulse response functions of the original system, modal parameters can be computed from them by time-domain identification methods such as the Eigensystem Realization Algorithm (ERA). Operational modal analysis is performed on Ares I-X in-flight data. Since the dynamic system is not stationary due to propellant mass loss, modal identification is only possible by analyzing the system as a series of linearized models over a sliding window of short time intervals. A time-domain zooming technique was also employed to enhance the modal parameter extraction. Results of this study demonstrate that free-decay time domain modal identification methods can be successfully employed for in-flight launch vehicle modal extraction.
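
    A compact ERA sketch on a synthetic two-mode free-decay signal is given below; in operational modal analysis the cross-correlation functions of measured responses would take the place of the synthetic decay used here, and the Hankel dimensions and model order are illustrative choices.

```python
# Simplified Eigensystem Realization Algorithm (ERA) sketch on a synthetic free-decay signal
# made of two damped sinusoids. Hankel sizes, sampling, and model order are illustrative.
import numpy as np

dt, n_pts = 0.01, 400
t = np.arange(n_pts) * dt
modes = [(5.0, 0.02), (12.0, 0.04)]                 # (frequency Hz, damping ratio)
y = sum(np.exp(-z * 2 * np.pi * f * t) * np.cos(2 * np.pi * f * np.sqrt(1 - z**2) * t)
        for f, z in modes)

# Hankel matrices built from the free-decay samples
rows, cols = 60, 200
H0 = np.array([[y[i + j] for j in range(cols)] for i in range(rows)])
H1 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows)])

order = 4                                           # two states per mode
U, s, Vt = np.linalg.svd(H0, full_matrices=False)
U, s, Vt = U[:, :order], s[:order], Vt[:order]
S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt       # discrete-time state matrix estimate

poles = np.log(np.linalg.eigvals(A)) / dt           # continuous-time poles
freqs = np.abs(poles) / (2 * np.pi)                 # natural frequencies (Hz)
damping = -poles.real / np.abs(poles)               # damping ratios
print(np.round(np.sort(freqs), 3))
print(np.round(np.sort(damping), 3))
```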

  15. Model-free data analysis for source separation based on Non-Negative Matrix Factorization and k-means clustering (NMFk)

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Alexandrov, B.

    2014-12-01

    The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify barometric pressure and water-supply pumping as the sources and estimate their impacts. We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
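
    A simplified NMFk-style workflow can be sketched with scikit-learn: run NMF with random restarts for several candidate source counts, cluster the extracted source signatures with k-means, and favour the count whose clusters are most cohesive. The synthetic signals, mixing matrix, and the silhouette criterion below are illustrative assumptions, not the authors' implementation.

```python
# Simplified NMFk-style sketch (not the authors' implementation): mix two synthetic source
# signals at several "observation points", then run NMF over candidate source counts with
# random restarts and use k-means silhouette scores on the extracted sources to pick the count.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 300)
sources = np.vstack([np.abs(np.sin(0.8 * t)),            # e.g. a barometric-like signal
                     np.exp(-((t - 6.0) ** 2))])          # e.g. a pumping-like pulse
mixing = rng.uniform(0.2, 1.0, size=(8, 2))               # 8 observation points
V = mixing @ sources + 0.01 * rng.random((8, t.size))     # observed mixed transients

for k in (2, 3, 4):
    extracted = []
    for seed in range(10):                                 # random NMF restarts
        model = NMF(n_components=k, init="random", random_state=seed, max_iter=1000)
        W = model.fit_transform(V)
        H = model.components_
        H = H / np.linalg.norm(H, axis=1, keepdims=True)   # normalise source shapes
        extracted.append(H)
    X = np.vstack(extracted)                               # 10*k candidate source signatures
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "sources -> silhouette", round(silhouette_score(X, labels), 3))
# The source count yielding tight, well-separated clusters (high silhouette) is retained.
```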

  16. Synthetic data sets for the identification of key ingredients for RNA-seq differential analysis.

    PubMed

    Rigaill, Guillem; Balzergue, Sandrine; Brunaud, Véronique; Blondet, Eddy; Rau, Andrea; Rogier, Odile; Caius, José; Maugis-Rabusseau, Cathy; Soubigou-Taconnat, Ludivine; Aubourg, Sébastien; Lurin, Claire; Martin-Magniette, Marie-Laure; Delannoy, Etienne

    2018-01-01

    Numerous statistical pipelines are now available for the differential analysis of gene expression measured with RNA-sequencing technology. Most of them are based on similar statistical frameworks after normalization, differing primarily in the choice of data distribution, mean and variance estimation strategy and data filtering. We propose an evaluation of the impact of these choices when few biological replicates are available through the use of synthetic data sets. This framework is based on real data sets and allows the exploration of various scenarios differing in the proportion of non-differentially expressed genes. Hence, it provides an evaluation of the key ingredients of the differential analysis, free of the biases associated with the simulation of data using parametric models. Our results show the relevance of a proper modeling of the mean by using linear or generalized linear modeling. Once the mean is properly modeled, the impact of the other parameters on the performance of the test is much less important. Finally, we propose to use the simple visualization of the raw P-value histogram as a practical evaluation criterion of the performance of differential analysis methods on real data sets. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
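
    The closing recommendation, inspecting the raw P-value histogram as a quick check of a differential analysis, is easy to illustrate. The sketch below uses a generic simulated mixture (not the authors' synthetic data sets): non-differential genes contribute an approximately flat histogram, and differential genes add a spike near zero.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Hypothetical mixture: 90% non-differential genes (uniform P-values) and
# 10% differential genes (P-values concentrated near zero).
p_values = np.concatenate([rng.uniform(0.0, 1.0, size=9000),
                           rng.beta(0.2, 5.0, size=1000)])

plt.hist(p_values, bins=50)
plt.xlabel("raw P-value")
plt.ylabel("number of genes")
plt.title("Flat background plus a spike near 0 indicates a well-behaved test")
plt.show()
```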

  17. How enzymes can capture and transmit free energy from an oscillating electric field.

    PubMed

    Westerhoff, H V; Tsong, T Y; Chock, P B; Chen, Y D; Astumian, R D

    1986-07-01

    Recently, it has been demonstrated that free energy from an alternating electric field can drive the active transport of Rb+ by way of the Na+, K+-ATPase. In the present work, it is shown why many transmembrane enzymes can be expected to absorb free energy from an oscillating electric field and transduce that to chemical or transport work. In the theoretical analysis it turned out to be sufficient that (i) the catalytic process be accompanied by either net or cyclic charge translocation across the membrane and (ii) the stability of the enzyme states involved be asymmetric. Calculations based on a four-state model reveal that free-energy transduction occurs with sinusoidal, square-wave, and positive-only oscillating electric fields and for cases that exhibit either linear or exponential field-dependent rate constants. The results suggest that in addition to oscillating electric field-driven transport, the proposed mechanism can also be used to explain, in part, the "missing" free energy term in the cases in which ATP synthesis has been observed with insufficient transmembrane proton electrochemical potential difference.

  18. How enzymes can capture and transmit free energy from an oscillating electric field.

    PubMed Central

    Westerhoff, H V; Tsong, T Y; Chock, P B; Chen, Y D; Astumian, R D

    1986-01-01

    Recently, it has been demonstrated that free energy from an alternating electric field can drive the active transport of Rb+ by way of the Na+, K+-ATPase. In the present work, it is shown why many transmembrane enzymes can be expected to absorb free energy from an oscillating electric field and transduce that to chemical or transport work. In the theoretical analysis it turned out to be sufficient that (i) the catalytic process be accompanied by either net or cyclic charge translocation across the membrane and (ii) the stability of the enzyme states involved be asymmetric. Calculations based on a four-state model reveal that free-energy transduction occurs with sinusoidal, square-wave, and positive-only oscillating electric fields and for cases that exhibit either linear or exponential field-dependent rate constants. The results suggest that in addition to oscillating electric field-driven transport, the proposed mechanism can also be used to explain, in part, the "missing" free energy term in the cases in which ATP synthesis has been observed with insufficient transmembrane proton electrochemical potential difference. PMID:2941758

  19. Main rotor free wake geometry effects on blade air loads and response for helicopters in steady maneuvers. Volume 1: Theoretical formulation and analysis of results

    NASA Technical Reports Server (NTRS)

    Sadler, S. G.

    1972-01-01

    A mathematical model and computer program were implemented to study the main rotor free wake geometry effects on helicopter rotor blade air loads and response in steady maneuvers. The theoretical formulation and analysis of results are presented.

  20. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  1. Label-Free Raman Microspectral Analysis for Comparison of Cellular Uptake and Distribution between Non-Targeted and EGFR-Targeted Biodegradable Polymeric Nanoparticles

    PubMed Central

    Chernenko, Tatyana; Buyukozturk, Fulden; Miljkovic, Milos; Carrier, Rebecca; Diem, Max; Amiji, Mansoor

    2013-01-01

    Active targeted delivery of nanoparticle-encapsulated agents to tumor cells in vivo is expected to enhance the therapeutic effect with significantly less non-specific toxicity. Active targeting is based on surface modification of nanoparticles with ligands that bind extracellular targets and enhance payload delivery into the cells. In this study, we used label-free Raman micro-spectral analysis and kinetic modeling to study cellular interactions and intracellular delivery of C6-ceramide using non-targeted and epidermal growth factor receptor (EGFR)-targeted biodegradable polymeric nano-delivery systems in EGFR-expressing human ovarian adenocarcinoma (SKOV3) cells. The results show that EGFR peptide-modified nanoparticles were rapidly internalized in SKOV3 cells, leading to significant intracellular accumulation compared with the non-specific uptake of the non-targeted nanoparticles. Raman micro-spectral analysis enables visualization and quantification of the carrier system, drug load, and responses of the biological systems interrogated, without exogenous staining and labeling procedures. PMID:24298430

  2. Critical asymmetry in renormalization group theory for fluids.

    PubMed

    Zhao, Wei; Wu, Liang; Wang, Long; Li, Liyan; Cai, Jun

    2013-06-21

    The renormalization-group (RG) approaches for fluids are employed to investigate critical asymmetry of vapour-liquid equilibrium (VLE) of fluids. Three different approaches based on RG theory for fluids are reviewed and compared. RG approaches are applied to various fluid systems: hard-core square-well fluids of variable ranges, hard-core Yukawa fluids, and square-well dimer fluids and modelling VLE of n-alkane molecules. Phase diagrams of simple model fluids and alkanes described by RG approaches are analyzed to assess the capability of describing the VLE critical asymmetry which is suggested in complete scaling theory. Results of thermodynamic properties obtained by RG theory for fluids agree with the simulation and experimental data. Coexistence diameters, which are smaller than the critical densities, are found in the RG descriptions of critical asymmetries of several fluids. Our calculation and analysis show that the approach coupling local free energy with White's RG iteration which aims to incorporate density fluctuations into free energy is not adequate for VLE critical asymmetry due to the inadequate order parameter and the local free energy functional used in the partition function.

  3. Cost-Effectiveness of Pembrolizumab Versus Ipilimumab in Ipilimumab-Naïve Patients with Advanced Melanoma in the United States.

    PubMed

    Wang, Jingshu; Chmielowski, Bartosz; Pellissier, James; Xu, Ruifeng; Stevinson, Kendall; Liu, Frank Xiaoqing

    2017-02-01

    Recent clinical trials have shown that pembrolizumab significantly prolonged progression-free survival and overall survival compared with ipilimumab in ipilimumab-naïve patients with unresectable or metastatic melanoma. However, there has been no published evidence on the cost-effectiveness of pembrolizumab for this indication. To assess the long-term cost-effectiveness of pembrolizumab versus ipilimumab in ipilimumab-naïve patients with unresectable or metastatic melanoma from a U.S. integrated health system perspective. A partitioned-survival model was developed, which divided overall survival time into progression-free survival and postprogression survival. The model used Kaplan-Meier estimates of progression-free survival and overall survival from a recent randomized phase 3 study (KEYNOTE-006) that compared pembrolizumab and ipilimumab. Extrapolation of progression-free survival and overall survival beyond the clinical trial was based on parametric functions and literature data. The base-case time horizon was 20 years, and costs and health outcomes were discounted at a rate of 3% per year. Clinical data, including progression-free survival and overall survival data spanning a median follow-up time of 15 months as well as quality-of-life and adverse event data from the ongoing KEYNOTE-006 trial, and cost data from public sources were used to populate the model. Costs included those of drug acquisition, treatment administration, adverse event management, and disease management of advanced melanoma. The incremental cost-effectiveness ratio (ICER), expressed as the cost difference per quality-adjusted life-year (QALY) gained, was the main outcome, and a series of sensitivity analyses were performed to test the robustness of the results. In the base case, pembrolizumab was projected to increase the life expectancy of U.S. patients with advanced melanoma by 1.14 years, corresponding to a gain of 0.79 discounted QALYs over ipilimumab. The model also projected an average increase of $63,680 in discounted per-patient costs of treatment with pembrolizumab versus ipilimumab. The corresponding ICER was $81,091 per QALY ($68,712 per life-year) over a 20-year time horizon. With $100,000 per QALY as the threshold, when input parameters were varied in deterministic one-way sensitivity analyses, the use of pembrolizumab was cost-effective relative to ipilimumab across most ranges. Further, in a comprehensive probabilistic sensitivity analysis, pembrolizumab was cost-effective in 83% of the simulations. Compared with ipilimumab, pembrolizumab had higher expected QALYs and was cost-effective for the treatment of patients with unresectable or metastatic melanoma from a U.S. integrated health system perspective. This study was supported by funding from Merck & Co., which reviewed and approved the manuscript before journal submission. Wang, Pellissier, Xu, Stevinson, and Liu are employees of, and own stock in, Merck & Co. Chmielowski has served as a paid consultant for Merck & Co. and received a consultant fee for clinical input in connection with this study. Chmielowski also reports receiving advisory board and speaker bureau fees from multiple major pharmaceutical companies. Wang led the modeling and writing of the manuscript. Chmielowski, Xu, Stevinson, and Pellissier contributed substantially to the modeling design and methodology. Liu led the data collection work and contributed substantially to writing the manuscript. In conducting the analysis and writing the manuscript, the authors followed Merck publication policies, the "Cost-Effectiveness Analysis Alongside Clinical Trials: Good Research Practices" guidance, and the CHEERS reporting format as recommended by the International Society for Pharmacoeconomics and Outcomes Research.
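
    For readers unfamiliar with the arithmetic, the headline result reduces to a simple ratio of the reported incremental cost to the reported incremental QALYs; the small gap relative to the published $81,091/QALY reflects model details (discounting conventions, rounding) not reproduced in this back-of-the-envelope check.

```python
# Reported incremental outcomes from the partitioned-survival model
delta_cost = 63_680   # discounted incremental cost per patient (USD)
delta_qaly = 0.79     # discounted incremental QALYs gained

icer = delta_cost / delta_qaly
print(f"ICER approx. ${icer:,.0f} per QALY gained")   # about $80,600/QALY

# Decision rule against the willingness-to-pay threshold used in the study
threshold = 100_000
print("cost-effective" if icer < threshold else "not cost-effective")
```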

  4. Stabilizing effect of resistivity towards ELM-free H-mode discharge in lithium-conditioned NSTX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Debabrata; Zhu, Ping; Maingi, Rajesh

    Linear stability analysis of the national spherical torus experiment (NSTX) Li-conditioned ELM-free H-mode equilibria is carried out in the context of the extended magneto-hydrodynamic (MHD) model in NIMROD. Our purpose is to investigate the physical cause behind edge localized mode (ELM) suppression in experiment after the Li-coating of the divertor and the first wall of the NSTX tokamak. Besides ideal MHD modeling, including finite-Larmor radius effect and two-fluid Hall and electron diamagnetic drift contributions, a non-ideal resistivity model is employed, taking into account the increase of Z eff after Li-conditioning in ELM-free H-mode. Unlike an earlier conclusion from an eigenvalue code analysis of these equilibria, NIMROD results find that after reduced recycling from divertor plates, profile modification is necessary but insufficient to explain the mechanism behind complete ELM suppression in ideal two-fluid MHD. After considering the higher plasma resistivity due to higher Z eff, the complete stabilization could be explained. Furthermore, a thorough analysis of both pre-lithium ELMy and with-lithium ELM-free cases using ideal and non-ideal MHD models is presented, after accurately including a vacuum-like cold halo region in NIMROD to investigate ELMs.

  5. Stabilizing effect of resistivity towards ELM-free H-mode discharge in lithium-conditioned NSTX

    DOE PAGES

    Banerjee, Debabrata; Zhu, Ping; Maingi, Rajesh

    2017-05-12

    Linear stability analysis of the national spherical torus experiment (NSTX) Li-conditioned ELM-free H-mode equilibria is carried out in the context of the extended magneto-hydrodynamic (MHD) model in NIMROD. Our purpose is to investigate the physical cause behind edge localized mode (ELM) suppression in experiment after the Li-coating of the divertor and the first wall of the NSTX tokamak. Besides ideal MHD modeling, including finite-Larmor radius effect and two-fluid Hall and electron diamagnetic drift contributions, a non-ideal resistivity model is employed, taking into account the increase of Z eff after Li-conditioning in ELM-free H-mode. Unlike an earlier conclusion from an eigenvalue code analysis of these equilibria, NIMROD results find that after reduced recycling from divertor plates, profile modification is necessary but insufficient to explain the mechanism behind complete ELM suppression in ideal two-fluid MHD. After considering the higher plasma resistivity due to higher Z eff, the complete stabilization could be explained. Furthermore, a thorough analysis of both pre-lithium ELMy and with-lithium ELM-free cases using ideal and non-ideal MHD models is presented, after accurately including a vacuum-like cold halo region in NIMROD to investigate ELMs.

  6. Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Alder, J.; van Griensven, A.; Meixner, T.

    2003-12-01

    Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web, and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g., sum of squared errors, sum of absolute differences, etc.), top-ten simulation tables and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error-surface graphs of the parameter space. IHM is suitable for everything from the simplest bucket model to the largest set of Monte Carlo simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility, in the sense that they can be anywhere in the world using any operating system. IHM can be a time- and money-saving alternative to producing graphs or conducting analysis that may not be informative, or to purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results and is suitable for novice to expert hydrologic modelers.
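
    As a standalone illustration of one of the plot types listed above, the sketch below builds an objective-based dotty plot from a batch of Monte Carlo runs; the toy model, parameter range, and objective are placeholders, and the figure is produced locally rather than through the IHM web interface.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

def toy_model(k, t):
    return np.exp(-k * t)   # stand-in "bucket model" response

# Synthetic observations and a Monte Carlo batch over one parameter
t_obs = np.linspace(0.0, 5.0, 20)
obs = toy_model(0.7, t_obs) + rng.normal(0.0, 0.02, t_obs.size)
k_samples = rng.uniform(0.1, 2.0, 5000)
sse = np.array([np.sum((toy_model(k, t_obs) - obs) ** 2) for k in k_samples])

# Dotty plot: one dot per simulation, objective value versus parameter value
plt.scatter(k_samples, sse, s=2)
plt.xlabel("parameter k")
plt.ylabel("sum of squared errors")
plt.title("Objective-based dotty plot from a Monte Carlo batch")
plt.show()
```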

  7. A damage analysis for brittle materials using stochastic micro-structural information

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Po; Chen, Jiun-Shyan; Liang, Shixue

    2016-03-01

    In this work, a micro-crack informed stochastic damage analysis is performed to consider the failure of materials with stochastic microstructures. The derivation of the damage evolution law is based on the Helmholtz free energy equivalence between the cracked microstructure and the homogenized continuum. The damage model is constructed under the stochastic representative volume element (SRVE) framework. The characteristics of the SRVE used in the construction of the stochastic damage model have been investigated based on the principle of minimum potential energy. The mesh dependency issue has been addressed by introducing a scaling law into the damage evolution equation. The proposed methods are then validated through comparison between numerical simulations and experimental observations of a high-strength concrete. It is observed that the standard deviation of porosity in the microstructures has a stronger effect on the damage states and the peak stresses than on the Young's and shear moduli in the macro-scale responses.

  8. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  9. Transient Two-Dimensional Analysis of Side Load in Liquid Rocket Engine Nozzles

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2004-01-01

    Two-dimensional planar and axisymmetric numerical investigations on the nozzle start-up side load physics were performed. The objective of this study is to develop a computational methodology to identify nozzle side load physics using simplified two-dimensional geometries, in order to come up with a computational strategy to eventually predict the three-dimensional side loads. The computational methodology is based on a multidimensional, finite-volume, viscous, chemically reacting, unstructured-grid, and pressure-based computational fluid dynamics formulation, and a transient inlet condition based on an engine system modeling. The side load physics captured in the low aspect-ratio, two-dimensional planar nozzle include the Coanda effect, afterburning wave, and the associated lip free-shock oscillation. Results of parametric studies indicate that equivalence ratio, combustion and ramp rate affect the side load physics. The side load physics inferred in the high aspect-ratio, axisymmetric nozzle study include the afterburning wave; transition from free-shock to restricted-shock separation, reverting back to free-shock separation, and transforming to restricted-shock separation again; and lip restricted-shock oscillation. The Mach disk loci and wall pressure history studies reconfirm that combustion and the associated thermodynamic properties affect the formation and duration of the asymmetric flow.

  10. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates

    PubMed Central

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene

    2016-01-01

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
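
    A schematic of the two-level hierarchy described above is given below; the notation is ours, and the actual stratification by infection year and the temporally dependent priors are richer than shown.

```latex
% Level 1: latent HIV infections acquired in year t, with incidence rate lambda_t
% Level 2: those infections split into AIDS diagnoses, AIDS-free HIV diagnoses,
%          and undiagnosed cases, with cell probabilities driven by the
%          testing rate theta
\begin{align*}
  N_t &\sim \operatorname{Poisson}(\lambda_t), \\
  \mathbf{y}_t \mid N_t &\sim \operatorname{Multinomial}\bigl(N_t,\ \mathbf{p}_t(\theta)\bigr).
\end{align*}
```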

  11. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    PubMed

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
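
    LMMgui itself wraps lme4 in R; as a language-neutral illustration of the same within-participant random-effects structure, here is a minimal sketch using Python's statsmodels (the file name and column names are hypothetical).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical within-participant data: one row per trial, with a repeated
# "condition" factor and a "participant" grouping variable.
data = pd.read_csv("experiment_trials.csv")  # columns: rt, condition, participant

# Fixed effect of condition on reaction time, random intercept per participant
model = smf.mixedlm("rt ~ condition", data, groups=data["participant"])
result = model.fit()
print(result.summary())
```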

  12. Theoretical analysis of co-solvent effect on the proton transfer reaction of glycine in a water-acetonitrile mixture

    NASA Astrophysics Data System (ADS)

    Kasai, Yukako; Yoshida, Norio; Nakano, Haruyuki

    2015-05-01

    The co-solvent effect on the proton transfer reaction of glycine in a water-acetonitrile mixture was examined using the reference interaction-site model self-consistent field theory. The free energy profiles of the proton transfer reaction of glycine between the carboxyl oxygen and amino nitrogen were computed in a water-acetonitrile mixture solvent at various molar fractions. Two types of reactions, the intramolecular proton transfer and water-mediated proton transfer, were considered. In both types of the reactions, a similar tendency was observed. In the pure water solvent, the zwitterionic form, where the carboxyl oxygen is deprotonated while the amino nitrogen is protonated, is more stable than the neutral form. The reaction free energy is -10.6 kcal/mol. On the other hand, in the pure acetonitrile solvent, glycine takes only the neutral form. The reaction free energy from the neutral to zwitterionic form gradually increases with increasing acetonitrile concentration, and in an equally mixed solvent, the zwitterionic and neutral forms are almost isoenergetic, with a difference of only 0.3 kcal/mol. The free energy component analysis based on the thermodynamic cycle of the reaction also revealed that the free energy change of the neutral form is insensitive to the change of solvent environment but the zwitterionic form shows drastic changes. In particular, the excess chemical potential, one of the components of the solvation free energy, is dominant and contributes to the stabilization of the zwitterionic form.

  13. Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders

    NASA Astrophysics Data System (ADS)

    Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei

    2018-03-01

    A novel process method combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, the density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters. The friction coefficients in lubricated and unlubricated dies were experimentally determined. The determined material parameters were validated by experiments and by numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders, and they can be further used for process optimization simulations.

  14. Numerical study on wave loads and motions of two ships advancing in waves by using three-dimensional translating-pulsating source

    NASA Astrophysics Data System (ADS)

    Xu, Yong; Dong, Wen-Cai

    2013-08-01

    A frequency domain analysis method based on the three-dimensional translating-pulsating (3DTP) source Green function is developed to investigate wave loads and free motions of two ships advancing on a parallel course in waves. Two experiments are carried out to measure, respectively, the wave loads and the free motions for a pair of side-by-side arranged ship models advancing at an identical speed in head regular waves. For comparison, each model is also tested alone. Predictions obtained by the present solution are found to be in favorable agreement with the model tests and are more accurate than those of the traditional method based on the three-dimensional pulsating (3DP) source Green function. Numerical resonances and peak shifts can be found in the 3DP predictions, which result from the wave energy trapped in the gap between the two ships and the extremely inhomogeneous wave load distribution on each hull. However, they can be eliminated by 3DTP, in which the forward speed affects the free surface and most of the wave energy can escape from the gap. Both the experiment and the present prediction show that hydrodynamic interaction effects on wave loads and free motions are significant. The present solver may serve as a validated tool to predict wave loads and motions of two vessels under replenishment at sea, and may help to evaluate the hydrodynamic interaction effects on ship safety in replenishment operations.

  15. Quantifying discrimination of Framingham risk functions with different survival C statistics.

    PubMed

    Pencina, Michael J; D'Agostino, Ralph B; Song, Linye

    2012-07-10

    Cardiovascular risk prediction functions offer an important diagnostic tool for clinicians and patients themselves. They are usually constructed with the use of parametric or semi-parametric survival regression models. It is essential to be able to evaluate the performance of these models, preferably with summaries that offer natural and intuitive interpretations. The concept of discrimination, popular in the logistic regression context, has been extended to survival analysis. However, the extension is not unique. In this paper, we define discrimination in survival analysis as the model's ability to separate those with longer event-free survival from those with shorter event-free survival within some time horizon of interest. This definition remains consistent with that used in logistic regression, in the sense that it assesses how well the model-based predictions match the observed data. Practical and conceptual examples and numerical simulations are employed to examine four C statistics proposed in the literature to evaluate the performance of survival models. We observe that they differ in the numerical values and aspects of discrimination that they capture. We conclude that the index proposed by Harrell is the most appropriate to capture discrimination described by the above definition. We suggest researchers report which C statistic they are using, provide a rationale for their selection, and be aware that comparing different indices across studies may not be meaningful. Copyright © 2012 John Wiley & Sons, Ltd.
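
    As a concrete reference point, Harrell's C (the index the authors recommend) is the fraction of usable subject pairs in which the subject with longer event-free survival also received the more favorable model-based prediction. The sketch below uses the third-party lifelines package, which is not mentioned in the paper, together with made-up follow-up times, risk scores, and event indicators.

```python
from lifelines.utils import concordance_index

# Hypothetical data: follow-up times (years), model risk scores, event indicators
event_times = [5.0, 8.2, 3.1, 10.4, 7.7]
predicted_risk = [0.8, 0.3, 0.9, 0.1, 0.4]   # higher = higher predicted risk
event_observed = [1, 0, 1, 0, 1]             # 1 = event occurred, 0 = censored

# concordance_index expects scores oriented so that larger values correspond
# to longer survival, so pass the negated risk score.
c = concordance_index(event_times, [-r for r in predicted_risk], event_observed)
print(f"Harrell's C = {c:.2f}")
```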

  16. Sensitivity analysis of free vibration characteristics of an in situ railway concrete sleeper to variations of rail pad parameters

    NASA Astrophysics Data System (ADS)

    Kaewunruen, Sakdirat; Remennikov, Alex M.

    2006-11-01

    The vibration of in situ concrete sleepers in a railway track structure is a major factor causing cracking of prestressed concrete sleepers and excessive railway track maintenance cost. Not only does the ballast interact with the sleepers, but the rail pads also take part in affecting their free vibration characteristics. This paper presents a sensitivity analysis of free vibration behaviors of an in situ railway concrete sleeper (standard gauge sleeper), incorporating sleeper/ballast interaction, subjected to the variations of rail pad properties. Through finite element analysis, Timoshenko-beam and spring elements were used in the in situ railway concrete sleeper modeling. This model highlights the influence of rail pad parameters on the free vibration characteristics of in situ sleepers. In addition, information on the first five flexural vibration modes indicates the dynamic performance of railway track when using different types of rail pads, as it plays a vital role in the cracking deterioration of concrete sleepers.

  17. Habitat of in vivo transformation influences the levels of free radical scavengers in Clinostomum complanatum: implications for free radical scavenger based vaccines against trematode infections.

    PubMed

    Zafar, Atif; Rizvi, Asim; Ahmad, Irshad; Ahmad, Masood

    2014-01-01

    Since free radical scavengers of parasite origin like glutathione-S-transferase and superoxide dismutase are being explored as prospective vaccine targets, availability of these molecules within the parasite infecting different hosts as well as different sites of infection is of considerable importance. Using Clinostomum complanatum, as a model helminth parasite, we analysed the effects of habitat of in vivo transformation on free radical scavengers of this trematode parasite. Using three different animal models for in vivo transformation and markedly different sites of infection, progenetic metacercaria of C. complanatum were transformed to adult ovigerous worms. Whole worm homogenates were used to estimate the levels of lipid peroxidation, a marker of oxidative stress and free radical scavengers. Site of in vivo transformation was found to drastically affect the levels of free radical scavengers in this model trematode parasite. It was observed that oxygen availability at the site of infection probably influences levels of free radical scavengers in trematode parasites. This is the first report showing that habitat of in vivo transformation affects levels of free radical scavengers in trematode parasites. Since free radical scavengers are prospective vaccine targets and parasite infection at ectopic sites is common, we propose that infections at different sites, may respond differently to free radical scavenger based vaccines.

  18. Equilibrium Fluctuation Relations for Voltage Coupling in Membrane Proteins

    PubMed Central

    Kim, Ilsoo; Warshel, Arieh

    2015-01-01

    A general theoretical framework is developed to account for the effects of an external potential on the energetics of membrane proteins. The framework is based on the free energy relation between two (forward/backward) probability densities, which was recently generalized to non-equilibrium processes, culminating in the work-fluctuation theorem. Starting from the probability densities of the conformational states along the reaction coordinate of "voltage coupling", we investigate several interconnected free energy relations between these two conformational states, considering voltage activation of ion channels. The free energy difference at zero membrane potential (i.e., between the two "non-equilibrium" conformational states) is shown to be equivalent to the free energy difference between the two "equilibrium" conformational states along the one-dimensional reaction coordinate of voltage coupling. Furthermore, the requirement that the application of the linear response approximation to the free energy functions (free energies) of voltage coupling should satisfy the general free energy relations yields a novel expression for the gating charge in terms of other experimentally measurable quantities. This connection is familiar in statistical mechanics, known as the equilibrium fluctuation-response relation. The theory is illustrated by considering the movement of a unit charge within the membrane under the influence of an external potential, using a coarse-grained (CG) model of membrane proteins, which includes the membrane, the electrolytes and the electrodes. The CG model yields Marcus-type voltage-dependent free energy parabolas for the two conformational states, which allow for quantitative estimation of an equilibrium free energy difference, a free energy barrier, and the voltage dependence of channel activation (Q-V curve) for the unit charge movement. In addition, our analysis offers a quantitative rationale for the correlation between the free energy landscapes (parabolas) and the Q-V curve upon site-directed mutagenesis or drug binding. Taken together, by introducing voltage coupling as an energy-gap reaction coordinate, the present theory offers a firm physical foundation, rooted in the equilibrium theory of statistical mechanics, for thermodynamic models of voltage activation in voltage-sensitive membrane proteins. This formulation also provides a powerful bridge between the CG model and conventional macroscopic treatments, offering an intuitive and quantitative framework for a better understanding of the structure-function correlations of voltage gating in ion channels as well as electrogenic phenomena in ion pumps and transporters. PMID:26290960
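
    For orientation, the connection between the state free energies and the Q-V curve can be written down in the simplest two-state, linear-coupling reduction; this is a textbook schematic, not the paper's CG formulation, and Q_g denotes the gating charge.

```latex
% Two-state reduction of voltage activation (illustrative only): the free
% energy gap shifts linearly with the applied voltage, and the activation
% probability follows a Boltzmann Q-V curve.
\begin{align*}
  \Delta G(V) &= \Delta G(0) - Q_g V, \\
  P_{\mathrm{active}}(V) &= \frac{1}{1 + \exp\!\bigl[\Delta G(V)/k_B T\bigr]}.
\end{align*}
```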

  19. Analysis of electron transfer processes across liquid/liquid interfaces: estimation of free energy of activation using diffuse boundary model.

    PubMed

    Harinipriya, S; Sangaranarayanan, M V

    2006-01-31

    The evaluation of the free energy of activation pertaining to the electron-transfer reactions occurring at liquid/liquid interfaces is carried out employing a diffuse boundary model. The interfacial solvation numbers are estimated using a lattice gas model under the quasichemical approximation. The standard reduction potentials of the redox couples, appropriate inner potential differences, dielectric permittivities, as well as the width of the interface are included in the analysis. The methodology is applied to the reaction between [Fe(CN)6](3-/4-) and [Lu(biphthalocyanine)](3+/4+) at water/1,2-dichloroethane interface. The rate-determining step is inferred from the estimated free energy of activation for the constituent processes. The results indicate that the solvent shielding effect and the desolvation of the reactants at the interface play a central role in dictating the free energy of activation. The heterogeneous electron-transfer rate constant is evaluated from the molar reaction volume and the frequency factor.

  20. Reconstruction of the Foot and Ankle Using Pedicled or Free Flaps: Perioperative Flap Survival Analysis

    PubMed Central

    Li, Xiucun; Cui, Jianli; Maharjan, Suraj; Lu, Laijin; Gong, Xu

    2016-01-01

    Objective The purpose of this study is to determine the correlation between non-technical risk factors and the perioperative flap survival rate and to evaluate the choice of skin flap for the reconstruction of foot and ankle. Methods This was a clinical retrospective study. Nine variables were identified. The Kaplan-Meier method coupled with a log-rank test and a Cox regression model was used to predict the risk factors that influence the perioperative flap survival rate. The relationship between postoperative wound infection and risk factors was also analyzed using a logistic regression model. Results The overall flap survival rate was 85.42%. The necrosis rates of free flaps and pedicled flaps were 5.26% and 20.69%, respectively. According to the Cox regression model, flap type (hazard ratio [HR] = 2.592; 95% confidence interval [CI] (1.606, 4.184); P < 0.001) and postoperative wound infection (HR = 0.266; 95% CI (0.134, 0.529); P < 0.001) were found to be statistically significant risk factors associated with flap necrosis. Based on the logistic regression model, preoperative wound bed inflammation (odds ratio [OR] = 11.371,95% CI (3.117, 41.478), P < 0.001) was a statistically significant risk factor for postoperative wound infection. Conclusion Flap type and postoperative wound infection were both independent risk factors influencing the flap survival rate in the foot and ankle. However, postoperative wound infection was a risk factor for the pedicled flap but not for the free flap. Microvascular anastomosis is a major cause of free flap necrosis. To reconstruct complex or wide soft tissue defects of the foot or ankle, free flaps are safer and more reliable than pedicled flaps and should thus be the primary choice. PMID:27930679

  1. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
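
    The envelope-spectrum comparison mentioned above can be reproduced with standard signal-processing tools. The sketch below computes an envelope spectrum by Hilbert demodulation; the synthetic vibration record, sampling rate, and modulation frequency are placeholders, and no bearing-specific fault frequencies are assumed.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the signal envelope (Hilbert demodulation)."""
    envelope = np.abs(hilbert(x))
    envelope -= envelope.mean()   # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum

# Hypothetical record: 97 Hz impacts amplitude-modulating a 3 kHz resonance,
# plus noise, sampled at 20 kHz.
fs = 20_000
t = np.arange(0.0, 2.0, 1.0 / fs)
x = (1.0 + 0.5 * np.cos(2 * np.pi * 97 * t)) * np.sin(2 * np.pi * 3000 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

freqs, spec = envelope_spectrum(x, fs)
print("dominant envelope frequency:", freqs[np.argmax(spec)], "Hz")  # ~97 Hz
```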

  2. Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning

    PubMed Central

    Bath, Kevin G.; Daw, Nathaniel D.; Frank, Michael J.

    2016-01-01

    Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by “model-free” learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by “model-based” learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. SIGNIFICANCE STATEMENT Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes. Research implicates a dopamine-dependent striatal learning mechanism in the former type of choice. Although recent work has indicated that dopamine is also involved in flexible, goal-directed decision-making, it remains unclear whether it also contributes via striatum or via the dopamine-dependent working memory function of prefrontal cortex. We examined genetic indices of dopamine function in these regions and their relation to the two choice strategies. We found that striatal dopamine function related most clearly to the reflexive strategy, as previously shown, and that prefrontal dopamine related most clearly to the flexible strategy. These findings suggest that dissociable brain regions support dissociable choice strategies. PMID:26818509

  3. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models.

    PubMed

    Cotten, Cameron; Reed, Jennifer L

    2013-01-30

    Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
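
    To make the last point concrete, constraint-based (flux balance) predictions are linear programs, and kinetically derived flux limits enter simply as tighter variable bounds. The toy network and numbers below are illustrative assumptions, not the model or data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network with one internal metabolite A:
#   v1: uptake -> A,   v2: A -> biomass,   v3: A -> byproduct
S = np.array([[1.0, -1.0, -1.0]])          # steady state: S @ v = 0
c = np.array([0.0, -1.0, 0.0])             # maximize v2  <=>  minimize -v2

bounds_default = [(0, 10), (0, 10), (0, 10)]
# Hypothetical kinetically derived limit: the rate law caps uptake v1 at 6
bounds_kinetic = [(0, 6), (0, 10), (0, 10)]

for label, bounds in [("default bounds", bounds_default),
                      ("kinetic flux limits", bounds_kinetic)]:
    res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
    print(f"{label}: max biomass flux = {-res.fun:.1f}")
```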

  4. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models

    PubMed Central

    2013-01-01

    Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets. PMID:23360254

  5. Impact of Colic Pain as a Significant Factor for Predicting the Stone Free Rate of One-Session Shock Wave Lithotripsy for Treating Ureter Stones: A Bayesian Logistic Regression Model Analysis

    PubMed Central

    Chung, Doo Yong; Cho, Kang Su; Lee, Dae Hun; Han, Jang Hee; Kang, Dong Hyuk; Jung, Hae Do; Kown, Jong Kyou; Ham, Won Sik; Choi, Young Deuk; Lee, Joo Yong

    2015-01-01

    Purpose This study was conducted to evaluate colic pain as a prognostic pretreatment factor that can influence ureter stone clearance and to estimate the probability of stone-free status in shock wave lithotripsy (SWL) patients with a ureter stone. Materials and Methods We retrospectively reviewed the medical records of 1,418 patients who underwent their first SWL between 2005 and 2013. Among these patients, 551 had a ureter stone measuring 4–20 mm and were thus eligible for our analyses. Colic pain as the chief complaint was defined as subjective flank pain reported during history taking or physical examination. Propensity scores for colic pain were calculated for each patient using multivariate logistic regression based upon the following covariates: age, maximal stone length (MSL), and mean stone density (MSD). Each factor was evaluated as a predictor of stone-free status using Bayesian and non-Bayesian logistic regression models. Results After propensity-score matching, 217 patients were extracted in each group from the total patient cohort. There were no statistical differences in the variables used in propensity-score matching. One-session success and stone-free rates were also higher in the painful group (73.7% and 71.0%, respectively) than in the painless group (63.6% and 60.4%, respectively). In multivariate non-Bayesian and Bayesian logistic regression models, a painful stone, shorter MSL, and lower MSD were significant factors for one-session stone-free status in patients who underwent SWL. Conclusions Colic pain in patients with ureteral calculi was, along with MSL and MSD, a significant predictor of one-session stone-free status after SWL. PMID:25902059
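
    A minimal sketch of the propensity-score step described above is given below; the file name and column names are placeholders, and neither the 1:1 matching nor the Bayesian logistic regression used in the paper is reproduced.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical SWL cohort with the three matching covariates and the
# "colic pain as chief complaint" indicator (0/1).
df = pd.read_csv("swl_patients.csv")  # columns: age, msl, msd, colic_pain, stone_free

# Propensity score: probability of presenting with colic pain given age,
# maximal stone length (MSL) and mean stone density (MSD).
covariates = ["age", "msl", "msd"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["colic_pain"])
df["propensity"] = ps_model.predict_proba(df[covariates])[:, 1]

# Nearest-neighbour 1:1 matching on the propensity score would follow here,
# after which stone-free rates are compared between the matched groups.
print(df[["colic_pain", "propensity"]].head())
```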

  6. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  7. A probabilistic approach for mine burial prediction

    NASA Astrophysics Data System (ADS)

    Barbu, Costin; Valent, Philip; Richardson, Michael; Abelev, Andrei; Plant, Nathaniel

    2004-09-01

    Predicting the degree of burial of mines in soft sediments is one of the main concerns of Naval Mine CounterMeasures (MCM) operations. This is a difficult problem to solve due to uncertainties and variability of the sediment parameters (i.e., density and shear strength) and of the mine state at contact with the seafloor (i.e., vertical and horizontal velocity, angular rotation rate, and pitch angle at the mudline). A stochastic approach is proposed in this paper to better incorporate the dynamic nature of free-falling cylindrical mines in the modeling of impact burial. The orientation, trajectory and velocity of cylindrical mines, after about 4 meters free-fall in the water column, are very strongly influenced by boundary layer effects causing quite chaotic behavior. The model's convolution of the uncertainty through its nonlinearity is addressed by employing Monte Carlo simulations. Finally a risk analysis based on the probability of encountering an undetectable mine is performed.
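
    A schematic of the Monte Carlo treatment of input uncertainty is sketched below; the sampled distributions and the burial relation are made-up placeholders, not the impact-burial model used by the authors.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Uncertain inputs at the mudline, sampled from assumed distributions
impact_velocity = rng.normal(3.0, 0.5, n)                      # m/s
pitch_angle = rng.uniform(0.0, 90.0, n)                        # degrees
shear_strength = rng.lognormal(mean=1.0, sigma=0.4, size=n)    # kPa

# Placeholder burial relation: deeper burial for fast, flat impacts in soft mud
burial_fraction = np.clip(
    0.1 * impact_velocity * np.cos(np.radians(pitch_angle)) / shear_strength,
    0.0, 1.0)

# Risk metric: probability that burial exceeds a detectability threshold
p_deep = np.mean(burial_fraction > 0.7)
print(f"P(burial fraction > 0.7) = {p_deep:.3f}")
```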

  8. Geometric and electrostatic modeling using molecular rigidity functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Xia, Kelin; Wei, Guowei

    Geometric and electrostatic modeling is an essential component in computational biophysics and molecular biology. Commonly used geometric representations admit geometric singularities such as cusps, tips and self-intersecting facets that lead to computational instabilities in the molecular modeling. Our present work explores the use of flexibility and rigidity index (FRI), which has a proved superiority in protein B-factor prediction, for biomolecular geometric representation and associated electrostatic analysis. FRI rigidity surfaces are free of geometric singularities. We propose a rigidity based Poisson–Boltzmann equation for biomolecular electrostatic analysis. These approaches to surface and electrostatic modeling are validated by a set of 21 proteins. Our results are compared with those of established methods. Finally, being smooth and analytically differentiable, FRI rigidity functions offer excellent curvature analysis, which characterizes concave and convex regions on protein surfaces. Polarized curvatures constructed by using the product of minimum curvature and electrostatic potential is shown to predict potential protein–ligand binding sites.

  9. Geometric and electrostatic modeling using molecular rigidity functions

    DOE PAGES

    Mu, Lin; Xia, Kelin; Wei, Guowei

    2017-03-01

    Geometric and electrostatic modeling is an essential component in computational biophysics and molecular biology. Commonly used geometric representations admit geometric singularities such as cusps, tips and self-intersecting facets that lead to computational instabilities in the molecular modeling. Our present work explores the use of flexibility and rigidity index (FRI), which has a proved superiority in protein B-factor prediction, for biomolecular geometric representation and associated electrostatic analysis. FRI rigidity surfaces are free of geometric singularities. We propose a rigidity based Poisson–Boltzmann equation for biomolecular electrostatic analysis. These approaches to surface and electrostatic modeling are validated by a set of 21 proteins. Our results are compared with those of established methods. Finally, being smooth and analytically differentiable, FRI rigidity functions offer excellent curvature analysis, which characterizes concave and convex regions on protein surfaces. Polarized curvatures constructed by using the product of minimum curvature and electrostatic potential is shown to predict potential protein–ligand binding sites.

  10. Model-based choices involve prospective neural activity

    PubMed Central

    Doll, Bradley B.; Duncan, Katherine D.; Simon, Dylan A.; Shohamy, Daphna; Daw, Nathaniel D.

    2015-01-01

    Decisions may arise via “model-free” repetition of previously reinforced actions, or by “model-based” evaluation, which is widely thought to follow from prospective anticipation of action consequences using a learned map or model. While choices and neural correlates of decision variables sometimes reflect knowledge of their consequences, it remains unclear whether this actually arises from prospective evaluation. Using functional MRI and a sequential reward-learning task in which paths contained decodable object categories, we found that humans’ model-based choices were associated with neural signatures of future paths observed at decision time, suggesting a prospective mechanism for choice. Prospection also covaried with the degree of model-based influences on neural correlates of decision variables, and was inversely related to prediction error signals thought to underlie model-free learning. These results dissociate separate mechanisms underlying model-based and model-free evaluation and support the hypothesis that model-based influences on choices and neural decision variables result from prospection. PMID:25799041

  11. Re-examining the tetraphenyl-arsonium/tetraphenyl-borate (TATB) hypothesis for single-ion solvation free energies

    NASA Astrophysics Data System (ADS)

    Pollard, Travis P.; Beck, Thomas L.

    2018-06-01

    Attempts to establish an absolute single-ion hydration free energy scale have followed multiple strategies. Two central themes consist of (1) employing bulk pair thermodynamic data and an underlying interfacial-potential-free model to partition the hydration free energy into individual contributions [Marcus, Latimer, and tetraphenyl-arsonium/tetraphenyl-borate (TATB) methods] or (2) utilizing bulk thermodynamic and cluster data to estimate the free energy to insert a proton into water, including in principle an interfacial potential contribution [the cluster pair approximation (CPA)]. While the results for the hydration free energy of the proton agree remarkably well between the three approaches in the first category, the value differs from the CPA result by roughly +10 kcal/mol, implying a value for the effective electrochemical surface potential of water of -0.4 V. This paper provides a computational re-analysis of the TATB method for single-ion free energies using quasichemical theory. A previous study indicated a significant discrepancy between the free energies of hydration for the TA cation and the TB anion. We show that the main contribution to this large computed difference is an electrostatic artifact arising from modeling interactions in periodic boundaries. No attempt is made here to develop more accurate models for the local ion/solvent interactions that may lead to further small free energy differences between the TA and TB ions, but the results clarify the primary importance of interfacial potential effects for analysis of the various free energy scales. Results related to the TATB assumption are also presented for the organic solvents dimethyl sulfoxide and 1,2-dichloroethane.
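
    In schematic form, the TATB hypothesis assumes that the large, nearly spherical TA+ and TB- ions have equal solvation free energies, so that bulk pair data can be split into single-ion values; the relations below are a compact statement of that assumption (notation ours, not the paper's):

        \Delta G_{\mathrm{hyd}}(\mathrm{TA}^{+}) \;\approx\; \Delta G_{\mathrm{hyd}}(\mathrm{TB}^{-}) \;\approx\; \tfrac{1}{2}\,\Delta G_{\mathrm{hyd}}(\mathrm{TA}^{+}\mathrm{TB}^{-}),
        \qquad
        \Delta G_{\mathrm{hyd}}(\mathrm{X}^{-}) \;=\; \Delta G_{\mathrm{hyd}}(\mathrm{TAX}) - \Delta G_{\mathrm{hyd}}(\mathrm{TA}^{+}).

    Because this partition carries no interfacial-potential term, the resulting scale differs from cluster-pair-based estimates that in principle retain one, which is the discrepancy discussed above.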

  12. Modal coupling procedures adapted to NASTRAN analysis of the 1/8-scale shuttle structural dynamics model. Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Zalesak, J.

    1975-01-01

    A dynamic substructuring analysis, utilizing the component modes technique, of the 1/8 scale space shuttle orbiter finite element model is presented. The analysis was accomplished in 3 phases, using NASTRAN RIGID FORMAT 3, with appropriate Alters, on the IBM 360-370. The orbiter was divided into 5 substructures, each of which was reduced to interface degrees of freedom and generalized normal modes. The reduced substructures were coupled to yield the first 23 symmetric free-free orbiter modes, and the eigenvectors in the original grid point degree of freedom lineup were recovered. A comparison was made with an analysis which was performed with the same model using the direct coordinate elimination approach. Eigenvalues were extracted using the inverse power method.

  13. Retrospective revaluation in sequential decision making: a tale of two systems.

    PubMed

    Gershman, Samuel J; Markman, Arthur B; Otto, A Ross

    2014-02-01

    Recent computational theories of decision making in humans and animals have portrayed 2 systems locked in a battle for control of behavior. One system--variously termed model-free or habitual--favors actions that have previously led to reward, whereas a second--called the model-based or goal-directed system--favors actions that causally lead to reward according to the agent's internal model of the environment. Some evidence suggests that control can be shifted between these systems using neural or behavioral manipulations, but other evidence suggests that the systems are more intertwined than a competitive account would imply. In 4 behavioral experiments, using a retrospective revaluation design and a cognitive load manipulation, we show that human decisions are more consistent with a cooperative architecture in which the model-free system controls behavior, whereas the model-based system trains the model-free system by replaying and simulating experience.
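
    The cooperative architecture described above resembles Dyna-style learning, in which a model-free value table directs behavior while a learned internal model replays simulated transitions to train it. The sketch below is a minimal, hypothetical illustration of that idea (state names, learning rate and replay count are arbitrary), not the authors' experimental task or fitted model.

        import random
        from collections import defaultdict

        alpha, gamma, n_replay = 0.1, 0.95, 10
        Q = defaultdict(float)      # model-free action values (control behavior)
        world_model = {}            # learned one-step model: (s, a) -> (reward, next state)

        def learn(s, a, r, s_next, actions):
            # Direct model-free update from real experience.
            best_next = max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            world_model[(s, a)] = (r, s_next)
            # Model-based replay: simulated experience retrains the model-free values.
            for _ in range(n_replay):
                (ps, pa), (pr, ps_next) = random.choice(list(world_model.items()))
                best = max(Q[(ps_next, b)] for b in actions)
                Q[(ps, pa)] += alpha * (pr + gamma * best - Q[(ps, pa)])

        learn("start", "left", 1.0, "goal", actions=["left", "right"])
        print(round(Q[("start", "left")], 3))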

  14. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    NASA Astrophysics Data System (ADS)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level in open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also applied the well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.
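
    As background for the comparison above, offset-free MPC schemes typically augment the plant model with an integrating disturbance whose estimate (from an observer or, here, from moving horizon estimation) absorbs the model mismatch. A schematic linear form is given below; it is a generic textbook construction, not the specific canal model of the study:

        x_{k+1} = A\,x_k + B\,u_k + B_d\,d_k, \qquad
        d_{k+1} = d_k, \qquad
        y_k = C\,x_k + C_d\,d_k .

    Estimating $(\hat{x}_k,\hat{d}_k)$ from past measurements and including $\hat{d}_k$ in the target calculation drives the controlled water level to its set point without steady-state offset.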

  15. Analysis of Composite Skin-Stiffener Debond Specimens Using Volume Elements and a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed-mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that, in spite of peak total energy release rates at the free edge, the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherends. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherends.
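
    For context, the virtual crack closure technique estimates the mixed-mode components from the nodal forces at the delamination front and the relative displacements of the node pair just behind it. A simplified per-unit-width form for a regular mesh is shown below (Δa is the element length at the front, b the element width; signs and correction factors vary by implementation, so this is indicative only):

        G_{I} \approx \frac{F_{z}\,\Delta w}{2\,\Delta a\, b}, \qquad
        G_{II} \approx \frac{F_{x}\,\Delta u}{2\,\Delta a\, b}, \qquad
        G_{III} \approx \frac{F_{y}\,\Delta v}{2\,\Delta a\, b}, \qquad
        G_{T} = G_{I} + G_{II} + G_{III}.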

  16. Cost-Effectiveness Model for Chemoimmunotherapy Options in Patients with Previously Untreated Chronic Lymphocytic Leukemia Unsuitable for Full-Dose Fludarabine-Based Therapy.

    PubMed

    Becker, Ursula; Briggs, Andrew H; Moreno, Santiago G; Ray, Joshua A; Ngo, Phuong; Samanta, Kunal

    2016-06-01

    To evaluate the cost-effectiveness of treatment with anti-CD20 monoclonal antibody obinutuzumab plus chlorambucil (GClb) in untreated patients with chronic lymphocytic leukemia unsuitable for full-dose fludarabine-based therapy. A Markov model was used to assess the cost-effectiveness of GClb versus other chemoimmunotherapy options. The model comprised three mutually exclusive health states: "progression-free survival (with/without therapy)", "progression (refractory/relapsed lines)", and "death". Each state was assigned a health utility value representing patients' quality of life and a specific cost value. Comparisons between GClb and rituximab plus chlorambucil or only chlorambucil were performed using patient-level clinical trial data; other comparisons were performed via a network meta-analysis using information gathered in a systematic literature review. To support the model, a utility elicitation study was conducted from the perspective of the UK National Health Service. There was good agreement between the model-predicted progression-free and overall survival and that from the CLL11 trial. On incorporating data from the indirect treatment comparisons, it was found that GClb was cost-effective with a range of incremental cost-effectiveness ratios below a threshold of £30,000 per quality-adjusted life-year gained, and remained so during deterministic and probabilistic sensitivity analyses under various scenarios. GClb was estimated to increase both quality-adjusted life expectancy and treatment costs compared with several commonly used therapies, with incremental cost-effectiveness ratios below commonly referenced UK thresholds. This article offers a real example of how to combine direct and indirect evidence in a cost-effectiveness analysis of oncology drugs. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
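
    The decision metric used throughout is the incremental cost-effectiveness ratio, i.e. the extra cost per extra quality-adjusted life-year relative to a comparator, judged against a willingness-to-pay threshold (here £30,000 per QALY); in schematic form:

        \mathrm{ICER} = \frac{C_{\mathrm{GClb}} - C_{\mathrm{comparator}}}{\mathrm{QALY}_{\mathrm{GClb}} - \mathrm{QALY}_{\mathrm{comparator}}} \;\lesssim\; \lambda = \pounds 30{,}000 \text{ per QALY}.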

  17. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235

  18. A novel BA complex network model on color template matching.

    PubMed

    Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching.
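
    To make the two rules concrete, the sketch below grows a toy Barabási–Albert network: each new node links to m existing nodes chosen with probability proportional to their current degree (implemented by sampling from a list of edge endpoints). It is a generic illustration of growth plus preferential attachment, not the color-space construction of the two records above.

        import random

        def ba_network(n_nodes, m=2, seed=0):
            random.seed(seed)
            edges = [(0, 1), (1, 2), (2, 0)]            # small seed triangle
            endpoints = [u for e in edges for u in e]   # degree-weighted sampling pool
            for new in range(3, n_nodes):
                targets = set()
                while len(targets) < m:                 # preferential attachment
                    targets.add(random.choice(endpoints))
                for t in targets:
                    edges.append((new, t))
                    endpoints += [new, t]
            return edges

        print(len(ba_network(50)))   # 3 seed edges + 2 per added node = 97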

  19. Device-Free Passive Identity Identification via WiFi Signals.

    PubMed

    Lv, Jiguang; Yang, Wu; Man, Dapeng

    2017-11-02

    Device-free passive identity identification has attracted much attention in recent years, and it is a representative application in sensorless sensing. It can be used in many applications such as intrusion detection and smart buildings. Previous studies show the sensing potential of WiFi signals in a device-free passive manner. It has been confirmed that a person's gait is unique, much like a fingerprint or iris. However, the identification accuracy of existing approaches is not satisfactory in practice. In this paper, we present Wii, a device-free WiFi-based Identity Identification approach utilizing human gait, based on the Channel State Information (CSI) of WiFi signals. Principal Component Analysis (PCA) and a low-pass filter are applied to remove noise from the signals. We then extract several gait features for each entity from both the time and frequency domains, and select the most effective features according to information gain. Based on these features, Wii realizes stranger recognition through a Gaussian Mixture Model (GMM) and identity identification through a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. It is implemented using commercial WiFi devices and evaluated on a dataset with more than 1500 gait instances collected from eight subjects walking in a room. The results indicate that Wii can effectively recognize strangers and can achieve high identification accuracy with low computational cost. As a result, Wii has the potential to work in typical home security systems.

  20. Device-Free Passive Identity Identification via WiFi Signals

    PubMed Central

    Yang, Wu; Man, Dapeng

    2017-01-01

    Device-free passive identity identification has attracted much attention in recent years, and it is a representative application in sensorless sensing. It can be used in many applications such as intrusion detection and smart buildings. Previous studies show the sensing potential of WiFi signals in a device-free passive manner. It has been confirmed that a person's gait is unique, much like a fingerprint or iris. However, the identification accuracy of existing approaches is not satisfactory in practice. In this paper, we present Wii, a device-free WiFi-based Identity Identification approach utilizing human gait, based on the Channel State Information (CSI) of WiFi signals. Principal Component Analysis (PCA) and a low-pass filter are applied to remove noise from the signals. We then extract several gait features for each entity from both the time and frequency domains, and select the most effective features according to information gain. Based on these features, Wii realizes stranger recognition through a Gaussian Mixture Model (GMM) and identity identification through a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. It is implemented using commercial WiFi devices and evaluated on a dataset with more than 1500 gait instances collected from eight subjects walking in a room. The results indicate that Wii can effectively recognize strangers and can achieve high identification accuracy with low computational cost. As a result, Wii has the potential to work in typical home security systems. PMID:29099091
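
    The processing chain described above (PCA denoising, GMM-based stranger detection, RBF-SVM identification) can be sketched with standard scikit-learn components on synthetic data, as below. The feature dimensions, dataset and log-likelihood threshold are placeholder assumptions and do not reproduce the Wii system itself.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 30))                 # stand-in for CSI gait feature vectors
        y = rng.integers(0, 8, size=200)               # eight enrolled subjects

        X_red = PCA(n_components=10).fit_transform(X)  # noise reduction
        gmm = GaussianMixture(n_components=8, random_state=0).fit(X_red)   # enrolled-gait model
        clf = SVC(kernel="rbf").fit(X_red, y)          # identity classifier

        sample = X_red[:1]
        stranger = gmm.score_samples(sample)[0] < -50.0    # illustrative likelihood threshold
        print("stranger" if stranger else f"subject {clf.predict(sample)[0]}")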

  1. Analysis of frame structure of medium and small truck crane

    NASA Astrophysics Data System (ADS)

    Cao, Fuyi; Li, Jinlong; Cui, Mengkai

    2018-03-01

    The truck crane is an important part of hoisting machinery. The frame, as the component that supports the mass of the truck crane, determines the safety of the crane jib load and the rationality of the structural design. In this paper, the truck crane frame is a box structure; the three-dimensional model is established in CATIA software and imported into HyperWorks software for finite element analysis. After applying constraints and loads to the finite element model of the frame, a static finite element analysis is carried out, and a static stress test verifies whether the finite element model and the frame structural design are reasonable. A free modal analysis of the frame is then performed and the first eight modal vibration shapes are analyzed. The analysis results show that the maximum stress value of the frame is greater than the yield limit of the material, and that the low-order modal frequencies are close to the excitation frequency, so the structure needs to be improved. The results provide a theoretical reference for the structural design of the truck crane frame.

  2. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
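
    As a toy illustration of the NMF-plus-clustering idea (not the authors' customized algorithm), the sketch below factorizes synthetic non-negative "observations" into source signatures and mixing coefficients, then clusters the mixing coefficients with k-means; the data and the assumed number of sources are arbitrary.

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        sources = rng.random((3, 20))            # three hidden "groundwater types"
        mixing = rng.random((50, 3))             # per-well mixing fractions
        observations = mixing @ sources          # observed concentrations at 50 wells

        nmf = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=1)
        W = nmf.fit_transform(observations)      # estimated mixing coefficients
        H = nmf.components_                      # estimated source signatures
        labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(W)
        print(H.shape, np.bincount(labels))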

  3. Parsing Social Network Survey Data from Hidden Populations Using Stochastic Context-Free Grammars

    PubMed Central

    Poon, Art F. Y.; Brouwer, Kimberly C.; Strathdee, Steffanie A.; Firestone-Cruz, Michelle; Lozada, Remedios M.; Kosakovsky Pond, Sergei L.; Heckathorn, Douglas D.; Frost, Simon D. W.

    2009-01-01

    Background Human populations are structured by social networks, in which individuals tend to form relationships based on shared attributes. Certain attributes that are ambiguous, stigmatized or illegal can create a 'hidden' population, so-called because its members are difficult to identify. Many hidden populations are also at an elevated risk of exposure to infectious diseases. Consequently, public health agencies are presently adopting modern survey techniques that traverse social networks in hidden populations by soliciting individuals to recruit their peers, e.g., respondent-driven sampling (RDS). The concomitant accumulation of network-based epidemiological data, however, is rapidly outpacing the development of computational methods for analysis. Moreover, current analytical models rely on unrealistic assumptions, e.g., that the traversal of social networks can be modeled by a Markov chain rather than a branching process. Methodology/Principal Findings Here, we develop a new methodology based on stochastic context-free grammars (SCFGs), which are well-suited to modeling the tree-like structure of the RDS recruitment process. We apply this methodology to an RDS case study of injection drug users (IDUs) in Tijuana, México, a hidden population at high risk of blood-borne and sexually-transmitted infections (i.e., HIV, hepatitis C virus, syphilis). Survey data were encoded as text strings that were parsed using our custom implementation of the inside-outside algorithm in a publicly-available software package (HyPhy), which uses either expectation maximization or direct optimization methods and permits constraints on model parameters for hypothesis testing. We identified significant latent variability in the recruitment process that violates assumptions of Markov chain-based methods for RDS analysis: firstly, IDUs tended to emulate the recruitment behavior of their own recruiter; and secondly, the recruitment of like peers (homophily) was dependent on the number of recruits. Conclusions SCFGs provide a rich probabilistic language that can articulate complex latent structure in survey data derived from the traversal of social networks. Such structure, which has no representation in Markov chain-based models, can interfere with the estimation of the composition of hidden populations if left unaccounted for, raising critical implications for the prevention and control of infectious disease epidemics. PMID:19738904

  4. Reusable Solid Rocket Motor Nozzle Joint-4 Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Clayton, J. Louie

    2001-01-01

    This study provides for development and test verification of a thermal model used for prediction of joint heating environments, structural temperatures and seal erosions in the Space Shuttle Reusable Solid Rocket Motor (RSRM) Nozzle Joint-4. The heating environments are a result of rapid pressurization of the joint free volume assuming a leak path has occurred in the filler material used for assembly gap close out. Combustion gases flow along the leak path from nozzle environment to joint O-ring gland resulting in local heating to the metal housing and erosion of seal materials. Analysis of this condition was based on usage of the NASA Joint Pressurization Routine (JPR) for environment determination and the Systems Improved Numerical Differencing Analyzer (SINDA) for structural temperature prediction. Model generated temperatures, pressures and seal erosions are compared to hot fire test data for several different leak path situations. Investigated in the hot fire test program were nozzle joint-4 O-ring erosion sensitivities to leak path width in both open and confined joint geometries. Model predictions were in generally good agreement with the test data for the confined leak path cases. Worst case flight predictions are provided using the test-calibrated model. Analysis issues are discussed based on model calibration procedures.

  5. Serum-free keloid fibroblast cell culture: an in vitro model for the study of aberrant wound healing.

    PubMed

    Koch, R J; Goode, R L; Simpson, G T

    1997-04-01

    The purpose of this study was to develop an in vitro serum-free keloid fibroblast model. Keloid formation remains a problem for every surgeon. Prior evaluations of fibroblast characteristics in vitro, especially those of growth factor measurement, have been confounded by the presence of serum-containing tissue culture media. The serum itself contains growth factors, yet has been a "necessary evil" to sustain cell growth. The design of this study is laboratory-based and uses keloid fibroblasts obtained from five patients undergoing facial (ear lobule) keloid removal in a university-affiliated clinic. Keloid fibroblasts were established in primary cell culture and then propagated in a serum-free environment. The main outcome measures included sustained keloid fibroblast growth and viability, which was comparable to serum-based models. The keloid fibroblast cell cultures exhibited logarithmic growth, sustained a high cellular viability, maintained a monolayer, and displayed contact inhibition. Demonstrating model consistency, there was no statistically significant difference between the mean cell counts of the five keloid fibroblast cell lines at each experimental time point. The in vitro growth of keloid fibroblasts in a serum-free model has not been done previous to this study. The results of this study indicate that the proliferative characteristics described are comparable to those of serum-based models. The described model will facilitate the evaluation of potential wound healing modulators, and cellular effects and collagen modifications of laser resurfacing techniques, and may serve as a harvest source for contaminant-free fibroblast autoimplants. Perhaps its greatest utility will be in the evaluation of endogenous and exogenous growth factors.

  6. Predicting vertical phase segregation in polymer-fullerene bulk heterojunction solar cells by free energy analysis.

    PubMed

    Clark, Michael D; Jespersen, Michael L; Patel, Romesh J; Leever, Benjamin J

    2013-06-12

    Blends of poly(3-hexylthiophene) (P3HT) and C61-butyric acid methyl ester (PCBM) are widely used as a model system for bulk heterojunction active layers developed for solution-processable, flexible solar cells. In this work, vertical concentration profiles within the P3HT:PCBM active layer are predicted based on a thermodynamic analysis of the constituent materials and typical solvents. Surface energies of the active layer components and a common transport interlayer blend, poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate) (PEDOT:PSS), are first extracted using contact angle measurements coupled with the acid-base model. From this data, intra- and interspecies interaction free energies are calculated, which reveal that the thermodynamically favored arrangement consists of a uniformly blended "bulk" structure capped with a P3HT-rich air interface and a slightly PCBM-rich buried interface. Although the "bulk" composition is solely determined by P3HT:PCBM ratio, composition near the buried interface is dependent on both the blend ratio and interaction free energy difference between solvated P3HT and PCBM deposition onto PEDOT:PSS. In contrast, the P3HT-rich overlayer is independent of processing conditions, allowing kinetic formation of a PCBM-rich sublayer during film casting due to limitations in long-range species diffusion. These thermodynamic calculations are experimentally validated by angle-resolved X-ray photoelectron spectroscopy (XPS) and low energy XPS depth profiling, which show that the actual composition profiles of the cast and annealed films closely match the predicted behavior. These experimentally derived profiles provide clear evidence that typical bulk heterojunction active layers are predominantly characterized by thermodynamically stable composition profiles. Furthermore, the predictive capabilities of the comprehensive free energy approach are demonstrated, which will enable investigation of structurally integrated devices and novel active layer systems including low band gap polymers, ternary systems, and small molecule blends.

  7. Reducing Spread in Climate Model Projections of a September Ice-Free Arctic

    NASA Technical Reports Server (NTRS)

    Liu, Jiping; Song, Mirong; Horton, Radley M.; Hu, Yongyun

    2013-01-01

    This paper addresses the specter of a September ice-free Arctic in the 21st century using newly available simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5). We find that large spread in the projected timing of the September ice-free Arctic in 30 CMIP5 models is associated at least as much with different atmospheric model components as with initial conditions. Here we reduce the spread in the timing of an ice-free state using two different approaches for the 30 CMIP5 models: (i) model selection based on the ability to reproduce the observed sea ice climatology and variability since 1979 and (ii) constrained estimation based on the strong and persistent relationship between present and future sea ice conditions. Results from the two approaches show good agreement. Under a high-emission scenario both approaches project that September ice extent will drop to approx. 1.7 million sq km in the mid 2040s and reach the ice-free state (defined as 1 million sq km) in 2054-2058. Under a medium-mitigation scenario, both approaches project a decrease to approx.1.7 million sq km in the early 2060s, followed by a leveling off in the ice extent.

  8. Exploring model based engineering for large telescopes: getting started with descriptive models

    NASA Astrophysics Data System (ADS)

    Karban, R.; Zamparelli, M.; Bauvir, B.; Koehler, B.; Noethe, L.; Balestra, A.

    2008-07-01

    Large telescopes pose a continuous challenge to systems engineering due to their complexity in terms of requirements, operational modes, long duty lifetime, interfaces and number of components. A multitude of decisions must be taken throughout the life cycle of a new system, and a prime means of coping with complexity and uncertainty is using models as a decision aid. The potential of descriptive models based on the OMG Systems Modeling Language (OMG SysML™) is examined in different areas: building a comprehensive model serves as the basis for subsequent activities of soliciting and reviewing requirements, analysis and design alike. Furthermore, a model is an effective instrument for communication, guarding against the misinterpretation pitfalls that are typical of cross-disciplinary activities when only natural language or free-format diagrams are used. Modeling the essential characteristics of the system, such as interfaces, system structure and behavior, addresses important system-level issues. Also shown is how to use a model as an analysis tool to describe the relationships among disturbances, opto-mechanical effects and control decisions, and to refine the control use cases. Considerations on the scalability of the model structure and organization, its impact on the development process, the relation to document-centric structures, style and usage guidelines, and the required tool chain are presented.

  9. The analysis of HIV/AIDS drug-resistant on networks

    NASA Astrophysics Data System (ADS)

    Liu, Maoxing

    2014-01-01

    In this paper, we present a Human Immunodeficiency Virus (HIV)/Acquired Immune Deficiency Syndrome (AIDS) drug-resistance model using an ordinary differential equation (ODE) model on scale-free networks. We derive the epidemic threshold, which is zero in an infinite scale-free network. We also prove the stability of the disease-free equilibrium (DFE) and the persistence of HIV/AIDS infection. The effects of two immunization schemes, a proportional scheme and targeted vaccination, are studied and compared. We find that the targeted strategy compares favorably with proportional condom use and has a prominent effect on controlling the spread of HIV/AIDS on scale-free networks.
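
    For orientation, a vanishing threshold is the familiar mean-field result for epidemics on uncorrelated networks: the critical transmission rate scales as the ratio of the first to the second moment of the degree distribution, which goes to zero when the degree variance diverges. This is the generic form only; the paper's threshold additionally involves its drug-resistance parameters.

        \lambda_c \;\sim\; \frac{\langle k \rangle}{\langle k^2 \rangle} \;\longrightarrow\; 0
        \quad \text{as } \langle k^2 \rangle \to \infty \quad (\text{scale-free networks with } 2 < \gamma \le 3).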

  10. Design and characterization of a prototype enzyme microreactor: quantification of immobilized transketolase kinetics.

    PubMed

    Matosevic, S; Lye, G J; Baganz, F

    2010-01-01

    In this work, we describe the design of an immobilized enzyme microreactor (IEMR) for use in transketolase (TK) bioconversion process characterization. The prototype microreactor is based on a 200 μm ID fused silica capillary for quantitative kinetic analysis. The concept is based on the reversible immobilization of His6-tagged enzymes via Ni-NTA linkage to surface-derivatized silica. For the initial microreactor design, the mode of operation is a stop-flow analysis, which promotes higher degrees of conversion. Kinetics for the immobilized TK-catalysed synthesis of L-erythrulose from the substrates glycolaldehyde (GA) and hydroxypyruvate (HPA) were evaluated based on a Michaelis-Menten model. Results show that the TK kinetic parameters in the IEMR (Vmax(app) = 0.1 +/- 0.02 mmol min^-1, Km(app) = 26 +/- 4 mM) are comparable with those measured in free solution. Furthermore, the kcat for the microreactor of 4.1 x 10^5 s^-1 was close to the value for the bioconversion in free solution. This is attributed to the controlled orientation and monolayer surface coverage of the His6-immobilized TK. Furthermore, we show quantitative elution of the immobilized TK and the regeneration and reuse of the derivatized capillary over five cycles. The ability to quantify kinetic parameters of engineered enzymes at this scale has benefits for the rapid and parallel evaluation of evolved enzyme libraries for synthetic biology applications and for the generation of kinetic models to aid bioconversion process design and bioreactor selection, as a more efficient alternative to previously established microwell-based systems for TK bioprocess characterization.
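
    The kinetic model underlying the reported apparent parameters is the standard Michaelis-Menten rate law; with the values quoted above it reads (illustratively, for a single substrate at concentration [S]):

        v \;=\; \frac{V_{\max,\mathrm{app}}\,[S]}{K_{m,\mathrm{app}} + [S]},
        \qquad V_{\max,\mathrm{app}} \approx 0.1\ \mathrm{mmol\,min^{-1}},
        \quad K_{m,\mathrm{app}} \approx 26\ \mathrm{mM}.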

  11. 13C Metabolic Flux Analysis for Systematic Metabolic Engineering of S. cerevisiae for Overproduction of Fatty Acids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Amit; Ando, David; Gin, Jennifer

    Efficient redirection of microbial metabolism into the abundant production of desired bioproducts remains non-trivial. Here, we used flux-based modeling approaches to improve yields of fatty acids in Saccharomyces cerevisiae. We combined 13C labeling data with comprehensive genome-scale models to shed light onto microbial metabolism and improve metabolic engineering efforts. We concentrated on studying the balance of acetyl-CoA, a precursor metabolite for the biosynthesis of fatty acids. A genome-wide acetyl-CoA balance study showed ATP citrate lyase from Yarrowia lipolytica as a robust source of cytoplasmic acetyl-CoA and malate synthase as a desirable target for downregulation in terms of acetyl-CoA consumption. These genetic modifications were applied to S. cerevisiae WRY2, a strain that is capable of producing 460 mg/L of free fatty acids. With the addition of ATP citrate lyase and downregulation of malate synthase, the engineered strain produced 26% more free fatty acids. Further increases in free fatty acid production of 33% were obtained by knocking out the cytoplasmic glycerol-3-phosphate dehydrogenase, which flux analysis had shown was competing for carbon flux upstream with the carbon flux through the acetyl-CoA production pathway in the cytoplasm. In total, the genetic interventions applied in this work increased fatty acid production by ~70%.

  12. 13C Metabolic Flux Analysis for Systematic Metabolic Engineering of S. cerevisiae for Overproduction of Fatty Acids

    DOE PAGES

    Ghosh, Amit; Ando, David; Gin, Jennifer; ...

    2016-10-05

    Efficient redirection of microbial metabolism into the abundant production of desired bioproducts remains non-trivial. Here, we used flux-based modeling approaches to improve yields of fatty acids in Saccharomyces cerevisiae. We combined 13C labeling data with comprehensive genome-scale models to shed light onto microbial metabolism and improve metabolic engineering efforts. We concentrated on studying the balance of acetyl-CoA, a precursor metabolite for the biosynthesis of fatty acids. A genome-wide acetyl-CoA balance study showed ATP citrate lyase from Yarrowia lipolytica as a robust source of cytoplasmic acetyl-CoA and malate synthase as a desirable target for downregulation in terms of acetyl-CoA consumption. These genetic modifications were applied to S. cerevisiae WRY2, a strain that is capable of producing 460 mg/L of free fatty acids. With the addition of ATP citrate lyase and downregulation of malate synthase, the engineered strain produced 26% more free fatty acids. Further increases in free fatty acid production of 33% were obtained by knocking out the cytoplasmic glycerol-3-phosphate dehydrogenase, which flux analysis had shown was competing for carbon flux upstream with the carbon flux through the acetyl-CoA production pathway in the cytoplasm. In total, the genetic interventions applied in this work increased fatty acid production by ~70%.
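
    The constraint-based flux analysis referred to above amounts to a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The sketch below solves a made-up three-metabolite toy network with scipy, purely to show the mechanics; it is not the genome-scale S. cerevisiae model used in the study.

        import numpy as np
        from scipy.optimize import linprog

        # Reactions: R1 uptake -> A, R2 A -> B, R3 B -> product (objective), R4 A -> drain
        S = np.array([
            [1, -1,  0, -1],   # metabolite A balance
            [0,  1, -1,  0],   # metabolite B balance
        ])
        bounds = [(0, 10), (0, 10), (0, 10), (0, 2)]   # flux bounds per reaction
        c = [0, 0, -1, 0]                              # linprog minimizes, so negate the objective
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("optimal product flux:", -res.fun, "fluxes:", res.x.round(2))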

  13. Efficiency of reactant site sampling in network-free simulation of rule-based models for biochemical systems

    PubMed Central

    Yang, Jin; Hlavacek, William S.

    2011-01-01

    Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient. PMID:21832806

  14. Comparing five alternative methods of breast reconstruction surgery: a cost-effectiveness analysis.

    PubMed

    Grover, Ritwik; Padula, William V; Van Vliet, Michael; Ridgway, Emily B

    2013-11-01

    The purpose of this study was to assess the cost-effectiveness of five standardized procedures for breast reconstruction to delineate the best reconstructive approach in postmastectomy patients in the settings of nonirradiated and irradiated chest walls. A decision tree was used to model five breast reconstruction procedures from the provider perspective to evaluate cost-effectiveness. Procedures included autologous flaps with pedicled tissue, autologous flaps with free tissue, latissimus dorsi flaps with breast implants, expanders with implant exchange, and immediate implant placement. All methods were compared with a "do-nothing" alternative. Data for model parameters were collected through a systematic review, and patient health utilities were calculated from an ad hoc survey of reconstructive surgeons. Results were measured in cost (2011 U.S. dollars) per quality-adjusted life-year. Univariate sensitivity analyses and Bayesian multivariate probabilistic sensitivity analysis were conducted. Pedicled autologous tissue and free autologous tissue reconstruction were cost-effective compared with the do-nothing alternative. Pedicled autologous tissue was the slightly more cost-effective of the two. The other procedures were not found to be cost-effective. The results were robust to a number of sensitivity analyses, although the margin between pedicled and free autologous tissue reconstruction is small and affected by some parameter values. Autologous pedicled tissue was slightly more cost-effective than free tissue reconstruction in irradiated and nonirradiated patients. Implant-based techniques were not cost-effective. This is in agreement with the growing trend at academic institutions to encourage autologous tissue reconstruction because of its natural recreation of the breast contour, suppleness, and resiliency in the setting of irradiated recipient beds.

  15. CAFE: aCcelerated Alignment-FrEe sequence analysis

    PubMed Central

    Lu, Yang Young; Tang, Kujin; Ren, Jie; Fuhrman, Jed A.; Waterman, Michael S.

    2017-01-01

    Alignment-free genome and metagenome comparisons are increasingly important with the development of next generation sequencing (NGS) technologies. Recently developed state-of-the-art k-mer based alignment-free dissimilarity measures, including CVTree, $d_2^*$ and $d_2^S$, are more computationally expensive than measures based solely on the k-mer frequencies. Here, we report a standalone software, aCcelerated Alignment-FrEe sequence analysis (CAFE), for efficient calculation of 28 alignment-free dissimilarity measures. CAFE allows for both assembled genome sequences and unassembled NGS shotgun reads as input, and wraps the output in a standard PHYLIP format. In downstream analyses, CAFE can also be used to visualize the pairwise dissimilarity measures, including dendrograms, heatmap, principal coordinate analysis and network display. CAFE serves as a general k-mer based alignment-free analysis platform for studying the relationships among genomes and metagenomes, and is freely available at https://github.com/younglululu/CAFE. PMID:28472388
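
    As a minimal example of an alignment-free measure based solely on k-mer frequencies (a simple cosine-type distance, not the background-corrected $d_2^*$ or $d_2^S$ statistics computed by CAFE), the sketch below counts overlapping k-mers in two sequences and returns one minus the cosine similarity of their count vectors.

        from collections import Counter
        from math import sqrt

        def kmer_counts(seq, k=3):
            # Overlapping k-mer counts of a DNA string.
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def cosine_dissimilarity(seq1, seq2, k=3):
            c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
            dot = sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())
            norm = sqrt(sum(v * v for v in c1.values())) * sqrt(sum(v * v for v in c2.values()))
            return 1.0 - dot / norm if norm else 1.0

        print(round(cosine_dissimilarity("ACGTACGTACGT", "ACGTTCGTACGA"), 4))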

  16. Cost Effectiveness of Free Access to Smoking Cessation Treatment in France Considering the Economic Burden of Smoking-Related Diseases

    PubMed Central

    Cadier, Benjamin; Durand-Zaleski, Isabelle; Thomas, Daniel; Chevreul, Karine

    2016-01-01

    Context In France more than 70,000 deaths from diseases related to smoking are recorded each year, and since 2005 prevalence of tobacco has increased. Providing free access to smoking cessation treatment would reduce this burden. The aim of our study was to estimate the incremental cost-effectiveness ratios (ICER) of providing free access to cessation treatment taking into account the cost offsets associated with the reduction of the three main diseases related to smoking: lung cancer, chronic obstructive pulmonary disease (COPD) and cardiovascular disease (CVD). To measure the financial impact of such a measure we also conducted a probabilistic budget impact analysis. Methods and Findings We performed a cost-effectiveness analysis using a Markov state-transition model that compared free access to cessation treatment to the existing coverage of €50 provided by the French statutory health insurance, taking into account the cost offsets among current French smokers aged 15–75 years. Our results were expressed by the incremental cost-effectiveness ratio in 2009 Euros per life year gained (LYG) at the lifetime horizon. We estimated a base case scenario and carried out a Monte Carlo sensitivity analysis to account for uncertainty. Assuming a participation rate of 7.3%, the ICER value for free access to cessation treatment was €3,868 per LYG in the base case. The variation of parameters provided a range of ICER values from -€736 to €15,715 per LYG. In 99% of cases, the ICER for full coverage was lower than €11,187 per LYG. The probabilistic budget impact analysis showed that the potential cost saving for lung cancer, COPD and CVD ranges from €15 million to €215 million at the five-year horizon for an initial cessation treatment cost of €125 million to €421 million. Conclusion The results suggest that providing medical support to smokers in their attempts to quit is very cost-effective and may even result in cost savings. PMID:26909802

  17. Distributions of experimental protein structures on coarse-grained free energy landscapes

    PubMed Central

    Liu, Jie; Jernigan, Robert L.

    2015-01-01

    Predicting conformational changes of proteins is needed in order to fully comprehend functional mechanisms. With the large number of available structures in sets of related proteins, it is now possible to directly visualize the clusters of conformations and their conformational transitions through the use of principal component analysis. The most striking observation about the distributions of the structures along the principal components is their highly non-uniform distributions. In this work, we use principal component analysis of experimental structures of 50 diverse proteins to extract the most important directions of their motions, sample structures along these directions, and estimate their free energy landscapes by combining knowledge-based potentials and entropy computed from elastic network models. When these resulting motions are visualized upon their coarse-grained free energy landscapes, the basis for conformational pathways becomes readily apparent. Using three well-studied proteins, T4 lysozyme, serum albumin, and sarco-endoplasmic reticular Ca2+ adenosine triphosphatase (SERCA), as examples, we show that such free energy landscapes of conformational changes provide meaningful insights into the functional dynamics and suggest transition pathways between different conformational states. As a further example, we also show that Monte Carlo simulations on the coarse-grained landscape of HIV-1 protease can directly yield pathways for force-driven conformational changes. PMID:26723638

  18. Modeling of static electrical properties in organic field-effect transistors

    NASA Astrophysics Data System (ADS)

    Xu, Yong; Minari, Takeo; Tsukagoshi, Kazuhito; Gwoziecki, Romain; Coppard, Romain; Benwadih, Mohamed; Chroboczek, Jan; Balestra, Francis; Ghibaudo, Gerard

    2011-07-01

    A model of the static electrical characteristics of organic field-effect transistors (OFETs) is presented. This model is based on a one-dimensional (1-D) solution of Poisson's equation that yields the potential profile in the organic semiconducting film. Most importantly, it demonstrates that, due to the common open-surface configuration used in organic transistors, conduction occurs in the film volume below threshold. This is because the potential at the free surface is not fixed to zero but rather also rises with the gate bias. The tail of the carrier concentration at the free surface is therefore significantly modulated by the gate bias, which partially explains the gate-voltage-dependent contact resistance. At the same time, in the so-called subthreshold region, we observe clear charge trapping from the difference between C-V and I-V measurements; hence a trap study by numerical simulation is also performed. By combining the analytical model and the trap analysis, the questions raised by the C-V and I-V characteristics are answered. Finally, the combined results obtained with traps fit the experimental data well in both pentacene and bis(triisopropylsilylethynyl)-pentacene OFETs.

  19. Dopamine selectively remediates ‘model-based’ reward learning: a computational approach

    PubMed Central

    Sharp, Madeleine E.; Foerde, Karin; Daw, Nathaniel D.

    2016-01-01

    Patients with loss of dopamine due to Parkinson’s disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from ‘model-free’ learning. The other, ‘model-based’ learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson’s disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson’s disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson’s disease may be related to an inability to pursue reward based on complete representations of the environment. PMID:26685155

  20. Numerical simulation of large-scale bed load particle tracer advection-dispersion in rivers with free bars

    USGS Publications Warehouse

    Iwasaki, Toshiki; Nelson, Jonathan M.; Shimizu, Yasuyuki; Parker, Gary

    2017-01-01

    Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.

  1. Numerical simulation of large-scale bed load particle tracer advection-dispersion in rivers with free bars

    NASA Astrophysics Data System (ADS)

    Iwasaki, Toshiki; Nelson, Jonathan; Shimizu, Yasuyuki; Parker, Gary

    2017-04-01

    Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.
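
    The two formulations contrasted above can be written compactly for a one-dimensional bed of elevation η, porosity λ_p, bed load flux q_b, entrainment rate E and deposition rate D; in the entrainment form the deposition rate is a convolution of upstream entrainment with the step-length probability density f_s, which is where the stochasticity enters (schematic notation, not the exact 2-D equations of the paper):

        (1-\lambda_p)\,\frac{\partial \eta}{\partial t} = -\,\frac{\partial q_b}{\partial x}
        \quad \text{(flux form)},
        \qquad
        (1-\lambda_p)\,\frac{\partial \eta}{\partial t} = D - E,
        \quad D(x) = \int_0^{\infty} E(x - r)\, f_s(r)\, dr
        \quad \text{(entrainment form)}.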

  2. Economic evaluation of test-and-treat and empirical treatment strategies in the eradication of Helicobacter pylori infection; A Markov model in an Iranian adult population.

    PubMed

    Mazdaki, Alireza; Ghiasvand, Hesam; Sarabi Asiabar, Ali; Naghdi, Seyran; Aryankhesal, Aidin

    2016-01-01

    Helicobacter pylori may cause many gastrointestinal problems in developing countries such as Iran. We aimed to analyze the cost-effectiveness and cost-utility of the test-and-treat and empirical treatment strategies in managing Helicobacter pylori infection. This was a Markov-based economic evaluation. Effectiveness was defined as the number of symptom-free cases and the QALYs in 100,000 hypothetical adults. The sensitivity analysis was based on a Monte Carlo approach. In the test-and-treat strategy, if serology is the first diagnostic test vs. histology, the cost per symptom-free case would be 291,736.1 Rials and the cost per QALY would be 339,226.1 Rials. The cost per symptom-free case and cost per QALY when 13C-UBT was used as the first diagnostic test vs. serology were 1,283,200 and 1,492,103 Rials, respectively. In the empirical strategy, if histology is used as the first diagnostic test vs. 13C-UBT, the cost per symptom-free case and cost per QALY would be 793,234 and 955,698 Rials, respectively. If serology were used as the first diagnostic test vs. histology, the cost per symptom-free case and per QALY would be 793,234 and 368,941 Rials, respectively. There was no significant or considerable dominance between the alternatives and the diagnostic tests.

  3. Incorporation of the TIP4P water model into a continuum solvent for computing solvation free energy

    NASA Astrophysics Data System (ADS)

    Yang, Pei-Kun

    2014-10-01

    The continuum solvent model is one of the commonly used strategies to compute solvation free energy, especially for large-scale conformational transitions such as protein folding, or to calculate the binding affinity of protein-protein/ligand interactions. However, the dielectric polarization used for computing solvation free energy from the continuum solvent is different from that obtained from molecular dynamics simulations. To mimic the dielectric polarization surrounding a solute in molecular dynamics simulations, the first-shell water molecules were modeled using the charge distribution of TIP4P in a hard sphere; the time-averaged charge distribution of the first-shell water molecules was estimated based on the coordination number of the solute, and the orientation distributions of the first-shell and intermediate water molecules were treated as those of the bulk solvent. Based on this strategy, an equation describing the solvation free energy of ions was derived.

  4. Analysis of the Free-Energy Surface of Proteins from Reversible Folding Simulations

    PubMed Central

    Allen, Lucy R.; Krivov, Sergei V.; Paci, Emanuele

    2009-01-01

    Computer generated trajectories can, in principle, reveal the folding pathways of a protein at atomic resolution and possibly suggest general and simple rules for predicting the folded structure of a given sequence. While such reversible folding trajectories can only be determined ab initio using all-atom transferable force-fields for a few small proteins, they can be determined for a large number of proteins using coarse-grained and structure-based force-fields, in which a known folded structure is by construction the absolute energy and free-energy minimum. Here we use a model of the fast folding helical λ-repressor protein to generate trajectories in which native and non-native states are in equilibrium and transitions are accurately sampled. Yet, representation of the free-energy surface, which underlies the thermodynamic and dynamic properties of the protein model, from such a trajectory remains a challenge. Projections over one or a small number of arbitrarily chosen progress variables often hide the most important features of such surfaces. The results unequivocally show that an unprojected representation of the free-energy surface provides important and unbiased information and allows a simple and meaningful description of many-dimensional, heterogeneous trajectories, providing new insight into the possible mechanisms of fast-folding proteins. PMID:19593364

  5. Construction and simulation of the Bradyrhizobium diazoefficiens USDA110 metabolic network: a comparison between free-living and symbiotic states.

    PubMed

    Yang, Yi; Hu, Xiao-Pan; Ma, Bin-Guang

    2017-02-28

    Bradyrhizobium diazoefficiens is a rhizobium able to convert atmospheric nitrogen into ammonium by establishing mutualistic symbiosis with soybean. It has been recognized as an important parent strain for microbial agents and is widely applied in agricultural and environmental fields. In order to study the metabolic properties of symbiotic nitrogen fixation and the differences between a free-living cell and a symbiotic bacteroid, a genome-scale metabolic network of B. diazoefficiens USDA110 was constructed and analyzed. The metabolic network, iYY1101, contains 1031 reactions, 661 metabolites, and 1101 genes in total. Metabolic models reflecting free-living and symbiotic states were determined by defining the corresponding objective functions and substrate input sets, and were further constrained by high-throughput transcriptomic and proteomic data. Constraint-based flux analysis was used to compare the metabolic capacities and the effects on the metabolic targets of genes and reactions between the two physiological states. The results showed that a free-living rhizobium possesses a steady state flux distribution for sustaining a complex supply of biomass precursors while a symbiotic bacteroid maintains a relatively condensed one adapted to nitrogen-fixation. Our metabolic models may serve as a promising platform for better understanding the symbiotic nitrogen fixation of this species.

  6. Analysis of the free-energy surface of proteins from reversible folding simulations.

    PubMed

    Allen, Lucy R; Krivov, Sergei V; Paci, Emanuele

    2009-07-01

    Computer generated trajectories can, in principle, reveal the folding pathways of a protein at atomic resolution and possibly suggest general and simple rules for predicting the folded structure of a given sequence. While such reversible folding trajectories can only be determined ab initio using all-atom transferable force-fields for a few small proteins, they can be determined for a large number of proteins using coarse-grained and structure-based force-fields, in which a known folded structure is by construction the absolute energy and free-energy minimum. Here we use a model of the fast folding helical lambda-repressor protein to generate trajectories in which native and non-native states are in equilibrium and transitions are accurately sampled. Yet, representation of the free-energy surface, which underlies the thermodynamic and dynamic properties of the protein model, from such a trajectory remains a challenge. Projections over one or a small number of arbitrarily chosen progress variables often hide the most important features of such surfaces. The results unequivocally show that an unprojected representation of the free-energy surface provides important and unbiased information and allows a simple and meaningful description of many-dimensional, heterogeneous trajectories, providing new insight into the possible mechanisms of fast-folding proteins.

  7. Principles of assessing bacterial susceptibility to antibiotics using the agar diffusion method.

    PubMed

    Bonev, Boyan; Hooper, James; Parisot, Judicaël

    2008-06-01

    The agar diffusion assay is one method for quantifying the ability of antibiotics to inhibit bacterial growth. Interpretation of results from this assay relies on model-dependent analysis, which is based on the assumption that antibiotics diffuse freely in the solid nutrient medium. In many cases, this assumption may be incorrect, which leads to significant deviations of the predicted behaviour from the experiment and to inaccurate assessment of bacterial susceptibility to antibiotics. We sought a theoretical description of the agar diffusion assay that takes into consideration loss of antibiotic during diffusion and provides higher accuracy of the MIC determined from the assay. We propose a new theoretical framework for analysis of agar diffusion assays. MIC was determined by this technique for a number of antibiotics and analysis was carried out using both the existing free diffusion and the new dissipative diffusion models. A theory for analysis of antibiotic diffusion in solid media is described, in which we consider possible interactions of the test antibiotic with the solid medium or partial antibiotic inactivation during diffusion. This is particularly relevant to the analysis of diffusion of hydrophobic or amphipathic compounds. The model is based on a generalized diffusion equation, which includes the existing theory as a special case and contains an additional, dissipative term. Analysis of agar diffusion experiments using the new model allows significantly more accurate interpretation of experimental results and determination of MICs. The model has more general validity and is applicable to analysis of other dissipative processes, for example to antigen diffusion and to calculations of substrate load in affinity purification.
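
    A minimal numerical sketch of the dissipative diffusion idea described above, i.e. a diffusion equation with an added first-order loss term, dC/dt = D d2C/dx2 - k*C. The diffusion coefficient, dissipation rate, grid, initial bolus, and MIC-like threshold are all illustrative assumptions, not values fitted in the paper:

```python
import numpy as np

# Explicit finite-difference sketch of 1D diffusion with a first-order loss
# term, dC/dt = D * d2C/dx2 - k * C.  D, k, the grid, and the initial
# antibiotic bolus are illustrative assumptions, not fitted values.
D, k = 1e-2, 1e-3          # diffusion coefficient (mm^2/min), dissipation rate (1/min)
dx, dt = 0.1, 0.05         # grid spacing (mm) and time step (min); dt < dx^2/(2D)
x = np.arange(0.0, 20.0, dx)
C = np.zeros_like(x)
C[x < 0.5] = 100.0         # antibiotic initially confined near the well

for _ in range(10_000):    # integrate forward in time (500 min)
    lap = np.zeros_like(C)
    lap[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    C += dt * (D * lap - k * C)

# Radius at which concentration falls below an assumed MIC-like threshold:
threshold = 1.0
radius = x[C >= threshold].max() if (C >= threshold).any() else 0.0
print("inhibition radius (mm):", round(float(radius), 2))
```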

  8. CalFitter: a web server for analysis of protein thermal denaturation data.

    PubMed

    Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri

    2018-05-14

    Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely-used experimental techniques, such as calorimetry or spectroscopy, critically depend on a proper data analysis. Currently, there are only separate data analysis tools available for each type of experiment with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. CalFitter web server is available free at https://loschmidt.chemi.muni.cz/calfitter/.

  9. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  10. Control of free-flying space robot manipulator systems

    NASA Technical Reports Server (NTRS)

    Cannon, Robert H., Jr.

    1988-01-01

    The focus of the work is to develop and perform a set of research projects using laboratory models of satellite robots. These devices use air cushion technology to simulate in two dimensions the drag-free, zero-g conditions of space. Five research areas are examined: cooperative manipulation on a fixed base; cooperative manipulation on a free-floating base; global navigation and control of a free-floating robot; an alternative transport mode called Locomotion Enhancement via Arm Push-Off (LEAP); and adaptive control of LEAP.

  11. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series.

    PubMed

    Martinez, Josue G; Bohn, Kirsten M; Carroll, Raymond J; Morris, Jeffrey S

    2013-06-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible.

  12. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    PubMed

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.

  13. Whole Organism High-Content Screening by Label-Free, Image-Based Bayesian Classification for Parasitic Diseases

    PubMed Central

    Paveley, Ross A.; Mansour, Nuha R.; Hallyburton, Irene; Bleicher, Leo S.; Benn, Alex E.; Mikic, Ivana; Guidi, Alessandra; Gilbert, Ian H.; Hopkins, Andrew L.; Bickle, Quentin D.

    2012-01-01

    Sole reliance on one drug, Praziquantel, for treatment and control of schistosomiasis raises concerns about development of widespread resistance, prompting renewed interest in the discovery of new anthelmintics. To discover new leads we designed an automated label-free, high content-based, high throughput screen (HTS) to assess drug-induced effects on in vitro cultured larvae (schistosomula) using bright-field imaging. Automatic image analysis and Bayesian prediction models define morphological damage, hit/non-hit prediction and larval phenotype characterization. Motility was also assessed from time-lapse images. In screening a 10,041 compound library the HTS correctly detected 99.8% of the hits scored visually. A proportion of these larval hits were also active in an adult worm ex-vivo screen and are the subject of ongoing studies. The method allows, for the first time, screening of large compound collections against schistosomes and the methods are adaptable to other whole organism and cell-based screening by morphology and motility phenotyping. PMID:22860151

  14. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from the tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In such a way, the manufacturer can lower the costs and increase the profit.
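
    A rough Monte Carlo sketch of the warranty cost calculation under a combined free-replacement/pro-rata policy. A Weibull life distribution stands in for the neural-network reliability prediction, and all parameter values (shape, scale, warranty limits, unit cost) are illustrative assumptions:

```python
import numpy as np

# Monte Carlo estimate of expected warranty cost per unit sold under a
# combined free-replacement / pro-rata policy.  The Weibull life distribution
# is a stand-in for the neural-network reliability prediction; all parameter
# values below are illustrative assumptions.
rng = np.random.default_rng(3)
shape, scale = 1.8, 1500.0          # hypothetical Weibull life parameters (hours)
W1, W2 = 500.0, 1000.0              # free-replacement limit, pro-rata limit (hours)
unit_cost = 2.0                     # manufacturer's cost of one bulb

t = scale * rng.weibull(shape, size=100_000)               # simulated times to failure
cost = np.zeros_like(t)
cost[t <= W1] = unit_cost                                   # full free replacement
prorata = (t > W1) & (t <= W2)
cost[prorata] = unit_cost * (W2 - t[prorata]) / (W2 - W1)   # pro-rata refund

print("expected warranty cost per unit sold:", round(float(cost.mean()), 4))
```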

  15. Working-memory capacity protects model-based learning from stress.

    PubMed

    Otto, A Ross; Raio, Candace M; Chiang, Alice; Phelps, Elizabeth A; Daw, Nathaniel D

    2013-12-24

    Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive-dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response--believed to have detrimental effects on prefrontal cortex function--should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that the stress response attenuates model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress.

  16. Working-memory capacity protects model-based learning from stress

    PubMed Central

    Otto, A. Ross; Raio, Candace M.; Chiang, Alice; Phelps, Elizabeth A.; Daw, Nathaniel D.

    2013-01-01

    Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive–dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response—believed to have detrimental effects on prefrontal cortex function—should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that the stress response attenuates model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress. PMID:24324166

  17. Modeling visual-based pitch, lift and speed control strategies in hoverflies

    PubMed Central

    Vercher, Jean-Louis

    2018-01-01

    To avoid crashing onto the floor, a free-falling fly needs to trigger its wingbeats quickly and control the orientation of its thrust accurately and swiftly to stabilize its pitch and hence its speed. Behavioural data have suggested that the vertical optic flow produced by the fall and crossing the visual field plays a key role in this anti-crash response. Free fall behavior analyses have also suggested that flying insects may not rely on graviception to stabilize their flight. Based on these two assumptions, we have developed a model which accounts for hoverflies' position and pitch orientation recorded in 3D with a fast stereo camera during experimental free falls. Our dynamic model shows that optic flow-based control combined with closed-loop control of the pitch suffices to stabilize the flight properly. In addition, our model sheds new light on the visual-based feedback control of the fly's pitch, lift and thrust. Since graviceptive cues are possibly not used by flying insects, the use of a vertical reference to control the pitch is discussed, based on the results obtained on a complete dynamic model of a virtual fly falling in a textured corridor. This model would provide a useful tool for understanding more clearly how insects may or may not estimate their absolute attitude. PMID:29361632

  18. Separate encoding of model-based and model-free valuations in the human brain.

    PubMed

    Beierholm, Ulrik R; Anen, Cedric; Quartz, Steven; Bossaerts, Peter

    2011-10-01

    Behavioral studies have long shown that humans solve problems in two ways, one intuitive and fast (System 1, model-free), and the other reflective and slow (System 2, model-based). The neurobiological basis of dual process problem solving remains unknown due to challenges of separating activation in concurrent systems. We present a novel neuroeconomic task that predicts distinct subjective valuation and updating signals corresponding to these two systems. We found two concurrent value signals in human prefrontal cortex: a System 1 model-free reinforcement signal and a System 2 model-based Bayesian signal. We also found a System 1 updating signal in striatal areas and a System 2 updating signal in lateral prefrontal cortex. Further, signals in prefrontal cortex preceded choices that are optimal according to either updating principle, while signals in anterior cingulate cortex and globus pallidus preceded deviations from optimal choice for reinforcement learning. These deviations tended to occur when uncertainty regarding optimal values was highest, suggesting that disagreement between dual systems is mediated by uncertainty rather than conflict, confirming recent theoretical proposals. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Analysis of Tube Free Hydroforming using an Inverse Approach with FLD-based Adjustment of Process Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.

    2003-04-01

    This paper employs an inverse approach (IA) formulation for the analysis of tubes under free hydroforming conditions. The IA formulation is derived from that of Guo et al. established for flat sheet hydroforming analysis using constant strain triangular membrane elements. At first, an incremental analysis of free hydroforming for a hot-dip galvanized (HG/Z140) DP600 tube is performed using the finite element Marc code. The deformed geometry obtained at the last converged increment is then used as the final configuration in the inverse analysis. This comparative study allows us to assess the predicting capability of the inverse analysis. The results will be compared with the experimental values determined by Asnafi and Skogsgardh. After that, a procedure based on a forming limit diagram (FLD) is proposed to adjust the process parameters such as the axial feed and internal pressure. Finally, the adjustment process is illustrated through a re-analysis of the same tube using the inverse approach.

  20. RNA-TVcurve: a Web server for RNA secondary structure comparison based on a multi-scale similarity of its triple vector curve representation.

    PubMed

    Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin

    2017-01-21

    RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures, rather than sequence conservation. Those algorithms relying on sequence-based features usually have limitations in their prediction performance. Hence, integrating RNA structure features is very critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. The alignment-free algorithms of RNA comparison usually have lower time complexity than alignment-based algorithms. An alignment-free RNA comparison algorithm was proposed, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), of the RNA sequence and corresponding secondary structure features is provided. Then a multi-scale similarity score of two given RNAs was designed based on wavelet decomposition of their numerical representation. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of numerical representation of RNA secondary structure; 2) detection of single-point mutation based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The inputs of the web server require RNA primary sequences, while corresponding secondary structures are optional. For the primary sequences alone, the web server can compute the secondary structures using the free-energy minimization algorithm of the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNAs structure comparison. The comparison results with two popular RNA comparison tools, RNApdist and RNAdistance, showed that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All the relevant results were shown in an intuitive graphical manner, and can be freely downloaded from this server. RNA-TVcurve, along with test examples and detailed documents, are available at http://ml.jlu.edu.cn/tvcurve/.
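
    For readers who want to reproduce the free-energy minimization step offline, a minimal sketch using the ViennaRNA Python bindings (assuming the `RNA` module from the Vienna RNA package is installed); the sequence is an arbitrary example and the RNA-TVcurve comparison itself is not reproduced here:

```python
# Minimal sketch of the RNAfold-style free-energy minimization step, using
# the ViennaRNA Python bindings.  The example sequence is arbitrary.
import RNA

seq = "GGGAAACGCUUCGGCGUUUCCC"      # hypothetical RNA sequence
structure, mfe = RNA.fold(seq)      # minimum-free-energy secondary structure
print(structure)                    # dot-bracket notation
print(f"MFE: {mfe:.2f} kcal/mol")
```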

  1. A Context-Based Theory of Recency and Contiguity in Free Recall

    ERIC Educational Resources Information Center

    Sederberg, Per B.; Howard, Marc W.; Kahana, Michael J.

    2008-01-01

    The authors present a new model of free recall on the basis of M. W. Howard and M. J. Kahana's temporal context model and M. Usher and J. L. McClelland's leaky-accumulator decision model. In this model, contextual drift gives rise to both short-term and long-term recency effects, and contextual retrieval gives rise to short-term and long-term…

  2. Regional Lung Ventilation Analysis Using Temporally Resolved Magnetic Resonance Imaging.

    PubMed

    Kolb, Christoph; Wetscherek, Andreas; Buzan, Maria Teodora; Werner, René; Rank, Christopher M; Kachelrie, Marc; Kreuter, Michael; Dinkel, Julien; Heuel, Claus Peter; Maier-Hein, Klaus

    We propose a computer-aided method for regional ventilation analysis and observation of lung diseases in temporally resolved magnetic resonance imaging (4D MRI). A shape model-based segmentation and registration workflow was used to create an atlas-derived reference system in which regional tissue motion can be quantified and multimodal image data can be compared regionally. Model-based temporal registration of the lung surfaces in 4D MRI data was compared with the registration of 4D computed tomography (CT) images. A ventilation analysis was performed on 4D MR images of patients with lung fibrosis; 4D MR ventilation maps were compared with corresponding diagnostic 3D CT images of the patients and 4D CT maps of subjects without impaired lung function (serving as reference). Comparison between the computed patient-specific 4D MR regional ventilation maps and diagnostic CT images shows good correlation in conspicuous regions. Comparison to 4D CT-derived ventilation maps supports the plausibility of the 4D MR maps. Dynamic MRI-based flow-volume loops and spirograms further visualize the free-breathing behavior. The proposed methods allow for 4D MR-based regional analysis of tissue dynamics and ventilation in spontaneous breathing and comparison of patient data. The proposed atlas-based reference coordinate system provides an automated manner of annotating and comparing multimodal lung image data.

  3. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    PubMed

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model (Markov) that needs the parameterization of transition probabilities, and only has summary KM plots available.
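
    A minimal sketch of the final step described above, deriving per-cycle transition probabilities from a fitted log-logistic survival curve via the standard relation tp(t) = 1 - S(t)/S(t - u) for cycle length u. The parameterisation and parameter values are illustrative assumptions, not the BOLERO-2 estimates:

```python
import numpy as np

# Deriving per-cycle transition probabilities from a fitted parametric
# survival curve: tp(t) = 1 - S(t) / S(t - u) for cycle length u.
# Log-logistic parameterisation and values are illustrative assumptions.
alpha, beta = 8.0, 1.6                  # hypothetical scale/shape (months)
u = 1.0                                 # model cycle length (months)

def S(t):
    """Log-logistic survival function S(t) = 1 / (1 + (t/alpha)**beta)."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

cycles = np.arange(1, 25)               # 24 monthly model cycles
tp = 1.0 - S(cycles * u) / S((cycles - 1) * u)
for c, p in zip(cycles[:4], tp[:4]):
    print(f"cycle {c}: P(progression) = {p:.3f}")
```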

  4. Model selection and constraints from holographic dark energy scenarios

    NASA Astrophysics Data System (ADS)

    Akhlaghi, I. A.; Malekjani, M.; Basilakos, S.; Haghi, H.

    2018-07-01

    In this study, we combine the expansion and the growth data in order to investigate the ability of the three most popular holographic dark energy models, namely event future horizon, Ricci scale, and Granda-Oliveros IR cutoffs, to fit the data. Using a standard χ2 minimization method, we place tight constraints on the free parameters of the models. Based on the values of the Akaike and Bayesian information criteria, we find that two out of three holographic dark energy models are disfavoured by the data, because they predict a non-negligible amount of fractional dark energy density at early enough times. Although the growth rate data are relatively consistent with the holographic dark energy models which are based on Ricci scale and Granda-Oliveros IR cutoffs, the combined analysis provides strong indications against these models. Finally, we find that the model for which the holographic dark energy is related with the future horizon is consistent with the combined observational data.

  5. Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.

    PubMed

    Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V

    2017-10-23

    Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing polar components of the solvation free energies (ΔG pol ) and binding free energies (ΔΔG pol ). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔG pol remains in the range of k B T ∼ 0.6 kcal/mol. The estimated ΔΔG pol s are well correlated (r 2 = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and RMSE = 1.43 kcal/mol. The grid-based GBNSR6 model is available in Amber (AmberTools) package of molecular simulation programs.

  6. Stability Analysis Susceptible, Exposed, Infected, Recovered (SEIR) Model for Spread Model for Spread of Dengue Fever in Medan

    NASA Astrophysics Data System (ADS)

    Side, Syafruddin; Molliq Rangkuti, Yulita; Gerhana Pane, Dian; Setia Sinaga, Marlina

    2018-01-01

    Dengue fever is an endemic disease spread through a vector, Aedes aegypti. The disease is found in more than 100 countries, including the United States and countries in Africa and Asia, especially those with a tropical climate. The mathematical modeling in this paper addresses the speed of the spread of dengue fever. The adopted model divides the population into four classes: Susceptible (S), Exposed (E), Infected (I) and Recovered (R). The SEIR model is further analyzed to determine the reproduction number based on the number of dengue cases reported in Medan city. The stability analysis of the system yields an asymptotically stable equilibrium, corresponding to the endemic case, and an unstable one otherwise. Simulation of the SEIR model showed that a very long time is required before the infected human population becomes free of dengue virus infection. This happens because dengue virus infection occurs continuously between the human and vector populations.
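
    A compact sketch of an SEIR system of the kind described above. The parameter values are assumptions chosen for demonstration, not the fitted Medan estimates, and the vector population is folded into a single effective transmission rate:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative SEIR system for a host population.  All parameter values are
# assumptions for demonstration; the vector dynamics are folded into beta.
beta, sigma, gamma = 0.4, 1 / 5.5, 1 / 7.0   # transmission, incubation, recovery rates (1/day)
N = 1_000_000

def seir(y, t):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

t = np.linspace(0, 365, 366)
y0 = [N - 10, 0, 10, 0]                      # ten initial infections
S, E, I, R = odeint(seir, y0, t).T
print("peak infected:", int(I.max()), "on day", int(t[I.argmax()]))
print("illustrative reproduction number beta/gamma =", round(beta / gamma, 2))
```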

  7. Support vector machine learning-based fMRI data group analysis.

    PubMed

    Wang, Ze; Childress, Anna R; Wang, Jiongjiong; Detre, John A

    2007-07-15

    To explore the multivariate nature of fMRI data and to consider the inter-subject brain response discrepancies, a multivariate and brain response model-free method is fundamentally required. Two such methods are presented in this paper by integrating a machine learning algorithm, the support vector machine (SVM), and the random effect model. Without any brain response modeling, SVM was used to extract a whole brain spatial discriminance map (SDM), representing the brain response difference between the contrasted experimental conditions. Population inference was then obtained through the random effect analysis (RFX) or permutation testing (PMU) on the individual subjects' SDMs. Applied to arterial spin labeling (ASL) perfusion fMRI data, SDM RFX yielded lower false-positive rates in the null hypothesis test and higher detection sensitivity for synthetic activations with varying cluster size and activation strengths, compared to the univariate general linear model (GLM)-based RFX. For a sensory-motor ASL fMRI study, both SDM RFX and SDM PMU yielded similar activation patterns to GLM RFX and GLM PMU, respectively, but with higher t values and cluster extensions at the same significance level. Capitalizing on the absence of temporal noise correlation in ASL data, this study also incorporated PMU in the individual-level GLM and SVM analyses accompanied by group-level analysis through RFX or group-level PMU. Providing inferences on the probability of being activated or deactivated at each voxel, these individual-level PMU-based group analysis methods can be used to threshold the analysis results of GLM RFX, SDM RFX or SDM PMU.
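
    A toy sketch of the two-level scheme described above: a linear SVM extracts a per-subject spatial discriminance map (SDM), and a voxel-wise one-sample t-test across subjects provides the random-effects inference. The data shapes, synthetic inputs, and "activation" are assumptions for illustration only:

```python
import numpy as np
from sklearn.svm import SVC
from scipy import stats

# Per-subject linear SVM weight maps (SDMs) followed by voxel-wise
# random-effects inference.  All data here are synthetic stand-ins.
rng = np.random.default_rng(0)
n_subjects, n_scans, n_voxels = 10, 40, 500

sdms = []
for _ in range(n_subjects):
    X = rng.normal(size=(n_scans, n_voxels))         # scans x voxels for one subject
    y = np.repeat([0, 1], n_scans // 2)               # two experimental conditions
    X[y == 1, :20] += 0.8                             # weak synthetic "activation"
    clf = SVC(kernel="linear").fit(X, y)
    sdms.append(clf.coef_.ravel())                    # spatial discriminance map

sdms = np.array(sdms)                                 # subjects x voxels
t, p = stats.ttest_1samp(sdms, popmean=0.0, axis=0)   # voxel-wise random-effects test
print("voxels significant at p < 0.001:", int((p < 0.001).sum()))
```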

  8. Mastitomics, the integrated omics of bovine milk in an experimental model of Streptococcus uberis mastitis: 2. Label-free relative quantitative proteomics.

    PubMed Central

    Mudaliar, Manikhandan; Tassi, Riccardo; Thomas, Funmilola C.; McNeilly, Tom N.; Weidt, Stefan K.; McLaughlin, Mark; Wilson, David; Burchmore, Richard; Herzyk, Pawel; Eckersall, P. David

    2016-01-01

    Mastitis, inflammation of the mammary gland, is the most common and costly disease of dairy cattle in the western world. It is primarily caused by bacteria, with Streptococcus uberis as one of the most prevalent causative agents. To characterize the proteome during Streptococcus uberis mastitis, an experimentally induced model of intramammary infection was used. Milk whey samples obtained from 6 cows at 6 time points were processed using label-free relative quantitative proteomics. This proteomic analysis complements clinical, bacteriological and immunological studies as well as peptidomic and metabolomic analysis of the same challenge model. A total of 2552 non-redundant bovine peptides were identified, and from these, 570 bovine proteins were quantified. Hierarchical cluster analysis and principal component analysis showed clear clustering of results by stage of infection, with similarities between pre-infection and resolution stages (0 and 312 h post challenge), early infection stages (36 and 42 h post challenge) and late infection stages (57 and 81 h post challenge). Ingenuity pathway analysis identified upregulation of acute phase protein pathways over the course of infection, with dominance of different acute phase proteins at different time points based on differential expression analysis. Antimicrobial peptides, notably cathelicidins and peptidoglycan recognition protein, were upregulated at all time points post challenge and peaked at 57 h, which coincided with 10 000-fold decrease in average bacterial counts. The integration of clinical, bacteriological, immunological and quantitative proteomics and other-omic data provides a more detailed systems level view of the host response to mastitis than has been achieved previously. PMID:27412694

  9. Modelling Students' Visualisation of Chemical Reaction

    ERIC Educational Resources Information Center

    Cheng, Maurice M. W.; Gilbert, John K.

    2017-01-01

    This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…

  10. Altered functional brain connectivity in children and young people with opsoclonus-myoclonus syndrome.

    PubMed

    Chekroud, Adam M; Anand, Geetha; Yong, Jean; Pike, Michael; Bridge, Holly

    2017-01-01

    Opsoclonus-myoclonus syndrome (OMS) is a rare, poorly understood condition that can result in long-term cognitive, behavioural, and motor sequelae. Several studies have investigated structural brain changes associated with this condition, but little is known about changes in function. This study aimed to investigate changes in brain functional connectivity in patients with OMS. Seven patients with OMS and 10 age-matched comparison participants underwent 3T magnetic resonance imaging (MRI) to acquire resting-state functional MRI data (whole-brain echo-planar images; 2mm isotropic voxels; multiband factor ×2) for a cross-sectional study. A seed-based analysis identified brain regions in which signal changes over time correlated with the cerebellum. Model-free analysis was used to determine brain networks showing altered connectivity. In patients with OMS, the motor cortex showed significantly reduced connectivity, and the occipito-parietal region significantly increased connectivity with the cerebellum relative to the comparison group. A model-free analysis also showed extensive connectivity within a visual network, including the cerebellum and basal ganglia, not present in the comparison group. No other networks showed any differences between groups. Patients with OMS showed reduced connectivity between the cerebellum and motor cortex, but increased connectivity with occipito-parietal regions. This pattern of change supports widespread brain involvement in OMS. © 2016 Mac Keith Press.

  11. Four-dimensional computed tomography based respiratory-gated radiotherapy with respiratory guidance system: analysis of respiratory signals and dosimetric comparison.

    PubMed

    Lee, Jung Ae; Kim, Chul Yong; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Lee, Suk; Kim, Young Bum

    2014-01-01

    To investigate the effectiveness of a respiratory guidance system in 4-dimensional computed tomography (4DCT)-based respiratory-gated radiation therapy (RGRT) by comparing respiratory signals and dosimetric analysis of treatment plans. The respiratory amplitude and period of the free, the audio device-guided, and the complex system-guided breathing were evaluated in eleven patients with lung or liver cancers. The dosimetric parameters were assessed by comparing the free-breathing CT plan and the 4DCT-based 30-70% maximal intensity projection (MIP) plan. The use of complex system-guided breathing showed significantly less variation in respiratory amplitude and period compared to the free or audio-guided breathing regarding the root mean square errors (RMSE) of full inspiration (P = 0.031), full expiration (P = 0.007), and period (P = 0.007). The dosimetric parameters including V(5 Gy), V(10 Gy), V(20 Gy), V(30 Gy), V(40 Gy), and V(50 Gy) of normal liver or lung in the 4DCT MIP plan were superior to those in the free-breathing CT plan. The reproducibility and regularity of respiratory amplitude and period were significantly improved with the complex system-guided breathing compared to the free or the audio-guided breathing. In addition, the treatment plan based on the 4DCT MIP images acquired with the complex system-guided breathing showed better normal tissue sparing than that based on the free-breathing CT.

  12. Artificial neural network approach to predict surgical site infection after free-flap reconstruction in patients receiving surgery for head and neck cancer.

    PubMed

    Kuo, Pao-Jen; Wu, Shao-Chun; Chien, Peng-Chen; Chang, Shu-Shya; Rau, Cheng-Shyuan; Tai, Hsueh-Ling; Peng, Shu-Hui; Lin, Yi-Chun; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua

    2018-03-02

    The aim of this study was to develop an effective surgical site infection (SSI) prediction model in patients receiving free-flap reconstruction after surgery for head and neck cancer using an artificial neural network (ANN), and to compare its predictive power with that of conventional logistic regression (LR). There were 1,836 patients with 1,854 free-flap reconstructions and 438 postoperative SSIs in the dataset for analysis. They were randomly assigned in a ratio of 7:3 into a training set and a test set. Based on comprehensive characteristics of patients and diseases in the absence or presence of operative data, prediction of SSI was performed at two time points (pre-operatively and post-operatively) with a feed-forward ANN and the LR models. In addition to the calculated accuracy, sensitivity, and specificity, the predictive performance of ANN and LR was assessed based on area under the curve (AUC) measures of receiver operator characteristic curves and the Brier score. ANN had a significantly higher AUC (0.892) of post-operative prediction and AUC (0.808) of pre-operative prediction than LR (both P < 0.0001). In addition, the AUC of the post-operative prediction by ANN was significantly higher than that of the pre-operative prediction (P < 0.0001). With the highest AUC and the lowest Brier score (0.090), the post-operative prediction by ANN had the highest overall predictive performance. The post-operative prediction by ANN had the highest overall performance in predicting SSI after free-flap reconstruction in patients receiving surgery for head and neck cancer.
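
    A small sketch of the ANN-versus-LR comparison on synthetic data, scoring both models with AUC and the Brier score on a held-out split. The features, sample size, labels, and network architecture are assumptions, not the study's clinical dataset:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# Compare a small feed-forward network with logistic regression using AUC and
# the Brier score.  All data below are synthetic stand-ins.
rng = np.random.default_rng(1)
X = rng.normal(size=(1836, 12))                       # hypothetical predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1836) > 1.2).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = [("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)),
          ("LR", LogisticRegression(max_iter=1000))]
for name, model in models:
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, prob), 3),
          "Brier:", round(brier_score_loss(y_te, prob), 3))
```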

  13. Bibliography of Technical Publications and Papers October 1977 - September 1978

    DTIC Science & Technology

    1978-11-01

    MCNUTT. Sweetness of fructose in a dry beverage base. Food Processing Industry, 47(555): 28-29 (1978). 91. KELCH, W. J., and J. S. LEE. Modeling...1978, pp. 510-513. 170. HARRIS, N. E. Sweeteners , noncarbohydrate (low concentra- tion). In Encyclopedia of Food Science. M. S. Peterson and A. H...Reports 188. BALL, D. H., and E. WETZEL. Liquid chromatographic analysis of the free sugars in sweet corn: A method indicative of maturity and of quality

  14. Model-free iterative control of repetitive dynamics for high-speed scanning in atomic force microscopy.

    PubMed

    Li, Yang; Bechhoefer, John

    2009-01-01

    We introduce an algorithm for calculating, offline or in real time and with no explicit system characterization, the feedforward input required for repetitive motions of a system. The algorithm is based on the secant method of numerical analysis and gives accurate motion at frequencies limited only by the signal-to-noise ratio and the actuator power and range. We illustrate the secant-solver algorithm on a stage used for atomic force microscopy.
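
    A toy sketch of a sample-by-sample secant iteration for a repetitive feedforward input: the desired waveform is compared with the measured response and the input is refined without any explicit system model. The memoryless nonlinear "plant" below is a stand-in assumption; the real stage is only ever probed through its measured response:

```python
import numpy as np

# Point-wise secant iteration for a repetitive feedforward input.  The
# memoryless nonlinear plant is a toy stand-in assumption.
def plant(u):
    return 0.85 * u + 0.05 * u ** 3 + 0.02     # unknown gain, nonlinearity, offset

t = np.linspace(0.0, 1.0, 200)
r = np.sin(2 * np.pi * 5 * t)                  # desired repetitive trajectory

u_prev, u = r.copy(), 1.2 * r + 0.1            # two initial input guesses
y_prev, y = plant(u_prev), plant(u)
for _ in range(10):                            # secant updates, sample by sample
    denom = np.where(np.abs(y - y_prev) > 1e-12, y - y_prev, 1e-12)
    u_next = u + (r - y) * (u - u_prev) / denom
    u_prev, y_prev, u = u, y, u_next
    y = plant(u)

print("max tracking error after iteration:", float(np.max(np.abs(r - y))))
```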

  15. Para-hydrogen and helium cluster size distributions in free jet expansions based on Smoluchowski theory with kernel scaling.

    PubMed

    Kornilov, Oleg; Toennies, J Peter

    2015-02-21

    The size distribution of para-H2 (pH2) clusters produced in free jet expansions at a source temperature of T_0 = 29.5 K and pressures of P_0 = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured and predicted, N_k = A k^a e^(-bk), shape of the distribution. The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH2)_k magic number cluster of k = 13 as has been observed previously by Raman spectroscopy. The predicted linear dependence of b^(-(a+1)) on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections σ_11 with results from a similar analysis of He cluster size distributions indicates that the latter are much larger by a factor 6-10. An analysis of the three body recombination rates, the geometric sizes and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
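
    A short sketch of fitting the predicted distribution N_k = A k^a e^(-bk) to an abundance spectrum. The "measured" abundances below are synthetic stand-ins; in the study, a is fixed by a collision model and A, b are fitted:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit N_k = A * k**a * exp(-b*k) with a fixed from a collision model.
# The abundance data are synthetic stand-ins for the measured spectrum.
def nk(k, A, b, a=1.0):
    return A * k ** a * np.exp(-b * k)

k = np.arange(1, 31)
rng = np.random.default_rng(2)
counts = nk(k, A=5000.0, b=0.35) * rng.normal(1.0, 0.05, size=k.size)   # synthetic data

(A_fit, b_fit), _ = curve_fit(lambda k, A, b: nk(k, A, b), k, counts, p0=(1000.0, 0.1))
print(f"A = {A_fit:.0f}, b = {b_fit:.3f}")
```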

  16. How to Run FAST Simulations.

    PubMed

    Zimmerman, M I; Bowman, G R

    2016-01-01

    Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.

  17. BER Analysis of Coherent Free-Space Optical Communication Systems with a Focal-Plane-Based Wavefront Sensor

    NASA Astrophysics Data System (ADS)

    Cao, Jingtai; Zhao, Xiaohui; Liu, Wei; Gu, Haijun

    2018-03-01

    A wavefront sensor is one of the most important units of an adaptive optics system. Based on our previous works, in this paper, we discuss the bit-error-rate (BER) performance of coherent free-space optical communication systems with a focal-plane-based wavefront sensor. Firstly, the theory of a focal-plane-based wavefront sensor is given. Then the relationship between the BER and the mixing efficiency with a homodyne receiver is discussed on the basis of binary-phase-shift-keying (BPSK) modulation. Finally, numerical simulation results show that the BER decreases markedly after aberration correction with the focal-plane-based wavefront sensor. In addition, the BER decreases as the number of photons received within a single bit increases. These analysis results provide a reference for the design of coherent free-space optical communication (FSOC) systems.
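
    A small sketch of the BER-versus-photon-number relationship for homodyne BPSK, using the common shot-noise-limited textbook form BER = 0.5 erfc(sqrt(2 eta N)), with the mixing efficiency eta as a scaling factor. This form and the eta values are illustrative assumptions, not the paper's specific system model:

```python
import numpy as np
from scipy.special import erfc

# Shot-noise-limited BER of homodyne BPSK scaled by mixing efficiency eta:
#   BER = 0.5 * erfc(sqrt(2 * eta * N)),  N = photons per bit.
# Textbook form used as an illustrative assumption.
def ber_homodyne_bpsk(n_photons, eta):
    return 0.5 * erfc(np.sqrt(2.0 * eta * n_photons))

for eta in (0.3, 0.6, 0.9):          # mixing efficiency before/after correction
    print(f"eta = {eta}: BER at 9 photons/bit = {ber_homodyne_bpsk(9, eta):.2e}")
```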

  18. [Quality evaluation of rhubarb dispensing granules based on multi-component simultaneous quantitative analysis and bioassay].

    PubMed

    Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He

    2017-07-01

    This study attempts to evaluate the quality of Chinese formula granules by combined use of multi-component simultaneous quantitative analysis and bioassay. The rhubarb dispensing granules were used as the model drug for a demonstrative study. The ultra-high performance liquid chromatography (UPLC) method was adopted for simultaneous quantitative determination of the 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; purgative biopotency of different batches of rhubarb dispensing granules was determined based on a compound diphenoxylate tablet-induced mouse constipation model; blood activating biopotency of different batches of rhubarb dispensing granules was determined based on an in vitro rat antiplatelet aggregation model; SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and the purgative and blood activating biopotencies. The results of multi-component simultaneous quantitative analysis showed that there was a great difference in chemical characterization and certain differences in purgative biopotency and blood activating biopotency among 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P<0.01), and the intensity of blood activating biopotency was significantly correlated with the content of free anthraquinone (P<0.01). In summary, the combined use of multi-component simultaneous quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality differences among different batches of rhubarb dispensing granules. Copyright © by the Chinese Pharmaceutical Association.

  19. School-Based Practices and Programs That Promote Safe and Drug-Free Schools. CASE/CCBD Mini-Library Series on Safe, Drug-Free, and Effective Schools.

    ERIC Educational Resources Information Center

    Guthrie, Patricia M.

    This monograph focuses on school-based practices and programs that promote safe and drug-free schools. It begins with a description of the key characteristics of schools with effective programs and provides a model for school-wide support. Necessary steps for developing an effective system of universal prevention are listed and include: (1)…

  20. Free energy analysis of cell spreading.

    PubMed

    McEvoy, Eóin; Deshpande, Vikram S; McGarry, Patrick

    2017-10-01

    In this study we present a steady-state adaptation of the thermodynamically motivated stress fiber (SF) model of Vigliotti et al. (2015). We implement this steady-state formulation in a non-local finite element setting where we also consider global conservation of the total number of cytoskeletal proteins within the cell, global conservation of the number of binding integrins on the cell membrane, and adhesion limiting ligand density on the substrate surface. We present a number of simulations of cell spreading in which we consider a limited subset of the possible deformed spread-states assumed by the cell in order to examine the hypothesis that free energy minimization drives the process of cell spreading. Simulations suggest that cell spreading can be viewed as a competition between (i) decreasing cytoskeletal free energy due to strain induced assembly of cytoskeletal proteins into contractile SFs, and (ii) increasing elastic free energy due to stretching of the mechanically passive components of the cell. The computed minimum free energy spread area is shown to be lower for a cell on a compliant substrate than on a rigid substrate. Furthermore, a low substrate ligand density is found to limit cell spreading. The predicted dependence of cell spread area on substrate stiffness and ligand density is in agreement with the experiments of Engler et al. (2003). We also simulate the experiments of Théry et al. (2006), whereby initially circular cells deform and adhere to "V-shaped" and "Y-shaped" ligand patches. Analysis of a number of different spread states reveals that deformed configurations with the lowest free energy exhibit a SF distribution that corresponds to experimental observations, i.e. a high concentration of highly aligned SFs occurs along free edges, with lower SF concentrations in the interior of the cell. In summary, the results of this study suggest that cell spreading is driven by free energy minimization based on a competition between decreasing cytoskeletal free energy and increasing passive elastic free energy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatments. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting potential benefit of EOC patients with or without receiving bevacizumab-based chemotherapy treatment using multivariate statistical models built based on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were performed respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that using all 3 statistical models, a statistically significant association was detected between the model-generated results and both of the two clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for both PFS and OS in the group of patients without receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  2. Feminist Policy Analysis: Expanding Traditional Social Work Methods

    ERIC Educational Resources Information Center

    Kanenberg, Heather

    2013-01-01

    In an effort to move the methodology of policy analysis beyond the traditional and artificial position of being objective and value-free, this article is a call to those working and teaching in social work to consider a feminist policy analysis lens. A review of standard policy analysis models is presented alongside feminist models. Such a…

  3. Predictions for the Effects of Free Stream Turbulence on Turbine Blade Heat Transfer

    NASA Technical Reports Server (NTRS)

    Boyle, Robert J.; Giel, Paul W.; Ames, Forrest E.

    2004-01-01

    An approach to predicting the effects of free stream turbulence on turbine vane and blade heat transfer is described. Four models for predicting the effects of free stream turbulence were incorporated into a Navier-Stokes CFD analysis. Predictions were compared with experimental data in order to identify an appropriate model for use across a wide range of flow conditions. The analyses were compared with data from five vane geometries and from four rotor geometries. Each of these nine geometries had data for different Reynolds numbers. Comparisons were made for twenty-four cases. Steady state calculations were done because all experimental data were obtained in steady state tests. High turbulence levels often result in suction surface transition upstream of the throat, while at low to moderate Reynolds numbers the pressure surface remains laminar. A two-dimensional analysis was used because the flow is predominantly two-dimensional in the regions where free stream turbulence significantly augments surface heat transfer. Because the evaluation of models for predicting turbulence effects can be affected by other factors, the paper discusses modeling for transition, relaminarization, and near wall damping. Quantitative comparisons are given between the predictions and data.

  4. Meta-Analysis of Free-Response Studies, 1992-2008: Assessing the Noise Reduction Model in Parapsychology

    ERIC Educational Resources Information Center

    Storm, Lance; Tressoldi, Patrizio E.; Di Risio, Lorenzo

    2010-01-01

    We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise…

  5. Traction free finite elements with the assumed stress hybrid model. M.S. Thesis, 1981

    NASA Technical Reports Server (NTRS)

    Kafie, Kurosh

    1991-01-01

    An effective approach in the finite element analysis of the stress field at the traction free boundary of a solid continuum was studied. Conventional displacement and assumed stress finite elements were used in the determination of stress concentrations around circular and elliptical holes. Specialized hybrid elements were then developed to improve the satisfaction of prescribed traction boundary conditions. Results of the stress analysis indicated that finite elements which exactly satisfy the free stress boundary conditions are the most accurate and efficient in such problems. A general approach for hybrid finite elements which incorporate traction free boundaries of arbitrary geometry was formulated.

  6. On the usage of divergence nudging in the DMI nowcasting system

    NASA Astrophysics Data System (ADS)

    Korsholm, Ulrik; Petersen, Claus; Hansen Sass, Bent; Woetmann Nielsen, Niels; Getreuer Jensen, David; Olsen, Bjarke Tobias; Vedel, Henrik

    2014-05-01

    DMI has recently proposed a new method for nudging radar reflectivity CAPPI products into its operational nowcasting system. The system is based on rapid update cycles (with hourly frequency) with the High Resolution Limited Area Model, combined with surface and upper air analysis at each initial time. During the first 1.5 hours of a simulation the model dynamical state is nudged in accordance with the CAPPI product, after which a free forecast is produced with a forecast length of 12 hours. The nudging method is based on the assumption that precipitation is forced by low-level moisture convergence and that an enhanced moisture source will lead to convective triggering of the model cloud scheme. If the model under-predicts precipitation before cut-off, the horizontal low-level divergence is nudged towards an estimated value. These pseudo-observations are calculated from the CAPPI product by assuming a specific vertical profile of the change in the divergence field. The strength of the nudging is proportional to the difference between observed and modelled precipitation. When over-predicting, the low-level moisture source is reduced, and in-cloud moisture is nudged towards environmental values. Results have been analysed in terms of the fractions skill score, and the ability of the nudging method to position the precipitation cells correctly is discussed. The ability of the model to retain memory of the precipitation systems in the free forecast has also been investigated, and examples of combining the nudging method with extrapolated reflectivity fields are shown.
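
    The following is a minimal sketch of the kind of update rule described above: a low-level divergence increment proportional to the observed-minus-modelled precipitation difference, weighted by an assumed vertical profile. The nudging coefficient, array shapes, and profile are illustrative assumptions and are not taken from the DMI system.

      # Hedged sketch: low-level divergence nudging proportional to the
      # observed-minus-modelled precipitation difference. Names, shapes and
      # constants are illustrative assumptions only.
      import numpy as np

      def nudge_divergence(div, precip_obs, precip_mod, profile, k=1e-6):
          """div: (nz, ny, nx) model divergence field [s^-1]
          precip_obs/mod: (ny, nx) rain rates from CAPPI product / model [mm h^-1]
          profile: (nz,) assumed vertical weighting of the divergence change
          k: assumed nudging strength [s^-1 per mm h^-1]"""
          deficit = precip_obs - precip_mod            # >0 where the model under-predicts
          # Under-prediction: impose extra low-level convergence (negative divergence)
          increment = -k * np.clip(deficit, 0.0, None)
          return div + profile[:, None, None] * increment[None, :, :]

      # Example with toy fields
      nz, ny, nx = 10, 4, 4
      div = np.zeros((nz, ny, nx))
      profile = np.linspace(1.0, 0.0, nz)              # strongest nudging at the lowest level (assumed)
      precip_obs = np.full((ny, nx), 2.0)              # observed rain rate [mm/h]
      precip_mod = np.full((ny, nx), 0.5)              # modelled rain rate [mm/h]
      div_nudged = nudge_divergence(div, precip_obs, precip_mod, profile)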

  7. Efficient identification and referral of low-income women at high risk for hereditary breast cancer: a practice-based approach.

    PubMed

    Joseph, G; Kaplan, C; Luce, J; Lee, R; Stewart, S; Guerra, C; Pasick, R

    2012-01-01

    Identification of low-income women with the rare but serious risk of hereditary cancer, and their referral to appropriate services, presents an important public health challenge. We report the results of formative research to reach thousands of women for efficient identification of those at high risk and expedient access to free genetic services. External validity is maximized by emphasizing intervention fit with the two end-user organizations that must connect to make this possible. This study phase informed the design of a subsequent randomized controlled trial. We conducted a randomized controlled pilot study (n = 38) to compare two intervention models for feasibility and impact. The main outcome was receipt of genetic counseling during a two-month intervention period. Model 1 was based on the usual outcall protocol of an academic hospital genetic risk program, and Model 2 drew on the screening and referral procedures of a statewide toll-free phone line through which large numbers of high-risk women can be identified. In Model 1, the risk program proactively calls patients to schedule genetic counseling; in Model 2, women are notified of their eligibility for counseling and make the call themselves. We also developed and pretested a family history screener for administration by phone to identify women appropriate for genetic counseling. There was no statistically significant difference in receipt of genetic counseling between women randomized to Model 1 (3/18) and those randomized to Model 2 (3/20) during the intervention period. However, when unresponsive women in Model 2 were called after 2 months, 7 more obtained counseling; 4 women from Model 1 were also counseled after the intervention. Thus, the intervention model that closely aligned with the risk program's outcall to high-risk women was found to be feasible and brought more low-income women to free genetic counseling. Our screener was easy to administer by phone and appeared to identify high-risk callers effectively. The model and screener are now in use in the main trial to test the effectiveness of this screening and referral intervention. A validation analysis of the screener is also underway. Identification of intervention strategies and tools, and their systematic comparison for impact and efficiency in the context where they will ultimately be used, are critical elements of practice-based research. Copyright © 2012 S. Karger AG, Basel.

  8. Interactive computer graphics and its role in control system design of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.

    1985-01-01

    This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures in order to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model (modeling the dynamics, modal analysis, and control system design methodology), are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.

  9. Development of a Probabilistic Dynamic Synthesis Method for the Analysis of Nondeterministic Structures

    NASA Technical Reports Server (NTRS)

    Brown, A. M.

    1998-01-01

    Accounting for the statistical geometric and material variability of structures in analysis has been a topic of considerable research for the last 30 years. The determination of quantifiable measures of statistical probability of a desired response variable, such as natural frequency, maximum displacement, or stress, to replace experience-based "safety factors" has been a primary goal of these studies. There are, however, several problems associated with their satisfactory application to realistic structures, such as bladed disks in turbomachinery. These include the accurate definition of the input random variables (rv's), the large size of the finite element models frequently used to simulate these structures, which makes even a single deterministic analysis expensive, and accurate generation of the cumulative distribution function (CDF) necessary to obtain the probability of the desired response variables. The research presented here applies a methodology called probabilistic dynamic synthesis (PDS) to solve these problems. The PDS method uses dynamic characteristics of substructures measured from modal test as the input rv's, rather than "primitive" rv's such as material or geometric uncertainties. These dynamic characteristics, which are the free-free eigenvalues, eigenvectors, and residual flexibility (RF), are readily measured and for many substructures, a reasonable sample set of these measurements can be obtained. The statistics for these rv's accurately account for the entire random character of the substructure. Using the RF method of component mode synthesis, these dynamic characteristics are used to generate reduced-size sample models of the substructures, which are then coupled to form system models. These sample models are used to obtain the CDF of the response variable by either applying Monte Carlo simulation or by generating data points for use in the response surface reliability method, which can perform the probabilistic analysis with an order of magnitude less computational effort. Both free- and forced-response analyses have been performed, and the results indicate that, while there is considerable room for improvement, the method produces usable and more representative solutions for the design of realistic structures with a substantial savings in computer time.
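
    As an illustration of only the final step (estimating the CDF of a response variable by Monte Carlo sampling), a minimal sketch follows; the lognormal input statistics and the single-degree-of-freedom frequency formula are stand-in assumptions, not the PDS substructure models.

      # Hedged sketch: Monte Carlo estimation of the CDF of a natural frequency
      # when stiffness and mass are random. The distributions and the 1-DOF
      # formula f = sqrt(k/m)/(2*pi) are illustrative assumptions only.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      k = rng.lognormal(mean=np.log(2.0e6), sigma=0.05, size=n)   # stiffness [N/m]
      m = rng.lognormal(mean=np.log(1.5), sigma=0.03, size=n)     # mass [kg]
      f = np.sqrt(k / m) / (2.0 * np.pi)                          # natural frequency [Hz]

      f_sorted = np.sort(f)
      cdf = np.arange(1, n + 1) / n                               # empirical CDF
      # e.g. probability that the frequency falls below an assumed design limit of 190 Hz
      p_below = np.interp(190.0, f_sorted, cdf)

    The response surface reliability method mentioned above would replace the brute-force sampling of the response with a fitted surrogate, reducing the number of full model evaluations required.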

  10. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  11. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
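
    A minimal sketch of a switched state-space time march of the kind described above is given below for a toy two-mode system; the matrices, damping values, switching time, and the form of the stiffness change are illustrative assumptions rather than properties of the wind-tunnel model.

      # Hedged sketch: time simulation of a small modal state-space model in which
      # stiffness coupling terms are changed at a prescribed instant (e.g. a
      # decoupling mechanism firing). All matrices and numbers are toy assumptions.
      import numpy as np
      from scipy.linalg import expm

      def state_matrix(K):
          """State x = [q, qdot]; M qddot + C qdot + K q = 0 in modal coordinates."""
          M = np.diag([1.0, 1.0])
          C = np.diag([0.02, 0.03])
          Minv = np.linalg.inv(M)
          top = np.hstack([np.zeros((2, 2)), np.eye(2)])
          bottom = np.hstack([-Minv @ K, -Minv @ C])
          return np.vstack([top, bottom])

      K_nominal = np.array([[4.0, 0.5], [0.5, 9.0]])
      K_changed = np.array([[4.0, 0.0], [0.0, 9.0]])   # coupling removed at t_switch (assumed)

      dt, t_end, t_switch = 1e-3, 10.0, 4.0
      Phi_nom = expm(state_matrix(K_nominal) * dt)     # discrete transition matrices
      Phi_chg = expm(state_matrix(K_changed) * dt)

      x = np.array([1.0, 0.0, 0.0, 0.0])               # initial modal displacement
      history = []
      for step in range(int(t_end / dt)):
          t = step * dt
          Phi = Phi_nom if t < t_switch else Phi_chg   # switch coupling terms mid-run
          x = Phi @ x
          history.append(x.copy())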

  12. Theoretical analysis of co-solvent effect on the proton transfer reaction of glycine in a water–acetonitrile mixture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasai, Yukako; Yoshida, Norio, E-mail: noriwo@chem.kyushu-univ.jp; Nakano, Haruyuki

    2015-05-28

    The co-solvent effect on the proton transfer reaction of glycine in a water–acetonitrile mixture was examined using the reference interaction-site model self-consistent field theory. The free energy profiles of the proton transfer reaction of glycine between the carboxyl oxygen and amino nitrogen were computed in a water–acetonitrile mixture solvent at various molar fractions. Two types of reactions, the intramolecular proton transfer and water-mediated proton transfer, were considered. In both types of reactions, a similar tendency was observed. In the pure water solvent, the zwitterionic form, where the carboxyl oxygen is deprotonated while the amino nitrogen is protonated, is more stable than the neutral form. The reaction free energy is −10.6 kcal mol⁻¹. On the other hand, in the pure acetonitrile solvent, glycine takes only the neutral form. The reaction free energy from the neutral to zwitterionic form gradually increases with increasing acetonitrile concentration, and in an equally mixed solvent, the zwitterionic and neutral forms are almost isoenergetic, with a difference of only 0.3 kcal mol⁻¹. The free energy component analysis based on the thermodynamic cycle of the reaction also revealed that the free energy change of the neutral form is insensitive to the change of solvent environment but that the zwitterionic form shows drastic changes. In particular, the excess chemical potential, one of the components of the solvation free energy, is dominant and contributes to the stabilization of the zwitterionic form.

  13. Modeling of Wall-Bounded Complex Flows and Free Shear Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    Various wall-bounded flows with complex geometries and free shear flows have been studied with a newly developed realizable Reynolds stress algebraic equation model. The model development is based on the invariant theory in continuum mechanics. This theory enables us to formulate a general constitutive relation for the Reynolds stresses. Pope was the first to introduce this kind of constitutive relation to turbulence modeling. In our study, realizability is imposed on the truncated constitutive relation to determine the coefficients so that, unlike the standard k-ε eddy viscosity model, the present model will not produce negative normal stresses in any situation of rapid distortion. The calculations based on the present model have shown encouraging success in modeling complex turbulent flows.

  14. Comparison of finite-difference schemes for analysis of shells of revolution. [stress and free vibration analysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Stephens, W. B.

    1973-01-01

    Several finite difference schemes are applied to the stress and free vibration analysis of homogeneous isotropic and layered orthotropic shells of revolution. The study is based on a form of the Sanders-Budiansky first-approximation linear shell theory, modified such that the effects of shear deformation and rotary inertia are included. A Fourier approach is used in which all the shell stress resultants and displacements are expanded in a Fourier series in the circumferential direction, and the governing equations reduce to ordinary differential equations in the meridional direction. While primary attention is given to finite difference schemes used in conjunction with the first-order differential equation formulation, comparison is made with finite difference schemes used with other formulations. These finite difference discretization models are compared with respect to simplicity of application, convergence characteristics, and computational efficiency. Numerical studies are presented for the effects of variations in shell geometry and lamination parameters on the accuracy and convergence of the solutions obtained by the different finite difference schemes. On the basis of the present study it is shown that the mixed finite difference scheme, based on the first-order differential equation formulation and two interlacing grids for the different fundamental unknowns, combines a number of advantages over other finite difference schemes previously reported in the literature.
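
    To make the idea of a scheme built on the first-order differential equation formulation concrete, the following is a minimal sketch of a box (midpoint) finite-difference method, in which the unknowns live on the grid points and the difference equations are centred on the interlacing midpoints; the constant-coefficient test system and its boundary values are illustrative assumptions, not the shell equations.

      # Hedged sketch: a "box" (midpoint) finite-difference scheme for a first-order
      # system y' = A y, with equations centred on the midpoints between grid points.
      # The test problem u'' = -u (written as u' = v, v' = -u) is an assumption.
      import numpy as np

      N = 50                                    # number of intervals on [0, 1]
      h = 1.0 / N
      A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # y = [u, v], y' = A y

      n_unknowns = 2 * (N + 1)
      M = np.zeros((n_unknowns, n_unknowns))
      b = np.zeros(n_unknowns)

      # Interior equations: (y_{i+1} - y_i)/h = A (y_i + y_{i+1})/2 at each midpoint
      for i in range(N):
          r = 2 * i                             # two equation rows per interval
          Ei = -(np.eye(2) / h) - A / 2.0       # coefficient block of y_i
          Ei1 = (np.eye(2) / h) - A / 2.0       # coefficient block of y_{i+1}
          M[r:r + 2, 2 * i:2 * i + 2] = Ei
          M[r:r + 2, 2 * (i + 1):2 * (i + 1) + 2] = Ei1

      # Boundary conditions (assumed): u(0) = 0, u(1) = sin(1)
      M[2 * N, 0] = 1.0;          b[2 * N] = 0.0
      M[2 * N + 1, 2 * N] = 1.0;  b[2 * N + 1] = np.sin(1.0)

      y = np.linalg.solve(M, b).reshape(N + 1, 2)
      u = y[:, 0]                               # approximates sin(x) on the grid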

  15. The structure of the solution obtained with Reynolds-stress-transport models at the free-stream edges of turbulent flows

    NASA Astrophysics Data System (ADS)

    Cazalbou, J.-B.; Chassaing, P.

    2002-02-01

    The behavior of Reynolds-stress-transport models at the free-stream edges of turbulent flows is investigated. Current turbulent-diffusion models are found to produce propagative (possibly weak) solutions of the same type as those reported earlier by Cazalbou, Spalart, and Bradshaw [Phys. Fluids 6, 1797 (1994)] for two-equation models. As in the latter study, an analysis is presented that provides qualitative information on the flow structure predicted near the edge if a condition on the values of the diffusion constants is satisfied. In this case, the solution appears to be fairly insensitive to the residual free-stream turbulence levels needed with conventional numerical methods. The main specific result is that, depending on the diffusion model, the propagative solution can force turbulence toward definite and rather extreme anisotropy states at the edge (one- or two-component limit). This is not the case with the model of Daly and Harlow [Phys. Fluids 13, 2634 (1970)]; it may be one of the reasons why this "old" scheme is still the most widely used, even in recent Reynolds-stress-transport models. In addition, the analysis helps us to interpret some difficulties encountered in computing even very simple flows with Lumley's pressure-diffusion model [Adv. Appl. Mech. 18, 123 (1978)]. A new realizability condition, according to which the diffusion model should not globally become "anti-diffusive," is introduced, and a recalibration of Lumley's model satisfying this condition is performed using information drawn from the analysis.

  16. Solvent-based and solvent-free characterization of low solubility and low molecular weight polyamides by mass spectrometry: a complementary approach.

    PubMed

    Barrère, Caroline; Hubert-Roux, Marie; Lange, Catherine M; Rejaibi, Majed; Kebir, Nasreddine; Désilles, Nicolas; Lecamp, Laurence; Burel, Fabrice; Loutelier-Bourhis, Corinne

    2012-06-15

    Polyamides (PA) are among the most widely used classes of polymers because of their attractive chemical and mechanical properties. In order to monitor original PA design, it is essential to develop analytical methods for the characterization of these compounds, which are mostly insoluble in usual solvents. A low molecular weight polyamide (PA11), synthesized with a chain limiter, has been used as a model compound and characterized by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). In the solvent-based approach, specific solvents for PA, i.e. trifluoroacetic acid (TFA) and hexafluoroisopropanol (HFIP), were tested. Solvent-based sample preparation methods, dried-droplet and thin layer, were optimized through the choice of matrix and salt. Solvent-based (thin layer) and solvent-free methods were then compared for this low solubility polymer. Ultra-high-performance liquid chromatography/electrospray ionization (UHPLC/ESI)-TOF-MS analyses were then used to confirm elemental compositions through accurate mass measurement. Sodium iodide (NaI) and 2,5-dihydroxybenzoic acid (2,5-DHB) are, respectively, the best cationizing agent and matrix. The dried-droplet sample preparation method led to inhomogeneous deposits, but the thin-layer method could overcome this problem. Moreover, the solvent-free approach was the easiest and safest sample preparation method, giving equivalent results to solvent-based methods. Linear as well as cyclic oligomers were observed. Although the PA molecular weights obtained by MALDI-TOF-MS were lower than those obtained by ¹H NMR and acid-base titration, this technique allowed us to determine the presence of cyclic and linear species, not differentiated by the other techniques. TFA was shown to induce modification of linear oligomers that permitted cyclic and linear oligomers to be clearly highlighted in spectra. Optimal sample preparation conditions were determined for the MALDI-TOF-MS analysis of PA11, a model of polyamide analogues. The advantages of the solvent-free and solvent-based approaches were shown. Molecular weight determination using MALDI was discussed. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Towards construction of ghost-free higher derivative gravity from bigravity

    NASA Astrophysics Data System (ADS)

    Akagi, Satoshi

    2018-06-01

    In this paper, the ghost-freeness of the higher derivative theory proposed by Hassan et al. in [Universe 1, 92 (2015), 10.3390/universe1020092] is investigated. Hassan et al. argued for the ghost-freeness of the higher derivative theory on the basis of an analysis in the linear approximation. However, in order to establish the complete correspondence, the model must be analyzed without any approximations. In this paper, we analyze the two-scalar model proposed in [Universe 1, 92 (2015), 10.3390/universe1020092] with arbitrary nonderivative interaction terms. We prove that, at any order in the perturbative parameters, the ghost can be eliminated for a model with any nonderivative interaction terms.

  18. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    PubMed

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no positive test has been obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases, together with all undiagnosed cases, stratified by year of HIV infection, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
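
    A minimal generative sketch of the two-level structure described above is given below as a forward simulation rather than posterior inference; the incidence level, testing rate, ten-year window, and the omission of progression to AIDS are all simplifying assumptions for illustration.

      # Hedged sketch: forward simulation of the two-level hierarchy. Level one draws
      # latent HIV infections per year from a Poisson; level two distributes each
      # infection cohort across diagnosis years using a testing rate. All numbers are
      # toy values, and AIDS diagnoses via disease progression are omitted for brevity.
      import numpy as np

      rng = np.random.default_rng(1)
      years = 10
      incidence = np.full(years, 4_000.0)        # Poisson intensity per year (assumed)
      test_rate = 0.35                           # annual HIV testing rate (assumed)

      aids_free_dx = np.zeros(years, dtype=int)  # diagnosed while AIDS-free, by calendar year
      undiagnosed = np.zeros(years, dtype=int)   # still undiagnosed at the end of the window

      for t in range(years):
          n_infected = rng.poisson(incidence[t])             # level one: latent infections
          remaining = n_infected
          for s in range(t, years):                          # level two: year of first positive test
              dx = rng.binomial(remaining, test_rate)
              aids_free_dx[s] += dx
              remaining -= dx
          undiagnosed[t] = remaining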

  19. Coarse-grained versus atomistic simulations: realistic interaction free energies for real proteins.

    PubMed

    May, Ali; Pool, René; van Dijk, Erik; Bijlard, Jochem; Abeln, Sanne; Heringa, Jaap; Feenstra, K Anton

    2014-02-01

    To assess whether two proteins will interact under physiological conditions, information on the interaction free energy is needed. Statistical learning techniques and docking methods for predicting protein-protein interactions cannot quantitatively estimate binding free energies. Full atomistic molecular simulation methods do have this potential, but are completely infeasible for large-scale applications in terms of the computational cost required. Here we investigate whether applying coarse-grained (CG) molecular dynamics simulations is a viable alternative for complexes of known structure. We calculate the free energy barrier with respect to the bound state based on molecular dynamics simulations using both a full atomistic and a CG force field for the TCR-pMHC complex and the MP1-p14 scaffolding complex. We find that the free energy barriers from the CG simulations are of similar accuracy as those from the full atomistic ones, while achieving a speedup of >500-fold. We also observe that extensive sampling is extremely important to obtain accurate free energy barriers, which is only within reach for the CG models. Finally, we show that the CG model preserves biological relevance of the interactions: (i) we observe a strong correlation between evolutionary likelihood of mutations and the impact on the free energy barrier with respect to the bound state; and (ii) we confirm the dominant role of the interface core in these interactions. Therefore, our results suggest that CG molecular simulations can realistically be used for the accurate prediction of protein-protein interaction strength. The Python analysis framework and data files are available for download at http://www.ibi.vu.nl/downloads/bioinformatics-2013-btt675.tgz.
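
    As a rough illustration of the kind of estimate involved, the sketch below obtains a free-energy profile by Boltzmann inversion of a histogram of an inter-protein distance and reads off a barrier relative to the bound state; the synthetic samples and the chosen transition region are assumptions, and this is not the authors' analysis framework.

      # Hedged sketch: free-energy profile F(r) = -kT ln P(r) from a sampled
      # inter-protein distance, and the barrier relative to the bound state.
      # The synthetic "trajectory" below stands in for real simulation output.
      import numpy as np

      kT = 2.494                                   # kJ/mol at ~300 K
      rng = np.random.default_rng(2)
      # Stand-in samples: mostly bound (~1 nm) with some unbound excursions (~3 nm)
      r = np.concatenate([rng.normal(1.0, 0.15, 80_000),
                          rng.normal(3.0, 0.40, 20_000)])

      hist, edges = np.histogram(r, bins=100, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      mask = hist > 0
      F = -kT * np.log(hist[mask])                 # free energy up to a constant
      F -= F.min()                                 # zero at the bound-state minimum

      c = centers[mask]
      transition = (c > 1.5) & (c < 2.5)           # assumed region between the two basins
      barrier = F[transition].max()                # barrier height w.r.t. the bound state [kJ/mol]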

  20. Theoretical study of reactive and nonreactive turbulent coaxial jets

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Wakelyn, N. T.

    1976-01-01

    The hydrodynamic properties and the reaction kinetics of axisymmetric coaxial turbulent jets having steady mean quantities are investigated. From the analysis, limited to free turbulent boundary layer mixing of such jets, it is found that the two-equation model of turbulence is adequate for most nonreactive flows. For the reactive flows, where an allowance must be made for second order correlations of concentration fluctuations in the finite rate chemistry for an initially inhomogeneous mixture, an equation similar to the concentration fluctuation equation of a related model is suggested. For diffusion limited reactions, the eddy breakup model based on concentration fluctuations is found satisfactory and simple to use. The theoretical results obtained from these various models are compared with some of the available experimental data.
