Science.gov

Sample records for functional modelling approach

  1. Thermoplasmonics modeling: A Green's function approach

    NASA Astrophysics Data System (ADS)

    Baffou, Guillaume; Quidant, Romain; Girard, Christian

    2010-10-01

    We extend the discrete dipole approximation (DDA) and the Green’s dyadic tensor (GDT) methods—previously dedicated to all-optical simulations—to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green’s function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and the heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
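
    The thermal Green's function superposition at the heart of this method can be sketched in a few lines: in a homogeneous medium of thermal conductivity kappa, a point heat source of power q at r0 raises the steady-state temperature by dT = q / (4*pi*kappa*|r - r0|). The sketch below is a minimal illustration of that superposition; all positions, powers, and the value of kappa are invented, not taken from the paper.

```python
import numpy as np

# Steady-state temperature rise from point heat sources via the thermal
# Green's function of a homogeneous medium:
#     dT(r) = q / (4 * pi * kappa * |r - r0|)
# Illustrative values only; the paper's method handles substrates and
# extended absorbers, which this sketch does not.

def temperature_rise(probe, sources, powers, kappa):
    """Steady-state temperature rise at `probe` from point heat sources."""
    probe = np.asarray(probe, dtype=float)
    sources = np.asarray(sources, dtype=float)
    distances = np.linalg.norm(sources - probe, axis=1)
    return np.sum(np.asarray(powers) / (4.0 * np.pi * kappa * distances))

# Two 1 nW nanoparticle sources in water (kappa ~ 0.6 W/m/K), each 100 nm
# from the probe point.
sources = [[0.0, 0.0, 0.0], [200e-9, 0.0, 0.0]]
dT = temperature_rise([100e-9, 0.0, 0.0], sources, [1e-9, 1e-9], kappa=0.6)
```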

  2. Functional state modelling approach validation for yeast and bacteria cultivations

    PubMed Central

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This constitutes strong structural verification of the functional state modelling theory, not only for a set of yeast cultivations but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
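
    The parameter-identification step can be illustrated with a toy genetic algorithm. This is a minimal sketch: the "cultivation" model here is a plain exponential biomass curve X(t) = X0 * exp(mu * t), and the data, bounds, and GA settings are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from an exponential growth model (illustrative stand-in
# for a fed-batch cultivation model).
t = np.linspace(0.0, 4.0, 20)
true_x0, true_mu = 0.5, 0.8
data = true_x0 * np.exp(true_mu * t)

def sse(params):
    """Sum of squared errors between model prediction and data."""
    x0, mu = params
    return np.sum((x0 * np.exp(mu * t) - data) ** 2)

def genetic_algorithm(fitness, bounds, pop_size=60, generations=120):
    low, high = np.array(bounds, dtype=float).T
    pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        alpha = rng.random((pop_size, len(bounds)))
        children = alpha * parents[:, 0] + (1.0 - alpha) * parents[:, 1]  # blend crossover
        children += rng.normal(0.0, 0.02, children.shape)                 # Gaussian mutation
        pop = np.clip(children, low, high)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]

best = genetic_algorithm(sse, bounds=[(0.1, 2.0), (0.1, 2.0)])
```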

  4. Stochastic Functional Data Analysis: A Diffusion Model-based Approach

    PubMed Central

    Zhu, Bin; Song, Peter X.-K.; Taylor, Jeremy M.G.

    2011-01-01

    Summary: This paper presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data, and allow straightforward and meaningful interpretation. The likelihood of the models is derived with Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on prostate-specific antigen data, where we also show how the models can be used for forecasting. PMID:21418053
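
    The Euler approximation named above is the standard Euler-Maruyama discretisation of a diffusion. The sketch below shows that building block on an Ornstein-Uhlenbeck process; the drift, volatility, and all numbers are illustrative choices, not the paper's prostate-antigen model.

```python
import numpy as np

# Euler-Maruyama discretisation of a diffusion dX = drift(X) dt + vol(X) dW.
# Each step adds the drift increment plus a Gaussian increment of variance dt.

def euler_maruyama(drift, vol, x0, t_grid, rng):
    x = np.empty(len(t_grid))
    x[0] = x0
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        dw = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
        x[i] = x[i - 1] + drift(x[i - 1]) * dt + vol(x[i - 1]) * dw
    return x

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 201)
# Ornstein-Uhlenbeck: mean-reverting toward 0, a common smooth-process prior.
path = euler_maruyama(lambda x: -2.0 * x, lambda x: 0.3, 5.0, t, rng)
```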

  5. Functional renormalization group approach to the Kraichnan model.

    PubMed

    Pagani, Carlo

    2015-09-01

    We study the anomalous scaling of the structure functions of a scalar field advected by a random Gaussian velocity field, the Kraichnan model, by means of functional renormalization group techniques. We analyze the symmetries of the model and derive the leading correction to the structure functions considering the renormalization of composite operators and applying the operator product expansion.

  6. Functional Renormalization Group Approach to the Sine-Gordon Model

    SciTech Connect

    Nagy, S.; Sailer, K.; Nandori, I.; Polonyi, J.

    2009-06-19

    The renormalization group flow is presented for the two-dimensional sine-Gordon model within the framework of the functional renormalization group method by including the wave-function renormalization constant. The Kosterlitz-Thouless-Berezinski type phase structure is recovered as the interpolating scaling law between two competing IR attractive area of the global renormalization group flow.

  7. The Thirring-Wess model revisited: a functional integral approach

    SciTech Connect

    Belvedere, L.V. (E-mail: armflavio@if.uff.br)

    2005-06-01

    We consider the Wess-Zumino-Witten theory to obtain the functional integral bosonization of the Thirring-Wess model with an arbitrary regularization parameter. Proceeding by a systematic decomposition of the Bose field algebra into gauge-invariant and gauge-non-invariant field subalgebras, we obtain the local decoupled quantum action. The generalized operator solutions for the equations of motion are reconstructed from the functional integral formalism. The isomorphism between QED₂ (QCD₂) with broken gauge symmetry by a regularization prescription and the Abelian (non-Abelian) Thirring-Wess model with a fixed bare mass for the meson field is established.

  8. A Model-Based Approach to Constructing Music Similarity Functions

    NASA Astrophysics Data System (ADS)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  9. A functional-structural modelling approach to autoregulation of nodulation.

    PubMed

    Han, Liqi; Gresshoff, Peter M; Hanan, Jim

    2011-04-01

    Autoregulation of nodulation is a long-distance shoot-root signalling regulatory system that regulates nodule meristem proliferation in legume plants. However, due to the intricacy and subtleness of the signalling nature in plants, molecular and biochemical details underlying mechanisms of autoregulation of nodulation remain largely unknown. The purpose of this study is to use functional-structural plant modelling to investigate the complexity of this signalling system. There are two major challenges to be met: modelling the 3D architecture of legume roots with nodulation and co-ordinating signalling-developmental processes with various rates. Soybean (Glycine max) was chosen as the target legume. Its root system was observed to capture lateral root branching and nodule distribution patterns. L-studio, a software tool supporting context-sensitive L-system modelling, was used for the construction of the architectural model and integration with the internal signalling. A branching pattern with regular radial angles was found between soybean lateral roots, from which a root mapping method was developed to characterize the laterals. Nodules were mapped based on 'nodulation section' to reveal nodule distribution. A root elongation algorithm was then developed for simulation of root development. Based on the use of standard sub-modules, a synchronization algorithm was developed to co-ordinate multi-rate signalling and developmental processes. The modelling methods developed here not only allow recreation of legume root architecture with lateral branching and nodulation details, but also enable parameterization of internal signalling to produce different regulation results. This provides the basis for using virtual experiments to help in investigating the signalling mechanisms at work.
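
    The L-system machinery underlying the model can be illustrated in miniature. The sketch below is a context-free toy (Lindenmayer's classic algae system), not the context-sensitive L-studio soybean model of the paper; it shows only the core rewriting step on which such architectural models are built.

```python
# Context-free L-system rewriting: every symbol of the current string is
# replaced in parallel according to the production rules.

def lsystem(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)  # parallel rewriting
    return s

# Lindenmayer's algae system: A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
strings = [lsystem("A", rules, n) for n in range(6)]
```

    The string lengths follow the Fibonacci sequence, a classic sanity check for a correct parallel-rewriting implementation.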

  10. Towards a model independent approach to fragmentation functions

    NASA Astrophysics Data System (ADS)

    Christova, Ekaterina; Leader, Elliot

    2009-01-01

    We show that the difference cross sections in unpolarized semi-inclusive deep inelastic scattering e+N→e+h+X and pp hadron production p+p→h+X determine independently, in a model independent way, in any order in QCD, the two fragmentation functions (FFs): Duh-h¯ and Ddh-h¯, h=π±, K± or a sum over charged hadrons. If both K± and Ks0 are measured, then e+e-→K+X, e+N→e+K+X, and p+p→K+X present independent measurements of just one FF: Du-dK++K-. The above results allow one to test the existing parametrizations, obtained with various different assumptions about the FFs, and to test the Q2 evolution and factorization.

  11. A Model Independent Approach for Determining the Fragmentation Functions

    NASA Astrophysics Data System (ADS)

    Christova, Ekaterina; Leader, Elliot

    2009-08-01

    We show that the difference cross sections in unpolarized semi-inclusive deep inelastic scattering (SIDIS) e+N→e+h+X and pp hadron production p+p→h+X determine independently, in a model independent way, in any order in Quantum Chromodynamics (QCD), the two FFs: Duh-h¯ and Ddh-h¯, h = π±, K± or a sum over charged hadrons. If both K± and Ks0 are measured, then e+e-→K+X, e+N→e+K+X and p+p→K+X present independent measurements of just one FF: Du-dK++K-. The above results allow one to test the existing parameterizations, obtained with various different assumptions about the FFs, and to test the Q2 evolution and factorization.

  12. Model approach to starch functionality in bread making.

    PubMed

    Goesaert, Hans; Leman, Pedro; Delcour, Jan A

    2008-08-13

    We used modified wheat starches in gluten-starch flour models to study the role of starch in bread making. Incorporation of hydroxypropylated starch in the recipe reduced loaf volume and initial crumb firmness and increased crumb gas cell size. Firming rate and firmness after storage increased for loaves containing the least hydroxypropylated starch. Inclusion of cross-linked starch had little effect on loaf volume or crumb structure but increased crumb firmness. The firming rate was mostly similar to that of control samples. Presumably, the moment and extent of starch gelatinization and the concomitant water migration influence the structure formation during baking. Initial bread firmness seems determined by the rigidity of the gelatinized granules and leached amylose. Amylopectin retrogradation and strengthening of a long-range network by intensifying the inter- and intramolecular starch-starch and possibly also starch-gluten interactions (presumably because of water incorporation in retrograded amylopectin crystallites) play an important role in firming.

  13. Approaches to Modelling the Dynamical Activity of Brain Function Based on the Electroencephalogram

    NASA Astrophysics Data System (ADS)

    Liley, David T. J.; Frascoli, Federico

    The brain is arguably the quintessential complex system as indicated by the patterns of behaviour it produces. Despite many decades of concentrated research efforts, we remain largely ignorant regarding the essential processes that regulate and define its function. While advances in functional neuroimaging have provided welcome windows into the coarse organisation of the neuronal networks that underlie a range of cognitive functions, they have largely ignored the fact that behaviour, and by inference brain function, unfolds dynamically. Modelling the brain's dynamics is therefore a critical step towards understanding the underlying mechanisms of its functioning. To date, models have concentrated on describing the sequential organisation of either abstract mental states (functionalism, hard AI) or the objectively measurable manifestations of the brain's ongoing activity (rCBF, EEG, MEG). While the former types of modelling approach may seem to better characterise brain function, they do so at the expense of not making a definite connection with the actual physical brain. Of the latter, only models of the EEG (or MEG) offer a temporal resolution well matched to the anticipated temporal scales of brain function (mental processes). This chapter will outline the most pertinent of these modelling approaches, and illustrate, using the electrocortical model of Liley et al., how the detailed application of the methods of nonlinear dynamics and bifurcation theory is central to exploring and characterising their various dynamical features. The rich repertoire of dynamics revealed by such dynamical systems approaches arguably represents a critical step towards an understanding of the complexity of brain function.

  14. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    NASA Astrophysics Data System (ADS)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2017-01-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
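
    The one-parameter-at-a-time sampling scheme can be sketched on a toy problem. The linear forward model, noise level, and proposal settings below are invented stand-ins for receiver-function synthesis, not the study's actual forward modeling; the sketch shows only the Metropolis-within-Gibbs structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward problem: data = m1*g1 + m2*g2 + noise.
x = np.linspace(0.0, 1.0, 50)
g1, g2 = np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)
true_m = np.array([1.5, -0.7])
data = true_m[0] * g1 + true_m[1] * g2 + rng.normal(0.0, 0.05, x.size)

def log_like(m):
    resid = data - (m[0] * g1 + m[1] * g2)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

m = np.zeros(2)
samples = []
for _ in range(4000):
    for j in range(2):                       # Gibbs sweep: one parameter at a time
        prop = m.copy()
        prop[j] += rng.normal(0.0, 0.05)     # random-walk proposal
        if np.log(rng.random()) < log_like(prop) - log_like(m):
            m = prop                         # Metropolis accept/reject
    samples.append(m.copy())
post_mean = np.mean(samples[1000:], axis=0)  # discard burn-in
```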

  16. A Radiative Transfer Equation/Phase Function Approach to Vegetation Canopy Reflectance Modeling

    NASA Astrophysics Data System (ADS)

    Randolph, Marion Herbert

    Vegetation canopy reflectance models currently in use differ considerably in their treatment of the radiation scattering problem, and it is this fundamental difference which stimulated this investigation of the radiative transfer equation/phase function approach. The primary objective of this thesis is the development of vegetation canopy phase functions which describe the probability of radiation scattering within a canopy in terms of its biological and physical characteristics. In this thesis a technique based upon quadrature formulae is used to numerically generate a variety of vegetation canopy phase functions. Based upon leaf inclination distribution functions, phase functions are generated for plagiophile, extremophile, erectophile, spherical, planophile, blue grama (Bouteloua gracilis), and soybean canopies. The vegetation canopy phase functions generated are symmetric with respect to the incident and exitant angles, and hence satisfy the principle of reciprocity. The remaining terms in the radiative transfer equation are also derived in terms of canopy geometry and optical properties to complete the development of the radiative transfer equation/phase function description for vegetation canopy reflectance modeling. In order to test the radiative transfer equation/phase function approach the iterative discrete ordinates method for solving the radiative transfer equation is implemented. In comparison with field data, the approach tends to underestimate the visible reflectance and overestimate infrared reflectance. The approach does compare well, however, with other extant canopy reflectance models; for example, it agrees to within ten to fifteen percent of the Suits model (Suits, 1972). Sensitivity analysis indicates that canopy geometry may influence reflectance as much as 100 percent for a given wavelength. Optical thickness produces little change in reflectance after a depth of 2.5 (Leaf area index of 4.0) is reached, and reflectance generally increases

  17. Functional modelling of planar cell polarity: an approach for identifying molecular function

    PubMed Central

    2013-01-01

    Background: Cells in some tissues acquire a polarisation in the plane of the tissue in addition to apical-basal polarity. This polarisation is commonly known as planar cell polarity and has been found to be important in developmental processes, as planar polarity is required to define the in-plane tissue coordinate system at the cellular level. Results: We have built an in-silico functional model of cellular polarisation that includes cellular asymmetry, cell-cell signalling and a response to a global cue. The model has been validated and parameterised against domineering non-autonomous wing hair phenotypes in Drosophila. Conclusions: We have carried out a systematic comparison of in-silico polarity phenotypes with patterns observed in vivo under different genetic manipulations in the wing. This has allowed us to classify the specific functional roles of proteins involved in generating cell polarity, providing new hypotheses about their specific functions, in particular for Pk and Dsh. The predictions from the model allow direct assignment of functional roles of genes from genetic mosaic analysis of Drosophila wings. PMID:23672397

  18. A Bayesian approach to functional-based multilevel modeling of longitudinal data: applications to environmental epidemiology.

    PubMed

    Berhane, Kiros; Molitor, Nuoo-Ting

    2008-10-01

    Flexible multilevel models are proposed to allow for cluster-specific smooth estimation of growth curves in a mixed-effects modeling format that includes subject-specific random effects on the growth parameters. Attention is then focused on models that examine between-cluster comparisons of the effects of an ecologic covariate of interest (e.g. air pollution) on nonlinear functionals of growth curves (e.g. maximum rate of growth). A Gibbs sampling approach is used to get posterior mean estimates of nonlinear functionals along with their uncertainty estimates. A second-stage ecologic random-effects model is used to examine the association between a covariate of interest (e.g. air pollution) and the nonlinear functionals. A unified estimation procedure is presented along with its computational and theoretical details. The models are motivated by, and illustrated with, lung function and air pollution data from the Southern California Children's Health Study.

  19. A corpus driven approach applying the "frame semantic" method for modeling functional status terminology.

    PubMed

    Ruggieri, Alexander P; Pakhomov, Serguei V; Chute, Christopher G

    2004-01-01

    In an effort to unearth semantic models that could prove fruitful to functional-status terminology development we applied the "frame semantic" method, derived from the linguistic theory of thematic roles currently exemplified in the Berkeley "FrameNet" Project. Full descriptive sentences with functional-status conceptual meaning were derived from structured content within a corpus of questionnaire assessment instruments commonly used in clinical practice for functional-status assessment. Syntactic components in those sentences were delineated through manual annotation and mark-up. The annotated syntactic constituents were tagged as frame elements according to their semantic role within the context of the derived functional-status expression. Through this process generalizable "semantic frames" were elaborated with recurring "frame elements". The "frame semantic" method as an approach to rendering semantic models for functional-status terminology development and its use as a basis for machine recognition of functional status data in clinical narratives are discussed.

  20. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Koyama, Kazuya

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with <= 6% deviations in the first two correlation function multipoles for all models for redshift space separations 50 Mpc/h <= s <= 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.
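
    The Fourier-transform step relating P(k) to the real-space correlation function reduces, for an isotropic spectrum, to the spherical transform xi(r) = (1/2 pi^2) * integral of k^2 P(k) j0(kr) dk, with j0(x) = sin(x)/x. A minimal numeric sketch follows; the Gaussian P(k) is an invented test spectrum chosen only because its transform is known in closed form, not the RegPT spectrum used in the paper.

```python
import numpy as np

def xi_from_pk(r, k, pk):
    """Spherical (j0) transform of a tabulated isotropic power spectrum."""
    j0 = np.sinc(k * r / np.pi)          # np.sinc(x) = sin(pi x)/(pi x)
    dk = k[1] - k[0]                     # uniform grid assumed
    return np.sum(k ** 2 * pk * j0) * dk / (2.0 * np.pi ** 2)

# Gaussian test spectrum P(k) = exp(-a k^2); the exact transform is
# xi(r) = sqrt(pi)/(4 a^1.5) * exp(-r^2/(4a)) / (2 pi^2).
a = 0.5
k = np.linspace(1e-4, 20.0, 20000)
pk = np.exp(-a * k ** 2)
r = 1.3
xi_num = xi_from_pk(r, k, pk)
xi_exact = np.sqrt(np.pi) / (4.0 * a ** 1.5) * np.exp(-r ** 2 / (4.0 * a)) / (2.0 * np.pi ** 2)
```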

  1. Global land model development: time to shift from a plant functional type to a plant functional trait approach

    NASA Astrophysics Data System (ADS)

    Reich, P. B.; Butler, E. E.

    2015-12-01

    This project will advance global land models by shifting from the current plant functional type approach to one that better utilizes what is known about the importance and variability of plant traits, within a framework of simultaneously improving fundamental physiological relations that are at the core of model carbon cycling algorithms. Existing models represent the global distribution of vegetation types using the Plant Functional Type concept. Plant Functional Types are classes of plant species with similar evolutionary and life history, with presumably similar responses to environmental conditions like CO2, water and nutrient availability. Fixed properties for each Plant Functional Type are specified through a collection of physiological parameters, or traits. These traits, mostly physiological in nature (e.g., leaf nitrogen and longevity), are used in model algorithms to estimate ecosystem properties and/or drive calculated process rates. In most models, 5 to 15 functional types represent terrestrial vegetation; in essence, they assume there are a total of only 5 to 15 different kinds of plants on the entire globe. This assumption of constant plant traits captured within the functional type concept has serious limitations, as a single set of traits does not reflect trait variation observed within and between species and communities. While this simplification was necessary in decades past, substantial improvement is now possible. Rather than assigning a small number of constant parameter values to all grid cells in a model, procedures will be developed that predict a frequency distribution of values for any given grid cell. Thus, the mean and variance, and how these change with time, will inform and improve model performance. The trait-based approach will improve land modeling by (1) incorporating patterns and heterogeneity of traits into model parameterization, thus evolving away from a framework that considers large areas of vegetation to have near identical trait

  2. Modeling and Simulation Approaches for Cardiovascular Function and Their Role in Safety Assessment

    PubMed Central

    Collins, TA; Bergenholm, L; Abdulla, T; Yates, JWT; Evans, N; Chappell, MJ; Mettetal, JT

    2015-01-01

    Systems pharmacology modeling and pharmacokinetic-pharmacodynamic (PK/PD) analysis of drug-induced effects on cardiovascular (CV) function plays a crucial role in understanding the safety risk of new drugs. The aim of this review is to outline the current modeling and simulation (M&S) approaches to describe and translate drug-induced CV effects, with an emphasis on how this impacts drug safety assessment. Current limitations are highlighted and recommendations are made for future effort in this vital area of drug research. PMID:26225237

  3. A new approach to wall modeling in LES of incompressible flow via function enrichment

    NASA Astrophysics Data System (ADS)

    Krank, Benjamin; Wall, Wolfgang A.

    2016-07-01

    A novel approach to wall modeling for the incompressible Navier-Stokes equations including flows of moderate and large Reynolds numbers is presented. The basic idea is that a problem-tailored function space allows prediction of turbulent boundary layer gradients with very coarse meshes. The proposed function space consists of a standard polynomial function space plus an enrichment, which is constructed using Spalding's law-of-the-wall. The enrichment function is not enforced but "allowed" in a consistent way and the overall methodology is much more general and also enables other enrichment functions. The proposed method is closely related to detached-eddy simulation as near-wall turbulence is modeled statistically and large eddies are resolved in the bulk flow. Interpreted in terms of a three-scale separation within the variational multiscale method, the standard scale resolves large eddies and the enrichment scale represents boundary layer turbulence in an averaged sense. The potential of the scheme is shown applying it to turbulent channel flow at friction Reynolds numbers from Reτ = 590 up to 5,000, flow over periodic constrictions at Reynolds numbers ReH = 10,595 and 19,000, as well as backward-facing step flow at Reh = 5,000, all with extremely coarse meshes. Excellent agreement with experimental and DNS data is observed with the first grid point located at up to y1+ = 500 and especially under adverse pressure gradients as well as in separated flows.
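
    Spalding's law-of-the-wall, used above as the enrichment function, gives y+ explicitly in terms of u+; inverting it numerically recovers the mean-velocity profile the enriched shape function encodes. A small sketch follows; the constants kappa = 0.41 and B = 5.0 are conventional values, and the bisection inversion is an illustrative choice, not the paper's implementation.

```python
import numpy as np

# Spalding's law-of-the-wall:
#   y+ = u+ + exp(-kB) * (exp(k u+) - 1 - k u+ - (k u+)^2/2 - (k u+)^3/6)
KAPPA, B = 0.41, 5.0

def spalding_yplus(uplus):
    ku = KAPPA * uplus
    return uplus + np.exp(-KAPPA * B) * (
        np.exp(ku) - 1.0 - ku - ku ** 2 / 2.0 - ku ** 3 / 6.0
    )

def spalding_uplus(yplus, tol=1e-10):
    """Invert the (monotone) law by bisection to get u+ at a given y+."""
    lo, hi = 0.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if spalding_yplus(mid) < yplus:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

uplus = spalding_uplus(100.0)    # mean velocity at y+ = 100
```

    In the viscous sublayer the law collapses to u+ = y+, which gives a quick sanity check on the inversion.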

  4. A signal subspace approach for modeling the hemodynamic response function in fMRI.

    PubMed

    Hossein-Zadeh, Gholam-Ali; Ardekani, Babak A; Soltanian-Zadeh, Hamid

    2003-10-01

    Many fMRI analysis methods use a model for the hemodynamic response function (HRF). Common models of the HRF, such as the Gaussian or Gamma functions, have parameters that are usually selected a priori by the data analyst. A new method is presented that characterizes the HRF over a wide range of parameters via three basis signals derived using principal component analysis (PCA). Covering the HRF variability, these three basis signals together with the stimulation pattern define signal subspaces which are applicable to both linear and nonlinear modeling and identification of the HRF and for various activation detection strategies. Analysis of simulated fMRI data using the proposed signal subspace showed increased detection sensitivity compared to the case of using a previously proposed trigonometric subspace. The methodology was also applied to activation detection in both event-related and block design experimental fMRI data using both linear and nonlinear modeling of the HRF. The activated regions were consistent with previous studies, indicating the ability of the proposed approach in detecting brain activation without a priori assumptions about the shape parameters of the HRF. The utility of the proposed basis functions in identifying the HRF is demonstrated by estimating the HRF in different activated regions.

  5. A function space approach to state and model error estimation for elliptic systems

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed. While this paper focuses on theoretical developments, applications arising in the area of large structure static shape determination are contained in a closely related paper (Rodriguez and Scheid, 1982).
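
In the discretely measured case, the minimum-norm solution under linear constraints has a closed form that the matrix-equation route exploits. A small sketch with generic matrices (not the elliptic-system operators of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))  # 3 discrete measurements constraining 8 unknowns
b = rng.standard_normal(3)

# Minimum-norm solution of the underdetermined constraints A v = b:
#   v* = A^T (A A^T)^{-1} b
v_star = A.T @ np.linalg.solve(A @ A.T, b)

# Any other solution differs from v* by a null-space vector of A, which can
# only increase the norm, since v* is orthogonal to the null space.
P_null = np.eye(8) - np.linalg.pinv(A) @ A   # projector onto null(A)
other = v_star + P_null @ rng.standard_normal(8)
```

The function-space version of the paper replaces `A` by an integral operator, and the same normal-equation structure becomes a Fredholm equation of the second kind.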

  6. A function space approach to state and model error estimation for elliptic systems

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed.

  7. Modeling and multi-response optimization of pervaporation of organic aqueous solutions using desirability function approach.

    PubMed

    Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A

    2009-08-15

    The factorial design of experiments and desirability function approach has been applied for multi-response optimization in the pervaporation separation process. Two organic aqueous solutions were considered as model mixtures, water/acetonitrile and water/ethanol. Two responses were employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of organic compound in the feed solution, and downstream pressure) on the pervaporation responses have been investigated. The experiments were performed according to a 2^3 full factorial experimental design. The factorial models have been obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. Factorial models have been used to develop the overall desirability function. In addition, overlap contour plots were presented to identify the desirability zone and to determine the optimum point. The optimal operating conditions were found to be, for the water/acetonitrile mixture, a feed temperature of 55 degrees C, an initial concentration of 6.58% and a downstream pressure of 13.99 kPa, while for the water/ethanol mixture they were a feed temperature of 55 degrees C, an initial concentration of 4.53% and a downstream pressure of 9.57 kPa. Under these optimum conditions, an improvement in both the total permeate flux and the selectivity was observed experimentally.
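
The desirability machinery itself is compact. Below is a hedged sketch of the standard Derringer-Suich construction with illustrative numbers (not the paper's measured responses): each response is mapped to [0, 1] and the overall desirability is the geometric mean of the individual ones:

```python
import numpy as np

def desirability_max(y, low, target, r=1.0):
    """Derringer-Suich one-sided desirability for a response to be maximised:
    0 below `low`, 1 above `target`, a power ramp in between."""
    d = (np.asarray(y, dtype=float) - low) / (target - low)
    return np.clip(d, 0.0, 1.0) ** r

def overall_desirability(ds):
    """Geometric mean of individual desirabilities (zero if any response fails)."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / ds.size))

# Hypothetical responses at one design point: total permeate flux and
# organic selectivity (numbers are illustrative only).
d_flux = desirability_max(1.8, low=0.5, target=2.0)
d_sel = desirability_max(60.0, low=10.0, target=80.0)
D = overall_desirability([d_flux, d_sel])
```

In the factorial setting, `D` is evaluated on the fitted response surfaces over the whole factor space, and the optimum point is where `D` peaks.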

  8. Optogenetic approaches to evaluate striatal function in animal models of Parkinson disease

    PubMed Central

    Parker, Krystal L.; Kim, Youngcho; Alberico, Stephanie L.; Emmons, Eric B.; Narayanan, Nandakumar S.

    2016-01-01

    Optogenetics refers to the ability to control cells that have been genetically modified to express light-sensitive ion channels. The introduction of optogenetic approaches has facilitated the dissection of neural circuits. Optogenetics allows for the precise stimulation and inhibition of specific sets of neurons and their projections with fine temporal specificity. These techniques are ideally suited to investigating neural circuitry underlying motor and cognitive dysfunction in animal models of human disease. Here, we focus on how optogenetics has been used over the last decade to probe striatal circuits that are involved in Parkinson disease, a neurodegenerative condition involving motor and cognitive abnormalities resulting from degeneration of midbrain dopaminergic neurons. The precise mechanisms underlying the striatal contribution to both cognitive and motor dysfunction in Parkinson disease are unknown. Although optogenetic approaches are somewhat removed from clinical use, insight from these studies can help identify novel therapeutic targets and may inspire new treatments for Parkinson disease. Elucidating how neuronal and behavioral functions are influenced and potentially rescued by optogenetic manipulation in animal models could prove to be translatable to humans. These insights can be used to guide future brain-stimulation approaches for motor and cognitive abnormalities in Parkinson disease and other neuropsychiatric diseases. PMID:27069384

  9. Optogenetic approaches to evaluate striatal function in animal models of Parkinson disease.

    PubMed

    Parker, Krystal L; Kim, Youngcho; Alberico, Stephanie L; Emmons, Eric B; Narayanan, Nandakumar S

    2016-03-01

    Optogenetics refers to the ability to control cells that have been genetically modified to express light-sensitive ion channels. The introduction of optogenetic approaches has facilitated the dissection of neural circuits. Optogenetics allows for the precise stimulation and inhibition of specific sets of neurons and their projections with fine temporal specificity. These techniques are ideally suited to investigating neural circuitry underlying motor and cognitive dysfunction in animal models of human disease. Here, we focus on how optogenetics has been used over the last decade to probe striatal circuits that are involved in Parkinson disease, a neurodegenerative condition involving motor and cognitive abnormalities resulting from degeneration of midbrain dopaminergic neurons. The precise mechanisms underlying the striatal contribution to both cognitive and motor dysfunction in Parkinson disease are unknown. Although optogenetic approaches are somewhat removed from clinical use, insight from these studies can help identify novel therapeutic targets and may inspire new treatments for Parkinson disease. Elucidating how neuronal and behavioral functions are influenced and potentially rescued by optogenetic manipulation in animal models could prove to be translatable to humans. These insights can be used to guide future brain-stimulation approaches for motor and cognitive abnormalities in Parkinson disease and other neuropsychiatric diseases.

  10. General atomistic approach for modeling metal-semiconductor interfaces using density functional theory and nonequilibrium Green's function

    NASA Astrophysics Data System (ADS)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders; Brandbyge, Mads; Stokbro, Kurt

    2016-04-01

    Metal-semiconductor contacts are a pillar of modern semiconductor technology. Historically, their microscopic understanding has been hampered by the inability of traditional analytical and numerical methods to fully capture the complex physics governing their operating principles. Here we introduce an atomistic approach based on density functional theory and nonequilibrium Green's function, which includes all the relevant ingredients required to model realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via I-Vbias curve simulations. We apply this method to characterize an Ag/Si interface relevant for photovoltaic applications and study the rectifying-to-Ohmic transition as a function of the semiconductor doping. We also demonstrate that the standard "activation energy" method for the analysis of I-Vbias data might be inaccurate for nonideal interfaces as it neglects electron tunneling, and that finite-size atomistic models have problems in describing these interfaces in the presence of doping due to a poor representation of space-charge effects. Conversely, the present method deals effectively with both issues, thus representing a valid alternative to conventional procedures for the accurate characterization of metal-semiconductor interfaces.
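
At the heart of such transport calculations sits the Landauer current integral, which an NEGF calculation supplies with a first-principles transmission function. The sketch below uses a crude step-function transmission above a barrier merely to show the formula's structure; the barrier height and grid are arbitrary assumptions, not a DFT-NEGF result:

```python
import numpy as np

KB = 8.617333e-5         # Boltzmann constant, eV/K
E_CHARGE = 1.602177e-19  # elementary charge, C
H_PLANCK = 4.135668e-15  # Planck constant, eV*s

def fermi(E, mu, T):
    """Fermi-Dirac occupation at chemical potential mu and temperature T."""
    return 1.0 / (1.0 + np.exp((E - mu) / (KB * T)))

def landauer_current(V, T=300.0, phi_b=0.5):
    """Landauer current I = (2e/h) * integral T(E) [f_L(E) - f_R(E)] dE,
    with a toy step transmission T(E) = 1 for E > phi_b (eV). Returns amperes."""
    E = np.linspace(-1.0, 2.0, 6001)      # energy grid, eV
    dE = E[1] - E[0]
    Tr = (E > phi_b).astype(float)        # toy transmission function
    muL, muR = 0.5 * V, -0.5 * V          # symmetric bias drop, in eV
    window = fermi(E, muL, T) - fermi(E, muR, T)
    return 2.0 * E_CHARGE / H_PLANCK * float(np.sum(Tr * window) * dE)
```

With a realistic, energy-dependent T(E) from NEGF, the same integral captures both thermionic emission over the barrier and the tunneling contribution that the activation energy method neglects.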

  11. Optimizing the general linear model for functional near-infrared spectroscopy: an adaptive hemodynamic response function approach

    PubMed Central

    Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju

    2014-01-01

    An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with different cognitive loads over the time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
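
The adaptive step can be illustrated by sweeping the peak delay of a model HRF and keeping the delay whose GLM best explains the data. The following is a simplified sketch on synthetic data, assuming a generic gamma HRF and an arbitrary task block rather than the paper's verbal fluency paradigm:

```python
import numpy as np

dt = 0.1                                  # sampling period (s); ~10 Hz, fNIRS-like
t = np.arange(0.0, 20.0, dt)
n = 600
stim = np.zeros(n)
stim[100:200] = 1.0                       # one 10 s task block

def gamma_hrf(peak, shape=6.0):
    """Gamma-shaped HRF whose peak delay is the free temporal parameter."""
    scale = peak / (shape - 1.0)          # gamma mode = (shape - 1) * scale
    h = t ** (shape - 1.0) * np.exp(-t / scale)
    return h / h.sum()

# Synthetic "observed" signal generated with a 7 s peak delay plus noise.
rng = np.random.default_rng(1)
y = 2.0 * np.convolve(stim, gamma_hrf(7.0))[:n] + 0.05 * rng.standard_normal(n)

# Adaptive step: sweep the peak delay, refit the GLM, keep the best fit.
best_peak, best_rss = None, np.inf
for peak in np.arange(3.0, 10.5, 0.5):
    X = np.column_stack([np.convolve(stim, gamma_hrf(peak))[:n], np.ones(n)])
    _beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    if res[0] < best_rss:
        best_peak, best_rss = float(peak), float(res[0])
```

Running the same sweep separately on oxy- and deoxy-Hb channels yields the signal-specific optimized peak delays that the study reports.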

  12. Modeling the link between soil microbial community structure and function in a bottom-up approach

    NASA Astrophysics Data System (ADS)

    Kaiser, C.; Richter, A.; Franklin, O.; Evans, S. E.; Dieckmann, U.

    2012-12-01

    Understanding mechanisms of soil carbon (C) turnover requires understanding the link between microbial community dynamics and soil decomposition processes. We present here an individual-based model that aims at elucidating this link by a bottom-up approach. Our approach differs from traditional soil C cycling models in that the overall dynamics of soil organic matter turnover emerges as the result of interactions between individual microbes at the soil microsite level, rather than being described by stock and flow rate equations at the bulk soil level. All soil microbes are modeled individually, each belonging to one of several functional groups defined by functional traits. Specifically, functional traits determine (1) growth and turnover rates, (2) production of extracellular enzymes and (3) microbial cell stoichiometry. Our model incorporates competition for space and nutrients (C and nitrogen, N) as well as synergistic interactions between individual microbes in a spatially structured environment represented by a two-dimensional grid. Due to different C and N limitations of different functional groups, community composition is sensitive to the availability of complex and labile C and N. Thus, altered resource availability changes microbial community composition, which in turn affects CO2 and N release from the soil. In our model, microbes constantly alter their own environment through the decomposition of different substrates, thereby exerting a feedback on community composition, which leads to a succession of microbial groups. We used the model's intrinsic link between resource availability, community dynamics and decomposition function to investigate the mechanism underlying the rhizosphere priming effect (i.e. increased decomposition of older soil C triggered by the input of labile C). In particular, we examined the spatial growth of a root releasing exudates of varying C:N ratios under the presence or absence of different functional groups. 
We find that a

  13. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

    Ionospheric tomography uses the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Because the satellite-receiver geometry provides incomplete measurements, it is a typical ill-posed problem, and overcoming this ill-posedness remains a central research challenge. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance and a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used for independent validation. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
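
The Tikhonov step itself reduces to a regularized normal-equation solve around the background field. The sketch below uses a toy ray matrix and an identity regularizer merely to show the role of the parameter that the model function approach selects automatically; the matrices are illustrative assumptions, not a real ray geometry:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 100                        # sTEC rays (rows) vs. voxels (columns)
A = rng.random((m, n))                # toy ray-geometry matrix
x_true = 2.0 + np.sin(np.linspace(0.0, 3.0 * np.pi, n))
x_bg = np.full(n, 2.0)                # background field (IRI-like climatology)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def tikhonov(lam):
    """x(lam) = argmin ||A x - b||^2 + lam * ||x - x_bg||^2."""
    dx = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ (b - A @ x_bg))
    return x_bg + dx

# lam -> infinity returns the background; lam -> 0 fits the data exactly
# (the problem is underdetermined); the regularization parameter trades
# the two off, which is the choice the model function approach automates.
x_small, x_mid, x_huge = tikhonov(1e-8), tikhonov(1e-1), tikhonov(1e8)
```

A well-chosen intermediate `lam` keeps the data fit while pulling the unobserved voxels toward the climatology, improving on the raw background.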

  14. Plant functional diversity increases grassland productivity-related water vapor fluxes: an Ecotron and modeling approach.

    PubMed

    Milcu, Alexandru; Eugster, Werner; Bachmann, Dörte; Guderle, Marcus; Roscher, Christiane; Gockele, Annette; Landais, Damien; Ravel, Olivier; Gessler, Arthur; Lange, Markus; Ebeling, Anne; Weisser, Wolfgang W; Roy, Jacques; Hildebrandt, Anke; Buchmann, Nina

    2016-08-01

    The impact of species richness and functional diversity of plants on ecosystem water vapor fluxes has been little investigated. To address this knowledge gap, we combined a lysimeter setup in a controlled environment facility (Ecotron) with large ecosystem samples/monoliths originating from a long-term biodiversity experiment (The Jena Experiment) and a modeling approach. Our goals were (1) quantifying the impact of plant species richness (four vs. 16 species) on day- and nighttime ecosystem water vapor fluxes; (2) partitioning ecosystem evapotranspiration into evaporation and plant transpiration using the Shuttleworth and Wallace (SW) energy partitioning model; and (3) identifying the most parsimonious predictors of water vapor fluxes using plant functional-trait-based metrics such as functional diversity and community weighted means. Daytime measured and modeled evapotranspiration were significantly higher in the higher plant diversity treatment, suggesting increased water acquisition. The SW model suggests that, at low plant species richness, a higher proportion of the available energy was diverted to evaporation (a non-productive flux), while, at higher species richness, the proportion of ecosystem transpiration (a productivity-related water flux) increased. While it is well established that leaf area index (LAI) controls ecosystem transpiration, we also identified the diversity of leaf nitrogen concentration among species in a community as a consistent predictor of ecosystem water vapor fluxes during daytime. The results provide evidence that, at the peak of the growing season, higher LAI and a lower percentage of bare ground at high plant diversity divert more of the available water to transpiration, a flux closely coupled with photosynthesis and productivity. Higher rates of transpiration presumably contribute to the positive effect of diversity on productivity.
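
The trait-based metrics mentioned above are simple weighted sums over the community. A hedged sketch with illustrative numbers (not the Jena Experiment data): the community weighted mean (CWM) and Rao's quadratic entropy, a common functional diversity index, for leaf nitrogen concentration:

```python
import numpy as np

# Hypothetical four-species community; abundances and trait values are
# illustrative only.
abundance = np.array([0.4, 0.3, 0.2, 0.1])    # relative abundances (sum to 1)
leaf_n = np.array([18.0, 25.0, 30.0, 22.0])   # leaf N concentration (mg/g)

cwm = float(abundance @ leaf_n)               # community weighted mean trait
d = np.abs(leaf_n[:, None] - leaf_n[None, :])  # pairwise trait distances
rao_q = float(abundance @ d @ abundance)      # Rao's quadratic entropy (diversity)
```

Metrics of this form, computed per monolith, are the candidate predictors screened for the most parsimonious model of the water vapor fluxes.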

  15. An overview of the recent approaches for terroir functional modelling, footprinting and zoning

    NASA Astrophysics Data System (ADS)

    Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.

    2014-11-01

    Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional to site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning at local and regional scales. This review will focus on three main areas of recent terroir research: (1) quantifying the influences of terroir components on plant growth

  16. A conditional Granger causality model approach for group analysis in functional magnetic resonance imaging.

    PubMed

    Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun

    2011-04-01

    Granger causality model (GCM) derived from multivariate vector autoregressive models of data has been employed to identify effective connectivity in the human brain with functional magnetic resonance imaging (fMRI) and to reveal complex temporal and spatial dynamics underlying a variety of cognitive processes. In the most recent fMRI effective connectivity measures, pair-wise GCM has commonly been applied based on single-voxel values or average values from special brain areas at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of fMRI data with GCM. To compare the effectiveness of our approach with traditional pair-wise GCM models, we applied a well-established conditional GCM to preselected time series of brain regions resulting from general linear model (GLM) and group spatial kernel independent component analysis of an fMRI data set in the temporal domain. Data sets consisting of one task-related and one resting-state fMRI were used to investigate connections among brain areas with the conditional GCM method. With the GLM-detected brain activation regions in the emotion-related cortex during the block design paradigm, the conditional GCM method was proposed to study the causality of the habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state data set, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results have further shown a particular interacting pattern of default mode network that can be characterized as both afferent and efferent influences on the medial prefrontal cortex and posterior cingulate cortex. 
These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive model can achieve greater accuracy
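
The core of any GCM variant is the comparison of residual variances between a restricted and a full vector autoregressive model. A minimal pairwise sketch on synthetic series (the conditional, multi-region version used in the paper adds further conditioning regressors to both models):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 2                       # samples and VAR model order
x = np.zeros(n)
y = np.zeros(n)
for t in range(p, n):                # y depends on past x; x ignores y
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.standard_normal()

def residual_var(target, predictors):
    """Variance of OLS residuals when regressing target on p lags of each predictor."""
    T = len(target)
    X = np.column_stack([s[p - k: T - k] for s in predictors
                         for k in range(1, p + 1)])
    Y = target[p:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return (Y - X @ beta).var()

def granger(src, dst):
    """Log-ratio Granger index: how much past src improves prediction of dst."""
    return float(np.log(residual_var(dst, [dst]) / residual_var(dst, [dst, src])))

gc_xy = granger(x, y)    # true causal direction: clearly positive
gc_yx = granger(y, x)    # no causation: near zero
```

At the group level, the paper applies this comparison to time series preselected by GLM or ICA, which standardizes the regions entering the model across subjects.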

  17. A conditional Granger causality model approach for group analysis in functional MRI

    PubMed Central

    Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J.; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M.; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun

    2011-01-01

    Granger causality model (GCM) derived from multivariate vector autoregressive models of data has been employed for identifying effective connectivity in the human brain with functional MR imaging (fMRI) and to reveal complex temporal and spatial dynamics underlying a variety of cognitive processes. In the most recent fMRI effective connectivity measures, pairwise GCM has commonly been applied based on single voxel values or average values from special brain areas at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of an fMRI data with GCM. To compare the effectiveness of our approach with traditional pairwise GCM models, we applied a well-established conditional GCM to pre-selected time series of brain regions resulting from general linear model (GLM) and group spatial kernel independent component analysis (ICA) of an fMRI dataset in the temporal domain. Datasets consisting of one task-related and one resting-state fMRI were used to investigate connections among brain areas with the conditional GCM method. With the GLM detected brain activation regions in the emotion related cortex during the block design paradigm, the conditional GCM method was proposed to study the causality of the habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state dataset, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results have further shown a particular interacting pattern of default mode network (DMN) that can be characterized as both afferent and efferent influences on the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). 
These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive (MVAR) model can achieve

  18. The Effect of Alternative Approaches to Design Instruction (Structural or Functional) on Students' Mental Models of Technological Design Processes

    ERIC Educational Resources Information Center

    Mioduser, David; Dagan, Osnat

    2007-01-01

    The study aimed to examine the relationship between alternative approaches towards problem solving/design teaching (structural or functional), students' mental modeling of the design process, and the quality of their solutions to design tasks. The "structural" approach emphasizes the need for an ordered and systematic learning of the design…

  19. An overview of the recent approaches to terroir functional modelling, footprinting and zoning

    NASA Astrophysics Data System (ADS)

    Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.

    2015-03-01

    Notions of terroir and their conceptualization through agro-environmental sciences have become popular in many parts of world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology and soil are the main environmental factors which make up the terroir effect on different scales. Often considered immutable culturally, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional-to-site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption of these technologies but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning on local and regional scales. 
This review will focus on two main areas of recent terroir research: (1) using new tools to unravel the biogeochemical cycles of both

  20. An overview of the recent approaches for terroir functional modelling, footprinting and zoning

    NASA Astrophysics Data System (ADS)

    Costantini, Edoardo; Emmanuelle, Vaudour; Jones, Gregory; Mocali, Stefano

    2014-05-01

    Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of world. Originally developed for wine, terroir is now investigated for fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional to site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and elaboration technologies has made the scale of study more valuable to the individual grower, resulting in greater adoption and application. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modeling approaches for terroir functional modeling, footprinting and zoning at local and regional scales. This review will focus on four main areas of recent terroir research: 1) quantifying the influences of terroir components on plant growth, fruit composition and quality, mostly examining climate-soil-water relationships; 2) the metagenomic approach as a new tool to unravel the biogeochemical cycles of both macro- and

  1. Modeling solvation effects in real-space and real-time within density functional approaches

    SciTech Connect

    Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea

    2015-10-14

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
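
The regularization idea can be seen in closed form: the potential of a normalized spherical Gaussian charge replaces the 1/r singularity with an error-function profile that stays finite at the origin. A minimal sketch in atomic units (the width `sigma` is an assumed free parameter):

```python
import math

def smeared_potential(q, r, sigma):
    """Electrostatic potential (atomic units) of a spherical Gaussian charge
    of total charge q and width sigma:  V(r) = q * erf(r / (sqrt(2) sigma)) / r.
    Unlike a point charge, it remains finite as r -> 0."""
    if r < 1e-12:
        return q * math.sqrt(2.0 / math.pi) / sigma   # analytic r -> 0 limit
    return q * math.erf(r / (math.sqrt(2.0) * sigma)) / r
```

Far from the charge the erf saturates to 1 and the point-charge Coulomb potential q/r is recovered, so distributing the apparent surface charges over such Gaussians changes the potential only near the cavity surface, exactly where the real-space grid would otherwise see a singularity.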

  2. Modeling solvation effects in real-space and real-time within density functional approaches.

    PubMed

    Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea

    2015-10-14

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.

  3. Modeling solvation effects in real-space and real-time within density functional approaches

    NASA Astrophysics Data System (ADS)

    Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea

    2015-10-01

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.

  4. MIRAGE: a functional genomics-based approach for metabolic network model reconstruction and its application to cyanobacteria networks.

    PubMed

    Vitkin, Edward; Shlomi, Tomer

    2012-11-29

    Genome-scale metabolic network reconstructions are considered a key step in quantifying the genotype-phenotype relationship. We present a novel gap-filling approach, MetabolIc Reconstruction via functionAl GEnomics (MIRAGE), which identifies missing network reactions by integrating metabolic flux analysis and functional genomics data. MIRAGE's performance is demonstrated on the reconstruction of metabolic network models of E. coli and Synechocystis sp. and validated via existing networks for these species. Then, it is applied to reconstruct genome-scale metabolic network models for 36 sequenced cyanobacteria amenable for constraint-based modeling analysis and specifically for metabolic engineering. The reconstructed network models are supplied via standard SBML files.
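    The gap-filling idea can be illustrated with a deliberately simplified, purely topological sketch; MIRAGE itself scores candidate reactions using flux analysis and functional-genomics data, and the reaction and metabolite names below are invented:

```python
def reachable(seeds, reactions):
    """Metabolites producible from `seeds` via reactions {name: (substrates, products)}."""
    met = set(seeds)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions.values():
            if set(subs) <= met and not set(prods) <= met:
                met |= set(prods)
                changed = True
    return met

def gap_fill(seeds, draft, candidates, target):
    """Add candidate reactions one at a time until `target` becomes producible."""
    model = dict(draft)
    added = []
    while target not in reachable(seeds, model):
        for name, rxn in candidates.items():
            if name not in model:
                trial = dict(model)
                trial[name] = rxn
                # keep the first candidate that strictly enlarges the reachable set
                if reachable(seeds, trial) > reachable(seeds, model):
                    model[name] = rxn
                    added.append(name)
                    break
        else:
            raise ValueError("target cannot be reached with the given candidates")
    return added

draft = {"r1": (["glc"], ["g6p"])}                          # draft network with a gap
candidates = {"r2": (["g6p"], ["f6p"]), "r3": (["f6p"], ["pyr"])}
print(gap_fill(["glc"], draft, candidates, "pyr"))          # → ['r2', 'r3']
```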

  5. Revealing the Functions of Tenascin-C in 3-D Breast Cancer Models Using Cell Biological and in Silico Approaches

    DTIC Science & Technology

    2008-03-01

    PRINCIPAL INVESTIGATOR: Agne Tarasevicuite. GRANT NUMBER: W81XWH... [Garbled report-form fields omitted.] ...cancer development and progression, yet its role in this disease remains obscure. To investigate the effects of stromal TN-C on normal human mammary...

  6. An Approach toward the Development of a Functional Encoding Model of Short Term Memory during Reading.

    ERIC Educational Resources Information Center

    Herndon, Mary Anne

    1978-01-01

    In a model of the functioning of short term memory, the encoding of information for subsequent storage in long term memory is simulated. In the encoding process, semantically equivalent paragraphs are detected for recombination into a macro information unit. (HOD)

  8. Integrative approaches for modeling regulation and function of the respiratory system.

    PubMed

    Ben-Tal, Alona; Tawhai, Merryn H

    2013-01-01

    Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system, which comprises the lungs and the neural circuitry that controls their ventilation, have been derived using simplifying assumptions to compartmentalize each component of the system and to define the interactions between components. These full system models often rely, through necessity, on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially distributed models of ventilation and perfusion, or multicircuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained.

  9. Integrative approaches for modeling regulation and function of the respiratory system

    PubMed Central

    Ben-Tal, Alona

    2013-01-01

    Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system, which comprises the lungs and the neural circuitry that controls their ventilation, have been derived using simplifying assumptions to compartmentalise each component of the system and to define the interactions between components. These full system models often rely, through necessity, on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially-distributed models of ventilation and perfusion, or multi-circuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. PMID:24591490

  10. Functionally relevant climate variables for arid lands: A climatic water deficit approach for modelling desert shrub distributions

    Treesearch

    Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers

    2015-01-01

    We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...

  11. A new approach for determining fully empirical altimeter wind speed model functions

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Challenor, Peter G.

    1994-01-01

    A statistical technique is developed for determining fully empirical model functions relating altimeter backscatter (sigma(sub 0)) measurements to near-surface neutral stability wind speed. By assuming that sigma(sub 0) varies monotonically and uniquely with wind speed, the method requires knowledge only of the separate, rather than joint, distribution functions of sigma(sub 0) and wind speed. Analytic simplifications result from using a Weibull distribution to approximate the global ocean wind speed distribution; several different wind data sets are used to demonstrate the validity of the Weibull approximation. The technique has been applied to 1 year of Geosat data. Validation of the new and historical model functions using an independent buoy data set demonstrates that the present model function not only has small overall bias and root mean square (RMS) errors, but also yields smaller systematic error trends with wind speed and pseudowave age than previously published models. The present analysis suggests that generally accurate altimeter model functions can be derived without the use of colocated measurements and without the additional significant wave height information measured by the altimeter.
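    The central trick, requiring only the separate marginal distributions rather than joint colocated measurements, can be sketched as a quantile-matching procedure; the Weibull parameters and the synthetic sigma(sub 0) sample below are placeholders:

```python
import numpy as np

def weibull_quantile(q, lam, k):
    # Inverse CDF of the Weibull wind-speed distribution F(u) = 1 - exp(-(u/lam)**k).
    return lam * (-np.log1p(-q)) ** (1.0 / k)

def empirical_model_function(sigma0_samples, lam, k):
    """Tabulate wind speed against sigma0 by matching marginal distributions.

    Assumes sigma0 decreases monotonically and uniquely with wind speed, so the
    q-th quantile of sigma0 pairs with the (1 - q)-th wind-speed quantile.
    """
    s = np.sort(np.asarray(sigma0_samples))
    q = (np.arange(s.size) + 0.5) / s.size     # empirical CDF positions of sigma0
    u = weibull_quantile(1.0 - q, lam, k)      # matched Weibull wind-speed quantiles
    return s, u                                # tabulated model function U(sigma0)

rng = np.random.default_rng(0)
sigma0 = rng.normal(11.0, 1.5, size=1000)      # synthetic sigma0 sample (dB)
s_tab, u_tab = empirical_model_function(sigma0, lam=8.0, k=2.2)
```

    The resulting table is monotonically decreasing, as the assumed backscatter-wind relationship requires; a real application would replace the synthetic sample with observed sigma(sub 0) values and fit the Weibull parameters to wind climatology.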

  13. Relation between transverse momentum dependent distribution functions and parton distribution functions in the covariant parton model approach

    SciTech Connect

    Efremov, A. V.; Schweitzer, P.; Teryaev, O. V.; Zavada, P.

    2011-03-01

    We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input we predict the x- and pT-dependence of all twist-2 T-even TMDs.

  14. Optimization of global model composed of radial basis functions using the term-ranking approach

    SciTech Connect

    Cai, Peng; Tao, Chao; Liu, Xiao-Jun

    2014-03-15

    A term-ranking method is put forward to optimize the global model composed of radial basis functions to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to real voice signal shows that the optimized global model can capture more predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
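    A minimal sketch of ranking radial-basis terms: here each term is scored by how much it alone reduces the fitting error on a logistic-map series, a simplification of the paper's criterion, with arbitrary centers and width:

```python
import numpy as np

def rbf_design(x, centers, width):
    # Each column is one Gaussian radial-basis "term" evaluated on the data.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rank_terms(x, y, centers, width):
    # Score each term by how much it alone reduces the squared fitting error,
    # then return term indices from most to least useful.
    Phi = rbf_design(x, centers, width)
    total = np.sum((y - y.mean()) ** 2)
    scores = []
    for j in range(Phi.shape[1]):
        w, *_ = np.linalg.lstsq(Phi[:, [j]], y, rcond=None)
        scores.append(total - np.sum((y - Phi[:, [j]] @ w) ** 2))
    return np.argsort(scores)[::-1]

# Toy chaos-like series: logistic map, one-step-ahead prediction from x_t.
xs = [0.3]
for _ in range(400):
    xs.append(3.9 * xs[-1] * (1.0 - xs[-1]))
x, y = np.array(xs[:-1])[:, None], np.array(xs[1:])
centers = np.linspace(0.0, 1.0, 8)[:, None]
order = rank_terms(x, y, centers, width=0.15)   # candidate terms, best first
```

    Truncating the model to the top-ranked terms is what trades model size against predictability in the approach described above.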

  16. Challenges and implications of global modeling approaches that are alternatives to using constant plant functional types

    NASA Astrophysics Data System (ADS)

    Bodegom, P. V.

    2015-12-01

    In recent years a number of approaches have been developed to provide alternatives to the use of plant functional types (PFTs) with constant vegetation characteristics for simulating vegetation responses to climate changes. In this presentation, an overview of those approaches and their challenges is given. Some new approaches aim at removing PFTs altogether by determining the combination of vegetation characteristics that would fit local conditions best. Others describe the variation in traits within PFTs as a function of environmental drivers, based on community assembly principles. In the first approach, after an equilibrium has been established, vegetation composition and its functional attributes can change by allowing the emergence of a new type that is more fit. In the latter case, changes in vegetation attributes in space and time are assumed to be the result of intraspecific variation, genetic adaptation and species turnover, without quantifying their respective importance. Hence, it is assumed that, by whatever mechanism, the community as a whole responds without major time lags to changes in environmental drivers. Recently, we showed that intraspecific variation is highly species- and trait-specific and that none of the current hypotheses on drivers of this variation seems to hold. Also genetic adaptation varies considerably among species, and it is uncertain whether it will be fast enough to cope with climate change. Species turnover within a community is especially fast in herbaceous communities, but much slower in forest communities. Hence, it seems that the assumptions made may not hold for forested ecosystems, but solutions to deal with this do not yet exist. Even though the responsiveness of vegetation to environmental change may thus be overestimated, we showed that, upon implementation of trait-environment relationships, major changes in global vegetation distribution are projected, to extents similar to those projected without such responsiveness.

  17. Operator function modeling: An approach to cognitive task analysis in supervisory control systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1987-01-01

    In a study of models of operators in complex, automated space systems, an operator function model (OFM) methodology was extended to represent cognitive as well as manual operator activities. Development continued on a software tool called OFMdraw, which facilitates construction of an OFM by permitting creation of a heterarchic network of nodes and arcs. Emphasis was placed on development of OFMspert, an expert system designed both to model human operators and to assist real human operators. The system uses a blackboard method of problem solving to make an on-line representation of operator intentions, called ACTIN (actions interpreter).

  18. MIRAGE: a functional genomics-based approach for metabolic network model reconstruction and its application to cyanobacteria networks

    PubMed Central

    2012-01-01

    Genome-scale metabolic network reconstructions are considered a key step in quantifying the genotype-phenotype relationship. We present a novel gap-filling approach, MetabolIc Reconstruction via functionAl GEnomics (MIRAGE), which identifies missing network reactions by integrating metabolic flux analysis and functional genomics data. MIRAGE's performance is demonstrated on the reconstruction of metabolic network models of E. coli and Synechocystis sp. and validated via existing networks for these species. Then, it is applied to reconstruct genome-scale metabolic network models for 36 sequenced cyanobacteria amenable for constraint-based modeling analysis and specifically for metabolic engineering. The reconstructed network models are supplied via standard SBML files. PMID:23194418

  19. Characteristic-function approach to the Jaynes-Cummings-model revivals

    NASA Astrophysics Data System (ADS)

    Pimenta, Hudson; James, Daniel F. V.

    2016-11-01

    A two-level system interacting with an electromagnetic mode experiences inversion collapses and revivals. They are an indirect signature of the field quantization and also hold information about the mode. Thus, they may be harnessed for quantum-state reconstruction. In this work, we investigate the inversion via the characteristic function of the field mode photon-number distribution. The characteristic function is the spectral representation of the photon-number probability distribution. Exploiting the characteristic function's periodicity, we find that the inversion can be understood as the result of interference between a set of structures akin to a free quantum-mechanical wave packet, with each structure corresponding to a snapshot of this packet at a different degree of dispersion. The Fourier representation of each packet determines the photon-number distribution. We also derive an integral equation whose solution yields the underlying packets. This approach allows the retrieval of the field photon-number distribution directly from the inversion under fairly general conditions and paves the way for a partial tomography technique.
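    The spectral relationship can be made concrete: sampling the characteristic function phi(theta) = sum_n p_n exp(i n theta) on a uniform grid over one period and applying a discrete Fourier transform recovers the photon-number distribution exactly. Here a Poisson distribution stands in for a coherent-state field:

```python
import numpy as np
from math import factorial

def characteristic_function(p, thetas):
    # phi(theta) = sum_n p_n * exp(i*n*theta): spectral representation of p_n.
    n = np.arange(len(p))
    return (p[None, :] * np.exp(1j * np.outer(thetas, n))).sum(axis=1)

# Poisson photon-number distribution (coherent state, mean photon number 4),
# truncated at N photon numbers; the truncated tail is negligible.
mean_n, N = 4.0, 64
n = np.arange(N)
p = np.exp(-mean_n) * mean_n ** n / np.array([factorial(k) for k in n], float)

# Sample phi on N points of one 2*pi period; the DFT then returns N * p_n.
thetas = 2.0 * np.pi * n / N
phi = characteristic_function(p, thetas)
p_recovered = np.real(np.fft.fft(phi)) / N
```

    With NumPy's DFT sign convention (a negative exponent in the forward transform), the forward FFT of the sampled characteristic function returns N times the probabilities, which is why the division by N recovers them.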

  20. A Data-Driven Approach to Reverse Engineering Customer Engagement Models: Towards Functional Constructs

    PubMed Central

    de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo

    2014-01-01

    Online consumer behavior in general, and online customer engagement with brands in particular, has become a major focus of research activity, fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven, and much debate about the concept of Customer Engagement and its related constructs persists in the literature. In this paper, we aim to propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behaviors using questionnaire data or studies investigating other types of human behaviors. The method we propose contains five main stages: symbolic regression analysis, graph building, community detection, evaluation of results and, finally, investigation of directed cycles and common feedback loops. The ‘communities’ of questionnaire items that emerge from our community detection method form possible ‘functional constructs’ inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such ‘functional constructs’, suggesting that the method proposed here could be adopted as a new data-driven way of human behavior modeling. PMID:25036766
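    The community-detection stage can be illustrated with a simplified stand-in: items are linked when their responses correlate strongly, and connected components play the role of candidate 'functional constructs'. The paper's pipeline uses symbolic regression and modularity-based community detection instead, and the data below are synthetic:

```python
import numpy as np

def item_communities(responses, threshold=0.5):
    """Group questionnaire items whose |correlation| exceeds `threshold`.

    Build a graph over items (columns of `responses`) and return its
    connected components as candidate 'functional constructs'.
    """
    corr = np.corrcoef(responses.T)
    n = corr.shape[0]
    adj = {i: {j for j in range(n) if j != i and abs(corr[i, j]) >= threshold}
           for i in range(n)}
    seen, communities = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:                      # depth-first traversal of one component
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        communities.append(sorted(comp))
    return communities

rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=200), rng.normal(size=200)
# Synthetic questionnaire: items 0-2 load on factor 1, items 3-4 on factor 2.
X = np.column_stack([f1, f1, f1, f2, f2]) + 0.3 * rng.normal(size=(200, 5))
communities = item_communities(X)         # items 0-2 and 3-4 form two groups
```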

  1. A data-driven approach to reverse engineering customer engagement models: towards functional constructs.

    PubMed

    de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo

    2014-01-01

    Online consumer behavior in general, and online customer engagement with brands in particular, has become a major focus of research activity, fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven, and much debate about the concept of Customer Engagement and its related constructs persists in the literature. In this paper, we aim to propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behaviors using questionnaire data or studies investigating other types of human behaviors. The method we propose contains five main stages: symbolic regression analysis, graph building, community detection, evaluation of results and, finally, investigation of directed cycles and common feedback loops. The 'communities' of questionnaire items that emerge from our community detection method form possible 'functional constructs' inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such 'functional constructs', suggesting that the method proposed here could be adopted as a new data-driven way of human behavior modeling.

  2. Simulation model based approach for long exposure atmospheric point spread function reconstruction for laser guide star multiconjugate adaptive optics.

    PubMed

    Gilles, Luc; Correia, Carlos; Véran, Jean-Pierre; Wang, Lianqi; Ellerbroek, Brent

    2012-11-01

    This paper discusses an innovative simulation model based approach for long exposure atmospheric point spread function (PSF) reconstruction in the context of laser guide star (LGS) multiconjugate adaptive optics (MCAO). The approach is inspired by the classical scheme developed by Véran et al. [J. Opt. Soc. Am. A 14, 3057 (1997)] and Flicker et al. [Astron. Astrophys. 400, 1199 (2003)] and reconstructs the long exposure optical transfer function (OTF), i.e., the Fourier-transformed PSF, as a product of separate long-exposure tip/tilt-removed and tip/tilt OTFs, each estimated by postprocessing system and simulation telemetry data. Sample enclosed energy results assessing reconstruction accuracy are presented for the Thirty Meter Telescope LGS MCAO system currently under design and show that percent level absolute and differential photometry over a 30 arcsec diameter field of view are achievable provided the simulation model faithfully represents the real system.
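    The reconstruction principle, a long-exposure OTF assembled as the product of a tip/tilt-removed factor and a tip/tilt factor and then inverse-transformed, can be sketched with Gaussian stand-ins for the telemetry-derived OTFs:

```python
import numpy as np

# Long-exposure PSF reconstruction sketch: the total optical transfer function
# (OTF) is modelled as the product of a tip/tilt-removed OTF and a tip/tilt
# OTF, then inverse Fourier transformed to give the PSF. The Gaussian shapes
# and widths below are assumptions replacing the telemetry-derived factors.
n = 128
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
f2 = fx ** 2 + fy ** 2
otf_ttr = np.exp(-f2 / (2 * 0.05 ** 2))  # tip/tilt-removed OTF (assumed shape)
otf_tt = np.exp(-f2 / (2 * 0.10 ** 2))   # tip/tilt blurring OTF (assumed shape)
psf = np.fft.fftshift(np.real(np.fft.ifft2(otf_ttr * otf_tt)))
psf /= psf.sum()  # normalized long-exposure PSF, peaked at the array centre
```

    Multiplying OTFs in the frequency domain corresponds to convolving the two blurring effects in the image domain, which is what makes the factorization convenient.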

  3. A Functional Generalization of the Field-Theoretical Renormalization Group Approach for the Single-Impurity Anderson Model

    NASA Astrophysics Data System (ADS)

    Freire, Hermann; Corrêa, Eberth

    2012-02-01

    We apply a functional implementation of the field-theoretical renormalization group (RG) method up to two loops to the single-impurity Anderson model. To achieve this, we follow a RG strategy similar to that proposed by Vojta et al. (in Phys. Rev. Lett. 85:4940, 2000), which consists of defining a soft ultraviolet regulator in the space of Matsubara frequencies for the renormalized Green's function. Then we proceed to derive analytically and solve numerically integro-differential flow equations for the effective couplings and the quasiparticle weight of the present model, which fully treat the interplay of particle-particle and particle-hole parquet diagrams and the effect of the two-loop self-energy feedback into them. We show that our results correctly reproduce accurate numerical renormalization group data for weak to slightly moderate interactions. These results are in excellent agreement with other functional Wilsonian RG works available in the literature. Since the field-theoretical RG method turns out to be easier to implement at higher loops than the Wilsonian approach, higher-order calculations within the present approach could improve further the results for this model at stronger couplings. We argue that the present RG scheme could thus offer a possible alternative to other functional RG methods to describe electronic correlations within this model.

  4. A HIERARCHICAL FUNCTIONAL DATA ANALYTIC APPROACH FOR ANALYZING PHYSIOLOGICALLY BASED PHARMACOKINETIC MODELS.

    PubMed

    Mandal, Siddhartha; Sen, Pranab K; Peddada, Shyamal D

    2013-05-01

    Ordinary differential equation (ODE) based models find application in a wide variety of biological and physiological phenomena. For instance, they arise in the description of gene regulatory networks, the study of viral dynamics and other infectious diseases, etc. In the field of toxicology, they are used in physiologically based pharmacokinetic (PBPK) models for describing absorption, distribution, metabolism and excretion (ADME) of a chemical in vivo. Knowledge of the model parameters is important for understanding the mechanism of action of a chemical; the parameters are often estimated using non-linear least-squares methodology. However, there are several challenges associated with the usual methodology. Using functional data analytic methodology, in this article we develop a general framework for drawing inferences on parameters in models described by a system of differential equations. The proposed methodology takes into account variability between and within experimental units. The performance of the proposed methodology is evaluated using a simulation study and data obtained from a benzene inhalation study. We also describe an R-based software package developed for this purpose.
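    The conventional non-linear least-squares estimation that the functional-data framework generalizes can be sketched on a toy one-compartment elimination model; all parameter values and data below are synthetic:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def simulate(k, times, c0=10.0):
    # One-compartment elimination dC/dt = -k*C, solved as an ODE rather than
    # via its closed form, mirroring how full PBPK systems are handled.
    sol = solve_ivp(lambda t, c: -k * c, (0.0, times[-1]), [c0],
                    t_eval=times, rtol=1e-8, atol=1e-10)
    return sol.y[0]

times = np.linspace(0.0, 8.0, 20)
rng = np.random.default_rng(2)
data = simulate(0.45, times) + 0.05 * rng.normal(size=times.size)  # synthetic study

# Non-linear least squares over the single rate constant k.
loss = lambda k: float(np.sum((simulate(k, times) - data) ** 2))
k_hat = minimize_scalar(loss, bounds=(0.01, 2.0), method="bounded").x
```

    A single experimental unit and a single parameter keep the sketch short; the article's contribution is precisely to extend this kind of fit to between- and within-unit variability.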

  5. Comparative study of approaches based on the differential critical region and correlation functions in modeling phase-transformation kinetics.

    PubMed

    Tomellini, Massimo; Fanfoni, Massimo

    2014-11-01

    The statistical methods exploiting the "Correlation-Functions" or the "Differential-Critical-Region" are both suitable for describing phase transformation kinetics ruled by nucleation and growth. We present a critical analysis of these two approaches, with particular emphasis on transformations ruled by diffusional growth which cannot be described by the Kolmogorov-Johnson-Mehl-Avrami (KJMA) theory. In order to bridge the gap between these two methods, the conditional probability functions entering the "Differential-Critical-Region" approach are determined in terms of correlation functions. The formulation of these probabilities by means of cluster expansion is also derived, which improves the accuracy of the computation. The model is applied to 2D and 3D parabolic growths occurring at a constant value of either the actual or the phantom-included nucleation rate. Computer simulations have been employed to corroborate the theoretical modeling. The contribution to the kinetics of phantom overgrowth is estimated and it is found to be of a few percent in the case of a constant value of the actual nucleation rate. It is shown that for a parabolic growth law both approaches do not provide a closed-form solution of the kinetics. In this respect, the two methods are equivalent and the longstanding overgrowth phenomenon, which limits the KJMA theory, does not admit an exact analytical solution.
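    For a constant nucleation rate I and parabolic growth R(t) = sqrt(D*t) in 3D, the phantom-included KJMA estimate has a closed form with Avrami exponent 5/2; a sketch with arbitrary rate constants:

```python
import numpy as np

def kjma_parabolic(t, I, D):
    """Phantom-included KJMA transformed fraction for constant nucleation
    rate I and 3D diffusional (parabolic) growth R(t) = sqrt(D*t).

    Extended volume: V_ext(t) = I * (4*pi/3) * D**1.5 * (2/5) * t**2.5,
    and X(t) = 1 - exp(-V_ext(t)); the Avrami exponent is therefore 5/2.
    This is the naive estimate whose phantom-overgrowth error the paper
    quantifies at the level of a few percent.
    """
    v_ext = I * (4.0 * np.pi / 3.0) * D ** 1.5 * (2.0 / 5.0) * t ** 2.5
    return 1.0 - np.exp(-v_ext)

t = np.array([1.0, 2.0, 4.0, 8.0])
x = kjma_parabolic(t, I=1e-3, D=0.1)   # small fractions early in the transformation
```

    Plotting ln(-ln(1 - X)) against ln(t) for this expression gives a straight line of slope 5/2, the standard Avrami check.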

  6. Comparative study of approaches based on the differential critical region and correlation functions in modeling phase-transformation kinetics

    NASA Astrophysics Data System (ADS)

    Tomellini, Massimo; Fanfoni, Massimo

    2014-11-01

    The statistical methods exploiting the "Correlation-Functions" or the "Differential-Critical-Region" are both suitable for describing phase transformation kinetics ruled by nucleation and growth. We present a critical analysis of these two approaches, with particular emphasis on transformations ruled by diffusional growth which cannot be described by the Kolmogorov-Johnson-Mehl-Avrami (KJMA) theory. In order to bridge the gap between these two methods, the conditional probability functions entering the "Differential-Critical-Region" approach are determined in terms of correlation functions. The formulation of these probabilities by means of cluster expansion is also derived, which improves the accuracy of the computation. The model is applied to 2D and 3D parabolic growths occurring at a constant value of either the actual or the phantom-included nucleation rate. Computer simulations have been employed to corroborate the theoretical modeling. The contribution to the kinetics of phantom overgrowth is estimated and it is found to be of a few percent in the case of a constant value of the actual nucleation rate. It is shown that for a parabolic growth law both approaches do not provide a closed-form solution of the kinetics. In this respect, the two methods are equivalent and the longstanding overgrowth phenomenon, which limits the KJMA theory, does not admit an exact analytical solution.

  7. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    SciTech Connect

    Salazar, Ramon B. E-mail: hilatikh@purdue.edu; Appenzeller, Joerg; Ilatikhameneh, Hesameddin E-mail: hilatikh@purdue.edu; Rahman, Rajib; Klimeck, Gerhard

    2015-10-28

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that provides deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.

  8. Validity of a power law approach to model tablet strength as a function of compaction pressure.

    PubMed

    Kloefer, Bastian; Henschel, Pascal; Kuentz, Martin

    2010-03-01

    Designing quality into dosage forms should not be based only on qualitative or purely heuristic relations. A knowledge space must be generated, in which at least some mechanistic understanding is included. This is of particular interest for critical dosage form parameters like the strength of tablets. In line with this consideration, the scope of the work is to explore the validity range of a theoretically derived power law for the tensile strength of tablets. Different grades of microcrystalline cellulose and lactose, as well as mixtures thereof, were used to compress model tablets. The power law was found to hold true in a low pressure range, which agreed with theoretical expectation. This low pressure range depended on the individual material characteristics, but as a rule of thumb, tablets having a porosity of more than about 30% or compressed below 100 MPa were generally well explained by the tensile strength relationship. Tablets at higher densities were less adequately described by the theory that is based on large-scale heterogeneity of the relevant contact points in the compact. Tablets with density close to unity therefore require other theoretical approaches. More research is needed to understand tablet strength in a wider range of compaction pressures.
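    The validity-range caveat can be made concrete: fitting a power law sigma_t = a * P**b only within the low-pressure window and then extrapolating shows how measurements above ~100 MPa fall below the power-law prediction. The data below are synthetic, for illustration only:

```python
import numpy as np

# Tensile strength sigma_t = a * P**b fitted on log-log axes, but only inside
# the low-pressure validity window (roughly porosity > ~30%, i.e. compaction
# pressures below ~100 MPa here). All numbers are invented illustrations.
P = np.array([25.0, 50.0, 75.0, 100.0, 150.0, 200.0])   # compaction pressure, MPa
sigma = np.array([0.8, 1.6, 2.4, 3.1, 3.8, 4.1])        # tensile strength, MPa

mask = P <= 100.0                                        # power-law validity range
b, log_a = np.polyfit(np.log(P[mask]), np.log(sigma[mask]), 1)
a = np.exp(log_a)
pred = a * P ** b    # extrapolating past 100 MPa overpredicts the measured strength
```

    The fit is accurate inside the window and increasingly optimistic outside it, mirroring the paper's finding that denser tablets need a different theoretical treatment.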

  9. A Comprehensive Modelling Approach for the Neutral Atmospheric Boundary Layer: Consistent Inflow Conditions, Wall Function and Turbulence Model

    NASA Astrophysics Data System (ADS)

    Parente, Alessandro; Gorlé, Catherine; van Beeck, Jeroen; Benocci, Carlo

    2011-09-01

    We report on a novel approach for the Reynolds-averaged Navier-Stokes (RANS) modelling of the neutral atmospheric boundary layer (ABL), using the standard k-ε turbulence model. A new inlet condition for turbulent kinetic energy is analytically derived from the solution of the k-ε model transport equations, resulting in a consistent set of fully developed inlet conditions for the neutral ABL. A modification of the standard k-ε model is also employed to ensure consistency between the inlet conditions and the turbulence model. In particular, the turbulence model constant Cμ is generalized as a location-dependent parameter, and a source term is introduced in the transport equation for the turbulent dissipation rate. The application of the proposed methodology to cases involving obstacles in the flow is made possible through an algorithm that automatically switches the turbulence model formulation when going from the region where the ABL is undisturbed to the region directly affected by the building. Finally, the model is completed with a slightly modified version of the Richards and Hoxey rough-wall boundary condition. The methodology is implemented and tested in the commercial code Ansys Fluent 12.1. Results are presented for a neutral boundary layer over flat terrain and for the flow around a single building immersed in an ABL.
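
    The baseline that such consistency arguments start from is the set of Richards-Hoxey fully developed inlet profiles, in which the turbulent kinetic energy is constant with height (the paper's contribution is precisely to relax this through a location-dependent Cμ). A sketch with illustrative values:

```python
import numpy as np

# Richards-Hoxey fully developed inlet profiles for a neutral ABL
# (standard k-epsilon constants; roughness and friction velocity are
# illustrative, not taken from the paper).
kappa, C_mu = 0.41, 0.09
z0     = 0.03    # aerodynamic roughness length [m]
u_star = 0.5     # friction velocity [m/s]

z = np.linspace(0.1, 100.0, 200)                 # heights above ground [m]
u = u_star / kappa * np.log((z + z0) / z0)       # log-law mean velocity
k = np.full_like(z, u_star**2 / np.sqrt(C_mu))   # TKE: constant in height
eps = u_star**3 / (kappa * (z + z0))             # dissipation rate
```

    These three profiles are an exact solution of the standard k-ε transport equations only when k, ε, and the wall function are kept mutually consistent, which is the crux of the paper.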

  10. Assessment of Safety and Functional Efficacy of Stem Cell-Based Therapeutic Approaches Using Retinal Degenerative Animal Models

    PubMed Central

    Lin, Tai-Chi; Zhu, Danhong; Hinton, David R.; Clegg, Dennis O.; Humayun, Mark S.

    2017-01-01

    Dysfunction and death of retinal pigment epithelium (RPE) and/or photoreceptors can lead to irreversible vision loss. The eye represents an ideal microenvironment for stem cell-based therapy. It is considered an “immune privileged” site, and the number of cells needed for therapy is relatively low for the area of focused vision (macula). Further, surgical placement of stem cell-derived grafts (RPE, retinal progenitors, and photoreceptor precursors) into the vitreous cavity or subretinal space has been well established. For preclinical tests, assessments of stem cell-derived graft survival and functionality are conducted in animal models by various noninvasive approaches and imaging modalities. In vivo experiments conducted in animal models based on replacing photoreceptors and/or RPE cells have shown survival and functionality of the transplanted cells, rescue of the host retina, and improvement of visual function. Based on the positive results obtained from these animal experiments, human clinical trials are being initiated. Despite such progress in stem cell research, ethical, regulatory, safety, and technical difficulties still remain a challenge for the transformation of this technique into a standard clinical approach. In this review, the current status of preclinical safety and efficacy studies for retinal cell replacement therapies conducted in animal models will be discussed. PMID:28928775

  11. Priming effect and microbial diversity in ecosystem functioning and response to global change: a modeling approach using the SYMPHONY model.

    PubMed

    Perveen, Nazia; Barot, Sébastien; Alvarez, Gaël; Klumpp, Katja; Martin, Raphael; Rapaport, Alain; Herfurth, Damien; Louault, Frédérique; Fontaine, Sébastien

    2014-04-01

    Integration of the priming effect (PE) in ecosystem models is crucial to better predict the consequences of global change on ecosystem carbon (C) dynamics and its feedbacks on climate. Over the last decade, many attempts have been made to model PE in soil. However, PE has not yet been incorporated into any ecosystem models. Here, we build plant/soil models to explore how PE and microbial diversity influence soil/plant interactions and ecosystem C and nitrogen (N) dynamics in response to global change (elevated CO2 and atmospheric N deposition). Our results show that plant persistence, soil organic matter (SOM) accumulation, and low N leaching in undisturbed ecosystems rely on a fine adjustment of microbial N mineralization to plant N uptake. This adjustment can be modeled in the SYMPHONY model by considering the destruction of SOM through PE, and the interactions between two microbial functional groups: SOM decomposers and SOM builders. After estimation of parameters, SYMPHONY provided realistic predictions on forage production, soil C storage and N leaching for a permanent grassland. Consistent with recent observations, SYMPHONY predicted a CO2-induced modification of soil microbial communities leading to an intensification of SOM mineralization and a decrease in the soil C stock. SYMPHONY also indicated that atmospheric N deposition may promote SOM accumulation via changes in the structure and metabolic activities of microbial communities. Collectively, these results suggest that the PE and functional role of microbial diversity may be incorporated in ecosystem models with a few additional parameters, improving accuracy of predictions. © 2013 John Wiley & Sons Ltd.
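
    A toy two-guild carbon model conveys the decomposer/builder idea in a few lines. The kinetics, parameters, and state variables below are invented for illustration only; they are not the SYMPHONY equations.

```python
import numpy as np

# Hypothetical two-guild soil carbon toy model (explicit Euler integration).
# S: SOM pool, D: decomposer biomass, B: builder biomass [arbitrary C units].
dt, steps = 0.01, 20000
S, D, B = 100.0, 1.0, 1.0
F = 2.0                                # fresh-C input rate (drives priming)

for _ in range(steps):
    dec   = 0.02 * S * D / (10.0 + S)  # SOM mineralization by decomposers
    build = 0.05 * F * B               # SOM formation by builders from fresh C
    dS = build - dec
    dD = 0.3 * dec - 0.05 * D          # growth on mined SOM minus turnover
    dB = 0.3 * build - 0.05 * B
    S += dt * dS
    D += dt * dD
    B += dt * dB
```

    Even in this caricature, whether SOM accumulates or declines is set by the balance between the two guilds, which is the mechanism the abstract highlights.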

  12. 2-D Modeling of Nanoscale MOSFETs: Non-Equilibrium Green's Function Approach

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan

    2001-01-01

    We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Electron-electron interaction is treated within the Hartree approximation by solving the NEGF and Poisson equations self-consistently. For the calculations presented here, parallelization is performed by distributing the solution of the NEGF equations to various processors, energy-wise. We present simulations of the "benchmark" MIT 25nm and 90nm MOSFETs and compare our results to those from the drift-diffusion simulator and the quantum-corrected results available. In the 25nm MOSFET, the channel length is less than ten times the electron wavelength, and the electron scattering time is comparable to its transit time. Our main results are: (1) Simulated drain subthreshold current characteristics are shown, where the potential profiles are calculated self-consistently by the corresponding simulation methods. The current predicted by our quantum simulation has a smaller subthreshold slope of the Vg dependence, which results in a higher threshold voltage. (2) When gate oxide thickness is less than 2 nm, gate oxide leakage is a primary factor determining the off-current of a MOSFET. (3) Using our 2-D NEGF simulator, we found several ways to drastically decrease oxide leakage current without compromising drive current. (4) The quantum mechanically calculated electron density is much smaller than the background doping density in the polysilicon gate region near the oxide interface. This creates an additional effective gate voltage. Different ways to include this effect approximately will be discussed.
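
    The NEGF machinery is easiest to see in a 1D tight-binding toy: build the device Hamiltonian, attach lead self-energies, invert for the Green's function, and form the transmission. This textbook sketch (not the paper's 2-D MOSFET simulator) uses a perfect chain, for which the transmission inside the band is unity:

```python
import numpy as np

# 1D tight-binding NEGF transmission through a perfect chain.
# Energies are in units of the hopping t = 1; onsite energies are zero.
t = 1.0
N = 8                                    # number of device sites
H = np.diag(-t * np.ones(N - 1), 1)
H = H + H.T                              # nearest-neighbor chain Hamiltonian

def transmission(E, eta=1e-9):
    # Retarded surface Green's function of a semi-infinite 1D lead (|E| < 2t)
    g = (E - 1j * np.sqrt(4 * t**2 - E**2)) / (2 * t**2)
    sigma = t**2 * g                     # lead self-energy at the contact site
    Sig_L = np.zeros((N, N), complex); Sig_L[0, 0] = sigma
    Sig_R = np.zeros((N, N), complex); Sig_R[-1, -1] = sigma
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - Sig_L - Sig_R)
    Gam_L = 1j * (Sig_L - Sig_L.conj().T)    # broadening matrices
    Gam_R = 1j * (Sig_R - Sig_R.conj().T)
    return np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real

T0 = transmission(0.5)   # inside the band a perfect chain transmits fully
```

    The full device simulator replaces H with a 2-D effective-mass Hamiltonian and iterates this solve with Poisson's equation, but the Green's-function structure is the same.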

  14. A Density Functional Approach to Polarizable Models: A Kim-Gordon-Response Density Interaction Potential for Molecular Simulations

    SciTech Connect

    Tabacchi, G; Hutter, J; Mundy, C

    2005-04-07

    A combined linear response--frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair potentials, and the parameterization protocol involves only calculations on properly chosen subsystems. The method is applied to a series of alkali halides in different physical phases and reproduces experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.

  15. Integration of kinetic modeling and desirability function approach for multi-objective optimization of UASB reactor treating poultry manure wastewater.

    PubMed

    Yetilmezsoy, Kaan

    2012-08-01

    An integrated multi-objective optimization approach within the framework of nonlinear regression-based kinetic modeling and the desirability function was proposed to optimize an up-flow anaerobic sludge blanket (UASB) reactor treating poultry manure wastewater (PMW). The Chen-Hashimoto and modified Stover-Kincannon models were applied to the UASB reactor for determination of bio-kinetic coefficients. A new empirical formulation of the volumetric organic loading rate was derived for the first time for PMW to estimate the dimensionless kinetic parameter (K) in the Chen-Hashimoto model. The maximum substrate utilization rate constant and the saturation constant were predicted as 11.83 g COD/L/day and 13.02 g COD/L/day, respectively, for the modified Stover-Kincannon model. Based on four process-related variables, three objective functions including a detailed bio-economic model were derived and optimized by using a LOQO/AMPL algorithm, with a maximum overall desirability of 0.896. The proposed optimization scheme was shown to be a useful tool for optimizing several responses of the UASB reactor simultaneously.
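
    The desirability machinery can be sketched in a few lines: each response is mapped onto [0, 1] by a Derringer-type desirability function, and the overall desirability is their geometric mean. The response names, bounds, and values below are hypothetical, not the paper's:

```python
import numpy as np

# Derringer-type larger-is-better desirability and geometric-mean overall
# desirability (generic illustration with invented responses).
def d_max(y, low, target, r=1.0):
    """Desirability rising from 0 at `low` to 1 at `target`."""
    return np.clip((y - low) / (target - low), 0.0, 1.0) ** r

def overall_desirability(ds):
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds)) ** (1.0 / len(ds))

# Three hypothetical responses mapped onto [0, 1] and combined:
d1 = d_max(85.0, 60.0, 95.0)   # e.g. COD removal efficiency [%]
d2 = d_max(0.30, 0.10, 0.40)   # e.g. biogas yield
d3 = d_max(7.2, 6.5, 7.5)      # e.g. effluent pH within a desirable band
D = overall_desirability([d1, d2, d3])
```

    Because the geometric mean is zero whenever any single desirability is zero, the combined score automatically rejects operating points that fail any one objective.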

  16. Functional enzyme-based modeling approach for dynamic simulation of denitrification process in hyporheic zone sediments: Genetically structured microbial community model

    NASA Astrophysics Data System (ADS)

    Song, H. S.; Li, M.; Qian, W.; Song, X.; Chen, X.; Scheibe, T. D.; Fredrickson, J.; Zachara, J. M.; Liu, C.

    2016-12-01

    Modeling environmental microbial communities at the individual-organism level is currently intractable due to their overwhelming structural complexity. Functional guild-based approaches alleviate this problem by lumping microorganisms into fewer groups based on their functional similarities. This reduction may become ineffective, however, when individual species perform multiple functions as environmental conditions vary. In contrast, the functional enzyme-based modeling approach we present here describes microbial community dynamics based on identified functional enzymes (rather than individual species or their groups). Previous studies along this line used biomass or functional genes as surrogate measures of enzymes because analytical methods for quantifying enzymes in environmental samples were lacking. Leveraging our recent development of a signature peptide-based technique enabling sensitive quantification of functional enzymes in environmental samples, we developed a genetically structured microbial community model (GSMCM) to incorporate enzyme concentrations and various other omics measurements (if available) as key modeling input. We formulated the GSMCM based on the cybernetic metabolic modeling framework to rationally account for cellular regulation without relying on empirical inhibition kinetics. In the case study of modeling the denitrification process in Columbia River hyporheic zone sediments collected from the Hanford Reach, our GSMCM provided a quantitative fit to complex experimental data on denitrification, including the delayed response of enzyme activation to the change in substrate concentration. Our future goal is to extend the modeling scope to the prediction of carbon and nitrogen cycles and contaminant fate. Integration of a simpler version of the GSMCM with PFLOTRAN for multi-scale field simulations is in progress.

  17. Fault Diagnosis approach based on a model-based reasoner and a functional designer for a wind turbine. An approach towards self-maintenance

    NASA Astrophysics Data System (ADS)

    Echavarria, E.; Tomiyama, T.; van Bussel, G. J. W.

    2007-07-01

    The objective of this ongoing research is to develop a design methodology to increase the availability of offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on the existing functionalities provided by the components. The possible solutions can range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, or changing parameters or the control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR) and a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce the stoppage rate, failure events, and maintenance visits, and to maintain energy output, possibly at a reduced rate, until the next scheduled maintenance.

  18. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.

  19. A New Approach to Predict Microbial Community Assembly and Function Using a Stochastic, Genome-Enabled Modeling Framework

    NASA Astrophysics Data System (ADS)

    King, E.; Brodie, E.; Anantharaman, K.; Karaoz, U.; Bouskill, N.; Banfield, J. F.; Steefel, C. I.; Molins, S.

    2016-12-01

    Characterizing and predicting the microbial and chemical compositions of subsurface aquatic systems necessitates an understanding of the metabolism and physiology of organisms that are often uncultured or studied under conditions not relevant for one's environment of interest. Cultivation-independent approaches are therefore important and have greatly enhanced our ability to characterize functional microbial diversity. The capability to reconstruct genomes representing thousands of populations from microbial communities using metagenomic techniques provides a foundation for development of predictive models for community structure and function. Here, we discuss a genome-informed stochastic trait-based model incorporated into a reactive transport framework to represent the activities of coupled guilds of hypothetical microorganisms. Metabolic pathways for each microbe within a functional guild are parameterized from metagenomic data with a unique combination of traits governing organism fitness under dynamic environmental conditions. We simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for cellular maintenance, respiration, biomass development, and enzyme production. While 'omics analyses can now characterize the metabolic potential of microbial communities, it is functionally redundant as well as computationally prohibitive to explicitly include the thousands of recovered organisms into biogeochemical models. However, one can derive potential metabolic pathways from genomes along with trait-linkages to build probability distributions of traits. These distributions are used to assemble groups of microbes that couple one or more of these pathways. From the initial ensemble of microbes, only a subset will persist based on the interaction of their physiological and metabolic traits with environmental conditions, competing organisms, etc. Here, we analyze the predicted niches of these hypothetical microbes and

  20. Conformational analysis of glutamic acid: a density functional approach using implicit continuum solvent model.

    PubMed

    Turan, Başak; Selçuki, Cenk

    2014-09-01

    Amino acids are constituents of proteins and enzymes, which take part in almost all metabolic reactions. Glutamic acid, with an ability to form a negatively charged side chain, plays a major role in intra- and intermolecular interactions of proteins, peptides, and enzymes. An exhaustive conformational analysis has been performed for all eight possible forms at the B3LYP/cc-pVTZ level. All possible neutral, zwitterionic, protonated, and deprotonated forms of glutamic acid structures have been investigated in solution by using a polarizable continuum model mimicking water as the solvent. Nine families based on the dihedral angles have been classified for the eight glutamic acid forms. The electrostatic effects included in the solvent model usually stabilize the charged forms more. However, the stability of the zwitterionic form has been underestimated due to the lack of hydrogen bonding between the solute and solvent; therefore, it is observed that compact neutral glutamic acid structures are more stable in solution than they are in vacuum. Our calculations have shown that among all eight possible forms, some are not stable in solution and are immediately converted to other more stable forms. Comparison of isoelectronic glutamic acid forms indicated that one of the structures among possible zwitterionic and anionic forms may dominate over the others. Additional investigations using explicit solvent models are necessary to determine the stability of charged forms of glutamic acid in solution, as our results clearly indicate that hydrogen bonding and its type have a major role in the structure and energy of conformers.

  1. A functional genomics approach to (iso)flavonoid glycosylation in the model legume Medicago truncatula.

    PubMed

    Modolo, Luzia V; Blount, Jack W; Achnine, Lahoucine; Naoumkina, Marina A; Wang, Xiaoqiang; Dixon, Richard A

    2007-07-01

    Analysis of over 200,000 expressed sequence tags from a range of Medicago truncatula cDNA libraries resulted in the identification of over 150 different family 1 glycosyltransferase (UGT) genes. Of these, 63 were represented by full length clones in an EST library collection. Among these, 19 gave soluble proteins when expressed in E. coli, and these were screened for catalytic activity against a range of flavonoid and isoflavonoid substrates using a high-throughput HPLC assay method. Eight UGTs were identified with activity against isoflavones, flavones, flavonols or anthocyanidins, and several showed high catalytic specificity for more than one class of (iso)flavonoid substrate. All tested UGTs preferred UDP-glucose as sugar donor. Phylogenetic analysis indicated that the Medicago (iso)flavonoid glycosyltransferase gene sequences fell into a number of different clades, and several clustered with UGTs annotated as glycosylating non-flavonoid substrates. Quantitative RT-PCR and DNA microarray analysis revealed unique transcript expression patterns for each of the eight UGTs in Medicago organs and cell suspension cultures, and comparison of these patterns with known phytochemical profiles suggested in vivo functions for several of the enzymes.

  2. Online model checking approach based parameter estimation to a neuronal fate decision simulation model in Caenorhabditis elegans with hybrid functional Petri net with extension.

    PubMed

    Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru

    2011-05-01

    Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach), with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by the analysis of the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our efficient online model checker, MIRACH 1.0, together with parameter estimation, we ran 20 million simulation runs and were able to locate 57 parameter sets for 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles, without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. These simulations showed that one parameter set yields the most reasonable and robust model, owing to its high stability against stochastic noise. Our simulation results provide interesting biological findings that could guide future wet-lab experiments.
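
    The simulate-then-check loop behind such parameter estimation can be sketched generically. The dynamical model, rules, and thresholds below are hypothetical stand-ins; the paper checks HFPNe simulations against biological rules with the MIRACH model checker rather than this toy decay model:

```python
import numpy as np

# Generic simulate-then-check parameter search: sample a rate constant,
# simulate, keep it only if the trajectory satisfies all qualitative rules.
rng = np.random.default_rng(0)

def simulate(k, t_end=10.0, dt=0.01):
    """First-order decay x' = -k*x with x(0) = 1 (closed form stands in
    for a full simulation run)."""
    ts = np.arange(0.0, t_end + dt, dt)
    return ts, np.exp(-k * ts)

def satisfies_rules(ts, xs):
    # Rule 1: signal still above 0.5 at t = 1 (no premature decay)
    # Rule 2: signal below 0.1 by the end of the run
    return xs[np.searchsorted(ts, 1.0)] > 0.5 and xs[-1] < 0.1

accepted = [k for k in rng.uniform(0.0, 2.0, 500)
            if satisfies_rules(*simulate(k))]
```

    An online checker improves on this sketch by evaluating the rules while the trajectory is being generated, so failing runs can be aborted early instead of simulated to completion.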

  3. An Integrated model for Product Quality Development—A case study on Quality functions deployment and AHP based approach

    NASA Astrophysics Data System (ADS)

    Maitra, Subrata; Banerjee, Debamalya

    2010-10-01

    The present article applies product quality and design improvement methods to the failure modes of machinery and the plant operational problems of an industrial blower fan company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is paid to blower fans that have been ordered directly by the customer on the basis of the installed capacity of air to be provided by the fan. Quality function deployment (QFD) is primarily a customer-oriented approach. The proposed model integrates QFD with the analytic hierarchy process (AHP) to select and rank the decision criteria on commercial and technical factors and to measure the decision parameters for selecting the best product in a competitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of a group of experts' opinions through pairwise comparison of the customer's requirements and the ergonomics-based technical design requirements. The steps involved in implementing the QFD-AHP model and selecting weighted criteria may be helpful for all similar industries balancing cost and utility for a competitive product.
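
    The AHP side of such a model reduces to extracting priority weights from a pairwise-comparison matrix via its principal eigenvector, together with Saaty's consistency check. A minimal sketch with a hypothetical 3-criterion judgment matrix (not the paper's data):

```python
import numpy as np

# AHP priority extraction: principal eigenvector of a reciprocal
# pairwise-comparison matrix, plus the consistency ratio.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])   # hypothetical expert judgments

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)          # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, i].real)
w = w / w.sum()                      # normalized priority weights

lam_max = eigvals.real[i]
CI = (lam_max - len(A)) / (len(A) - 1)   # consistency index
CR = CI / 0.58                           # Saaty's random index RI = 0.58 for n = 3
```

    A judgment matrix is usually considered acceptably consistent when CR < 0.1; the resulting weights `w` then rank the criteria that feed the QFD matrix.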

  4. [Partial least squares approach to functional analysis].

    PubMed

    Preda, C

    2006-01-01

    We extend the partial least squares (PLS) approach to functional data, represented in our models by sample paths of a continuous-time stochastic process. Due to the infinite dimension, when functional data are used as a predictor in linear regression and classification models, the estimation problem is ill-posed. In this context, PLS offers a simple and efficient alternative to methods based on the principal components of the stochastic process. We compare the results given by the PLS approach and other linear models using several datasets from the economic, industrial, and medical fields.
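
    In the finite-dimensional setting, the PLS idea can be sketched with the NIPALS algorithm for a scalar response; the functional case replaces the columns of X with sample paths of the process. The data below are synthetic, and the code is a generic sketch rather than the paper's method:

```python
import numpy as np

# Minimal NIPALS PLS1 regression for a scalar response.
def pls1_fit(X, y, n_comp):
    X = X - X.mean(0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w = w / np.linalg.norm(w)     # weight: direction of max covariance
        t = X @ w                     # score vector
        tt = t @ t
        p = X.T @ t / tt              # X loading
        q = (y @ t) / tt              # y loading
        X = X - np.outer(t, p)        # deflate X and y
        y = y - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # Regression coefficients in the original (centered) X space
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
beta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ beta_true                     # noiseless linear response
beta = pls1_fit(X, y, n_comp=4)       # full-rank PLS recovers beta exactly
```

    With fewer components than the rank of X, the same routine gives the regularized fits that make PLS attractive for the ill-posed functional problem.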

  5. Determining the Mechanical Constitutive Properties of Metals as a Function of Strain Rate and Temperature: A Combined Experimental and Modeling Approach

    SciTech Connect

    Ian Robertson

    2007-04-28

    Development and validation of constitutive models for polycrystalline materials subjected to high strain-rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions. Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be fully integrated with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experiment is the guiding principle of this program. Specifically, this program seeks to bridge the length scales between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high strain-rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically based constitutive models. One aspect of the program involves the direct observation of specific mechanisms of micro-plasticity, as these indicate the boundary value problem that should be addressed. This focus on the pre-yield region in the quasi-static effort (the elasto-plastic transition) is also a tractable one from an

  6. Slave-boson mean-field theory versus variational-wave-function approach for the periodic Anderson model

    NASA Astrophysics Data System (ADS)

    Yang, Min-Fong; Sun, Shih-Jye; Hong, Tzay-Ming

    1993-12-01

    We show that a special kind of slave-boson mean-field approximation, which allows for the symmetry-broken states appropriate for a bipartite lattice, can give essentially the same results as those by the variational-wave-function approach proposed by Gulácsi, Strack, and Vollhardt [Phys. Rev. B 47, 8594 (1993)]. The advantages of our approach are briefly discussed.

  7. MODELING OF METAL BINDING ON HUMIC SUBSTANCES USING THE NIST DATABASE: AN A PRIORI FUNCTIONAL GROUP APPROACH

    EPA Science Inventory

    Various modeling approaches have been developed for metal binding on humic substances. However, most of these models are still curve-fitting exercises: the resulting set of parameters, such as affinity constants (or the distribution of them), is found to depend on pH, ionic stren...

  9. A functional evolutionary approach to identify determinants of nucleosome positioning: a unifying model for establishing the genome-wide pattern.

    PubMed

    Hughes, Amanda L; Jin, Yi; Rando, Oliver J; Struhl, Kevin

    2012-10-12

    Although the genomic pattern of nucleosome positioning is broadly conserved, quantitative aspects vary over evolutionary timescales. We identify the cis and trans determinants of nucleosome positioning using a functional evolutionary approach involving S. cerevisiae strains containing large genomic regions from other yeast species. In a foreign species, nucleosome depletion at promoters is maintained over poly(dA:dT) tracts, whereas internucleosome spacing and all other aspects of nucleosome positioning tested are not. Interestingly, the locations of the +1 nucleosome and RNA start sites shift in concert. Strikingly, in a foreign species, nucleosome-depleted regions occur fortuitously in coding regions, and they often act as promoters that are associated with a positioned nucleosome array linked to the length of the transcription unit. We suggest a three-step model in which nucleosome remodelers, general transcription factors, and the transcriptional elongation machinery are primarily involved in generating the nucleosome positioning pattern in vivo. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. De Broglie wavelets versus Schroedinger wave functions: A ribbon model approach to quantum theory and the mechanisms of quantum interference

    SciTech Connect

    Tang, Jau

    1996-02-01

    As an alternative approach to better explain the mechanisms of quantum interference and the origins of uncertainty broadening, a linear hopping model is proposed with "color-varying" dynamics to reflect fast exchange between time-reversed states. Intricate relations between this model, particle-wave dualism, and relativity are discussed. The wave function is shown to possess dual characteristics of a stable, localized "soliton-like" de Broglie wavelet and a delocalized, interfering Schroedinger carrier wave function.

  11. Integrated reclamation: Approaching ecological function?

    Treesearch

    Ann L. Hild; Nancy L. Shaw; Ginger B. Paige; Mary I. Williams

    2009-01-01

    Attempts to reclaim arid and semiarid lands have traditionally targeted plant species composition. Much research attention has been directed to seeding rates, species mixes and timing of seeding. However, in order to attain functioning systems, attention to structure and process must complement existing efforts. We ask how to use a systems approach to enhance...

  12. In Silico Modeling of Indigo and Tyrian Purple Single-Electron Nano-Transistors Using Density Functional Theory Approach

    NASA Astrophysics Data System (ADS)

    Shityakov, Sergey; Roewer, Norbert; Förster, Carola; Broscheit, Jens-Albert

    2017-07-01

    The purpose of this study was to develop and implement an in silico model of indigoid-based single-electron transistor (SET) nanodevices, which consist of indigoid molecules from natural dye weakly coupled to gold electrodes that function in a Coulomb blockade regime. The electronic properties of the indigoid molecules were investigated using the optimized density-functional theory (DFT) with a continuum model. Higher electron transport characteristics were determined for Tyrian purple, consistent with experimentally derived data. Overall, these results can be used to correctly predict and emphasize the electron transport functions of organic SETs, demonstrating their potential for sustainable nanoelectronics comprising the biodegradable and biocompatible materials.

  13. Longitudinal Relationships Between Productive Activities and Functional Health in Later Years: A Multivariate Latent Growth Curve Modeling Approach.

    PubMed

    Choi, Eunhee; Tang, Fengyan; Kim, Sung-Geun; Turk, Phillip

    2016-10-01

    This study examined the longitudinal relationships between functional health in later years and three types of productive activities: volunteering, full-time work, and part-time work. Using data from five waves (2000-2008) of the Health and Retirement Study, we applied multivariate latent growth curve modeling to examine the longitudinal relationships among individuals aged 50 or over. Functional health was measured by limitations in activities of daily living. Individuals who volunteered or worked either full time or part time exhibited a slower decline in functional health than nonparticipants. Significant associations were also found between initial functional health and longitudinal changes in productive activity participation. This study provides additional support for the benefits of productive activities later in life; engagement in volunteering and employment is indeed associated with better functional health in middle and old age. © The Author(s) 2016.

  14. The "Function-to-Flow" Model: An Interdisciplinary Approach to Assessing Movement within and beyond the Context of Climbing

    ERIC Educational Resources Information Center

    Lloyd, Rebecca

    2015-01-01

    Background: Physical Education (PE) programmes are expanding to include alternative activities yet what is missing is a conceptual model that facilitates how the learning process may be understood and assessed beyond the dominant sport-technique paradigm. Purpose: The purpose of this article was to feature the emergence of a Function-to-Flow (F2F)…

  16. Unifying Amplitude and Phase Analysis: A Compositional Data Approach to Functional Multivariate Mixed-Effects Modeling of Mandarin Chinese

    PubMed Central

    Hadjipantelis, P. Z.; Aston, J. A. D.; Müller, H. G.; Evans, J. P.

    2015-01-01

    Mandarin Chinese is characterized by being a tonal language; the pitch (or F0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online. PMID:26692591

  17. Unifying Amplitude and Phase Analysis: A Compositional Data Approach to Functional Multivariate Mixed-Effects Modeling of Mandarin Chinese.

    PubMed

    Hadjipantelis, P Z; Aston, J A D; Müller, H G; Evans, J P

    2015-04-03

    Mandarin Chinese is characterized by being a tonal language; the pitch (or F0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online.
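
    The decomposition step these records describe (functional principal component analysis of curves sampled on a common grid) can be sketched in a few lines via the SVD; the contours below are synthetic stand-ins, not COSPRO-1 data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "F0 contours": each row is one curve on a common time grid,
# generated from exactly two modes of variation plus a mean level.
t = np.linspace(0, 1, 50)
n_curves = 200
scores = rng.normal(size=(n_curves, 2))
curves = (200
          + 30 * scores[:, :1] * np.sin(np.pi * t)       # mode 1: overall rise
          + 10 * scores[:, 1:] * np.cos(2 * np.pi * t))  # mode 2: wiggle

# Functional PCA via SVD of the centered data matrix.
mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

eigenfunctions = Vt            # rows: principal modes of variation
pc_scores = U * s              # per-curve scores on each mode

# Fraction of variance explained by each component, and a rank-2 reconstruction.
explained = (s ** 2) / (s ** 2).sum()
recon = mean_curve + pc_scores[:, :2] @ eigenfunctions[:2]
```

    With only two generating modes, the first two components recover essentially all of the variance; real F0 data would need more components and, as the abstract notes, a registration step handled jointly with the amplitude decomposition.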

  18. Breaking Functional Connectivity into Components: A Novel Approach Using an Individual-Based Model, and First Outcomes

    PubMed Central

    Pe'er, Guy; Henle, Klaus; Dislich, Claudia; Frank, Karin

    2011-01-01

    Landscape connectivity is a key factor determining the viability of populations in fragmented landscapes. Predicting ‘functional connectivity’, namely whether a patch or a landscape functions as connected from the perspective of a focal species, poses various challenges. First, empirical data on the movement behaviour of species is often scarce. Second, animal-landscape interactions are bound to yield complex patterns. Lastly, functional connectivity involves various components that are rarely assessed separately. We introduce the spatially explicit, individual-based model FunCon as a means to distinguish between components of functional connectivity and to assess how each of them affects the sensitivity of species and communities to landscape structures. We then present the results of exploratory simulations over six landscapes of different fragmentation levels and across a range of hypothetical bird species that differ in their response to habitat edges. i) Our results demonstrate that estimations of functional connectivity depend not only on the response of species to edges (avoidance versus penetration into the matrix), the movement mode investigated (home range movements versus dispersal), and the way in which the matrix is being crossed (random walk versus gap crossing), but also on the choice of connectivity measure (in this case, the model output examined). ii) We further show a strong effect of the mortality scenario applied, indicating that movement decisions that do not fully match the mortality risks are likely to reduce connectivity and enhance sensitivity to fragmentation. iii) Despite these complexities, some consistent patterns emerged. For instance, the ranking order of landscapes in terms of functional connectivity was mostly consistent across the entire range of hypothetical species, indicating that simple landscape indices can potentially serve as valuable surrogates for functional connectivity. Yet such simplifications must be carefully

  19. Electromagnetic scaling functions within the Green's function Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Rocco, N.; Alvarez-Ruso, L.; Lovato, A.; Nieves, J.

    2017-07-01

    We have studied the scaling properties of the electromagnetic response functions of 4He and 12C nuclei computed by the Green's function Monte Carlo approach, retaining only the one-body current contribution. Longitudinal and transverse scaling functions have been obtained in the relativistic and nonrelativistic cases and compared to experiment for various kinematics. The characteristic asymmetric shape of the scaling function exhibited by data emerges in the calculations in spite of the nonrelativistic nature of the model. The results are mostly consistent with scaling of zeroth, first, and second kinds. Our analysis reveals a direct correspondence between the scaling and the nucleon-density response functions. The scaling function obtained from the proton-density response displays scaling of the first kind, even more evidently than the longitudinal and transverse scaling functions.

  20. An evolutionary approach to Function

    PubMed Central

    2010-01-01

    Background: Understanding the distinction between function and role is vexing and difficult. While it appears to be useful, in practice this distinction is hard to apply, particularly within biology. Results: I take an evolutionary approach, considering a series of examples, to develop and generate definitions for these concepts. I test them in practice against the Ontology for Biomedical Investigations (OBI). Finally, I give an axiomatisation and discuss methods for applying these definitions in practice. Conclusions: The definitions in this paper are applicable, formalizing current practice. As such, they make a significant contribution to the use of these concepts within biomedical ontologies. PMID:20626924

  1. An evolutionary approach to Function.

    PubMed

    Lord, Phillip

    2010-06-22

    Understanding the distinction between function and role is vexing and difficult. While it appears to be useful, in practice this distinction is hard to apply, particularly within biology. I take an evolutionary approach, considering a series of examples, to develop and generate definitions for these concepts. I test them in practice against the Ontology for Biomedical Investigations (OBI). Finally, I give an axiomatisation and discuss methods for applying these definitions in practice. The definitions in this paper are applicable, formalizing current practice. As such, they make a significant contribution to the use of these concepts within biomedical ontologies.

  2. Adaptive cruise control with stop&go function using the state-dependent nonlinear model predictive control approach.

    PubMed

    Shakouri, Payman; Ordys, Andrzej; Askari, Mohamad R

    2012-09-01

    In the design of adaptive cruise control (ACC) systems, two separate control loops - an outer loop to maintain the safe distance from the vehicle traveling in front and an inner loop to control the brake pedal and throttle opening position - are commonly used. In this paper a different approach is proposed in which a single control loop is utilized. The objective of distance tracking is incorporated into a single nonlinear model predictive control (NMPC) problem by extending the original linear time-invariant (LTI) models obtained by linearizing the nonlinear dynamic model of the vehicle. This is achieved by introducing additional states corresponding to the relative distance between the leading and following vehicles, and to the velocity of the leading vehicle. Control of the brake and throttle position is implemented by taking the state-dependent approach. The model proves more effective in tracking the speed and distance by eliminating the need to switch between the two controllers. It also offers smooth variation in the brake and throttle control signals, which subsequently results in more uniform acceleration of the vehicle. The results of the proposed method are compared with other ACC systems using two separate control loops. Furthermore, ACC simulation results for a stop&go scenario are shown, demonstrating better fulfillment of the design requirements.
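
    The state-augmentation idea in this abstract can be illustrated with a toy receding-horizon controller: the follower's speed is augmented with the relative distance and the leader's velocity, and a short-horizon quadratic cost is re-optimized at every step. The dynamics, weights, and 10 m safe-distance target below are illustrative assumptions, not the paper's vehicle model:

```python
import numpy as np
from scipy.optimize import minimize

dt, horizon = 0.2, 10

def step(state, a):
    # Augmented state: follower speed v, gap d_rel, leader speed v_lead
    # (leader assumed to cruise at constant speed in this sketch).
    v, d_rel, v_lead = state
    return (v + a * dt, d_rel + (v_lead - v) * dt, v_lead)

def cost(actions, state, d_safe=10.0):
    # Single objective: track the safe gap while penalizing control effort.
    c = 0.0
    for a in actions:
        state = step(state, a)
        c += (state[1] - d_safe) ** 2 + 0.1 * a ** 2
    return c

def mpc_action(state):
    res = minimize(cost, np.zeros(horizon), args=(state,))
    return res.x[0]                  # apply only the first planned action

# Closed loop: leader cruises at 20 m/s, follower starts slower and 30 m back.
state = (15.0, 30.0, 20.0)
for _ in range(150):
    state = step(state, mpc_action(state))
```

    In this toy setup the follower settles near the assumed 10 m gap at the leader's speed without any switching between a distance controller and a speed controller, which is the point the single-loop formulation makes.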

  3. Functional Assessment of the Kidney From Magnetic Resonance and Computed Tomography Renography: Impulse Retention Approach to a Multicompartment Model

    PubMed Central

    Zhang, Jeff L.; Rusinek, Henry; Bokacheva, Louisa; Lerman, Lilach O.; Chen, Qun; Prince, Chekema; Oesingmann, Niels; Song, Ting; Lee, Vivian S.

    2009-01-01

    A three-compartment model is proposed for analyzing magnetic resonance renography (MRR) and computed tomography renography (CTR) data to derive clinically useful parameters such as glomerular filtration rate (GFR) and renal plasma flow (RPF). The model fits the convolution of the measured input and the predefined impulse retention functions to the measured tissue curves. An MRR study of 10 patients showed that relative root mean square errors by the model were significantly lower than errors for a previously reported three-compartmental model (11.6% ± 4.9 vs 15.5% ± 4.1; P < 0.001). GFR estimates correlated well with reference values by 99mTc-DTPA scintigraphy (correlation coefficient r = 0.82), and for RPF, r = 0.80. Parameter-sensitivity analysis and Monte Carlo simulation indicated that model parameters could be reliably identified. When the model was applied to CTR in five pigs, expected increases in RPF and GFR due to acetylcholine were detected with greater consistency than with the previous model. These results support the reliability and validity of the new model in computing GFR, RPF, and renal mean transit times from MR and CT data. PMID:18228576
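
    The core fitting idea (convolving a measured input with a parametric impulse retention function and adjusting the parameters to match the tissue curve) can be sketched as follows; a single-exponential retention and a synthetic gamma-variate input stand in for the paper's three-compartment model:

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0                                # s, sampling interval
t = np.arange(0, 120, dt)
aif = (t / 10.0) * np.exp(-t / 10.0)    # synthetic arterial input function

def tissue_model(t, flow, k):
    # Modeled tissue curve = (input * retention) convolution, discretized.
    retention = flow * np.exp(-k * t)   # illustrative impulse retention function
    return np.convolve(aif, retention)[:len(t)] * dt

# Simulate noisy "measured" data, then recover the parameters by fitting.
rng = np.random.default_rng(1)
true_curve = tissue_model(t, 0.8, 0.05)
data = true_curve + rng.normal(0, 0.01, len(t))
popt, _ = curve_fit(tissue_model, t, data, p0=[1.0, 0.1])
```

    The same machinery extends to multiple compartments by summing several retention terms; sensitivity of the fit to each parameter can then be probed exactly as the abstract describes, by perturbation and Monte Carlo resampling.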

  4. Correction Approach for Delta Function Convolution Model Fitting of Fluorescence Decay Data in the Case of a Monoexponential Reference Fluorophore.

    PubMed

    Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris

    2015-09-01

    A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore that we denote as the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (1/τR - 1/τ)^(-1), where τR and τ are the fluorescence lifetimes of the reference and the specimen, respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.

  5. Approach of the associated Laguerre functions to the su(1,1) coherent states for some quantum solvable models

    NASA Astrophysics Data System (ADS)

    Fakhri, H.; Dehghani, A.; Mojaveri, B.

    Using second-order differential operators as a realization of the su(1,1) Lie algebra by the associated Laguerre functions, it is shown that the quantum states of the Calogero-Sutherland, half-oscillator and radial part of a 3D harmonic oscillator constitute the unitary representations for the same algebra. This su(1,1) Lie algebra symmetry leads to derivation of the Barut-Girardello and Klauder-Perelomov coherent states for those models. The explicit compact forms of these coherent states are calculated. Also, to realize the resolution of the identity, their corresponding positive definite measures on the complex plane are obtained in terms of the known functions.

  6. A novel approach to induction and rehabilitation of deficits in forelimb function in a rat model of ischemic stroke.

    PubMed

    Livingston-Thomas, Jessica Mary; Hume, Andrew Wilson; Doucette, Tracy Ann; Tasker, Richard Andrew

    2013-01-01

    Constraint-induced movement therapy (CIMT), which forces use of the impaired arm following unilateral stroke, promotes functional recovery in the clinic but animal models of CIMT have yielded mixed results. The aim of this study is to develop a refined endothelin-1 (ET-1) model of focal ischemic injury in rats that resulted in reproducible, well-defined lesions and reliable upper extremity impairments, and to determine if an appetitively motivated form of rehabilitation (voluntary forced use movement therapy; FUMT) would accelerate post-ischemic motor recovery. Male Sprague Dawley rats (3 months old) were given multiple intracerebral microinjections of ET-1 into the sensorimotor cortex and dorsolateral striatum. Sham-operated rats received the same surgical procedure up to but not including the drill holes on the skull. Functional deficits were assessed using two tests of forelimb placing, a forelimb postural reflex test, a forelimb asymmetry test, and a horizontal ladder test. In a separate experiment, ET-1 stroke rats were subjected to daily rehabilitation with FUMT or with a control therapy beginning on post-surgery d 5. Performance and post-mortem analysis of lesion volume and regional BDNF expression were measured. Following microinjections of ET-1, animals exhibited significant deficits in contralateral forelimb function on a variety of tests compared with the sham group. These deficits persisted for up to 20 d with no mortality and were associated with consistent lesion volumes. FUMT therapy resulted in a modest but significantly accelerated recovery in the forelimb function as compared with the control therapy, but did not affect lesion size or BDNF expression in the ipsilesional hemisphere. We conclude that refined ET-1 microinjection protocols and forcing use of the impaired forelimb in an appetitively motivated paradigm may prove useful in developing strategies to study post-ischemic rehabilitation and neuroplasticity.

  7. Functional approach to the fermionic Casimir effect

    SciTech Connect

    Fosco, C. D.; Losada, E. L.

    2008-07-15

    We use a functional approach to calculate the Casimir energy due to Dirac fields in interaction with thin, flat, parallel walls, which implement imperfect baglike boundary conditions. These are simulated by the introduction of δ-like interactions with the walls. We show that, with a proper choice for the corresponding coupling constants, bag-model boundary conditions are properly implemented. We obtain explicit expressions for the energies in 1+1 and 3+1 dimensions, for massless and massive fields.

  8. Integrating functional diversity, food web processes, and biogeochemical carbon fluxes into a conceptual approach for modeling the upper ocean in a high-CO2 world

    NASA Astrophysics Data System (ADS)

    Legendre, Louis; Rivkin, Richard B.

    2005-09-01

    Marine food webs influence climate by channeling carbon below the permanent pycnocline, where it can be sequestered. Because most of the organic matter exported from the euphotic zone is remineralized within the "upper ocean" (i.e., the water column above the depth of sequestration), the resulting CO2 would potentially return to the atmosphere on decadal timescales. Thus ocean-climate models must consider the cycling of carbon within and from the upper ocean down to the depth of sequestration, instead of only to the base of the euphotic zone. Climate-related changes in the upper ocean will influence the diversity and functioning of plankton functional types. In order to predict the interactions between the changing climate and the ocean's biology, relevant models must take into account the roles of functional biodiversity and pelagic ecosystem functioning in determining the biogeochemical fluxes of carbon. We propose the development of a class of models that consider the interactions, in the upper ocean, of functional types of plankton organisms (e.g., phytoplankton, heterotrophic bacteria, microzooplankton, large zooplankton, and microphagous macrozooplankton), food web processes that affect organic matter (e.g., synthesis, transformation, and remineralization), and biogeochemical carbon fluxes (e.g., photosynthesis, calcification, respiration, and deep transfer). Herein we develop a framework for this class of models, and we use it to make preliminary predictions for the upper ocean in a high-CO2 world, without and with iron fertilization. Finally, we suggest a general approach for implementing our proposed class of models.

  9. Introducing linear functions: an alternative statistical approach

    NASA Astrophysics Data System (ADS)

    Nolan, Caroline; Herbert, Sandra

    2015-12-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be 'threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
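
    The calculator activity underlying this approach (fitting a least-squares line to data and reading off its parameters) reduces to a few lines with numpy; the data here are synthetic, generated from an assumed line y ≈ 2.5x + 7 plus noise:

```python
import numpy as np

# Synthetic "real-life" data around a known line, the kind of dataset a
# graphics or CAS calculator would fit for students.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 2.5 * x + 7 + rng.normal(0, 1.0, x.size)

slope, intercept = np.polyfit(x, y, 1)   # degree-1 fit = line of best fit
predicted = slope * x + intercept
r = np.corrcoef(x, y)[0, 1]              # strength of the linear relationship
```

    The fitted slope and intercept then become concrete anchors for discussing the roles of parameters and variables, which is precisely the 'threshold concept' the abstract highlights.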

  10. Modeling of aerosol formation in a turbulent jet with the transported population balance equation-probability density function approach

    NASA Astrophysics Data System (ADS)

    Di Veroli, G. Y.; Rigopoulos, S.

    2011-04-01

    Processes involving particle formation in turbulent flows feature complex interactions between turbulence and the various physicochemical processes involved. An example of such a process is aerosol formation in a turbulent jet, a process investigated experimentally by Lesniewski and Friedlander [Proc. R. Soc. London, Ser. A 454, 2477 (1998)]. Polydispersed particle formation can be described mathematically by a population balance (also called general dynamic) equation, but its formulation and use within a turbulent flow are riddled with problems, as straightforward averaging results in unknown correlations. In this paper we employ a probability density function formalism in conjunction with the population balance equation (the PBE-PDF method) to simulate and study the experiments of Lesniewski and Friedlander. The approach allows studying the effects of turbulence-particle formation interaction, as well as the prediction of the particle size distribution and the incorporation of kinetics of arbitrary complexity in the population balance equation. It is found that turbulence critically affects the first stages of the process, while it seems to have a secondary effect downstream. While Lesniewski and Friedlander argued that the bulk of the nucleation arises in the initial mixing layer, our results indicate that most of the particles nucleate downstream. The full particle size distributions are obtained via our method and can be compared to the experimental results showing good agreement. The sources of uncertainties in the experiments and the kinetic expressions are analyzed, and the underlying mechanisms that affect the evolution of particle size distribution are discussed.

  11. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood, Palm likelihood usually suffer from the loss of information due to the ignorance of correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore the estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computation complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  12. Error latency estimation using functional fault modeling

    NASA Technical Reports Server (NTRS)

    Manthani, S. R.; Saxena, N. R.; Robinson, J. P.

    1983-01-01

    A complete modeling of faults at gate level for a fault tolerant computer is both infeasible and uneconomical. Functional fault modeling is an approach where units are characterized at an intermediate level and then combined to determine fault behavior. The applicability of functional fault modeling to the FTMP is studied. Using this model a forecast of error latency is made for some functional blocks. This approach is useful in representing larger sections of the hardware and aids in uncovering system level deficiencies.

  13. Approaches for modeling magnetic nanoparticle dynamics

    PubMed Central

    Reeves, Daniel B; Weaver, John B

    2014-01-01

    Magnetic nanoparticles are useful biological probes as well as therapeutic agents. There have been several approaches used to model nanoparticle magnetization dynamics for both Brownian as well as Néel rotation. The magnetizations are often of interest and can be compared with experimental results. Here we summarize these approaches including the Stoner-Wohlfarth approach, and stochastic approaches including thermal fluctuations. Non-equilibrium related temperature effects can be described by a distribution function approach (Fokker-Planck equation) or a stochastic differential equation (Langevin equation). Approximate models in several regimes can be derived from these general approaches to simplify implementation. PMID:25271360
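
    The Langevin-equation route summarized above can be sketched for a toy case: a single moment reduced to a planar rotor in a static field, integrated by Euler-Maruyama. This is an illustrative reduction with assumed parameters, not a full Brownian or Néel rotation solver:

```python
import numpy as np

# Overdamped Langevin dynamics of a planar rotor in a field.
# Energy in kT units: U(theta) = -xi*cos(theta), so the drift is -xi*sin(theta).
rng = np.random.default_rng(3)
xi, dt, n_steps, burn_in = 2.0, 0.01, 200_000, 20_000
noise = rng.normal(size=n_steps)

theta = 0.0
samples = []
for i in range(n_steps):
    theta += -xi * np.sin(theta) * dt + np.sqrt(2 * dt) * noise[i]
    if i >= burn_in:
        samples.append(np.cos(theta))

# Time-averaged alignment; for this planar rotor the exact equilibrium value
# is I1(xi)/I0(xi), approximately 0.698 at xi = 2 (for rotation on the sphere
# it would instead be the Langevin function coth(xi) - 1/xi).
m_avg = np.mean(samples)
```

    The same scheme generalizes to the full stochastic Landau-Lifshitz or rotational-diffusion equations; the Fokker-Planck route mentioned in the abstract would instead evolve the orientation distribution and yield the equilibrium average directly.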

  14. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.

  15. An alternative approach to exact wave functions for time-dependent coupled oscillator model of charged particle in variable magnetic field

    SciTech Connect

    Menouar, Salah; Maamache, Mustapha; Choi, Jeong Ryeol

    2010-08-15

    The quantum states of time-dependent coupled oscillator model for charged particles subjected to variable magnetic field are investigated using the invariant operator methods. To do this, we have taken advantage of an alternative method, so-called unitary transformation approach, available in the framework of quantum mechanics, as well as a generalized canonical transformation method in the classical regime. The transformed quantum Hamiltonian is obtained using suitable unitary operators and is represented in terms of two independent harmonic oscillators which have the same frequencies as that of the classically transformed one. Starting from the wave functions in the transformed system, we have derived the full wave functions in the original system with the help of the unitary operators. One can easily take a complete description of how the charged particle behaves under the given Hamiltonian by taking advantage of these analytical wave functions.

  16. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of this density function in which the integrand functions must be obtained by solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
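    The paper's method targets time-changed models via Volterra equations; as a much simpler illustration of the quantity being computed, the sketch below Monte-Carlo-estimates the first-passage time of ordinary drifted Brownian motion through a level, for which the density is known in closed form (inverse Gaussian). All parameter values are illustrative.

    ```python
    import math, random

    def first_passage_times(mu, a, dt=0.01, n_paths=4000, t_max=50.0, seed=1):
        """Monte Carlo first-passage times of drifted Brownian motion
        X_t = mu*t + W_t through the level a. Euler discretisation misses
        crossings between grid points, so times are slightly overestimated."""
        rng = random.Random(seed)
        sd = math.sqrt(dt)
        times = []
        for _ in range(n_paths):
            x, t = 0.0, 0.0
            while x < a and t < t_max:
                x += mu * dt + rng.gauss(0.0, sd)
                t += dt
            if x >= a:
                times.append(t)
        return times

    def inverse_gaussian_pdf(t, mu, a):
        """Closed-form first-passage density (inverse Gaussian distribution)."""
        return a / math.sqrt(2.0 * math.pi * t ** 3) * math.exp(-(a - mu * t) ** 2 / (2.0 * t))

    times = first_passage_times(mu=1.0, a=1.0)
    mean_T = sum(times) / len(times)  # theory: E[T] = a / mu = 1.0 here
    ```

    A numerical scheme like the paper's is useful precisely when no such closed form exists; the Brownian benchmark above is only a consistency check.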

  17. Description of hard-sphere crystals and crystal-fluid interfaces: a comparison between density functional approaches and a phase-field crystal model.

    PubMed

    Oettel, M; Dorosz, S; Berghoff, M; Nestler, B; Schilling, T

    2012-08-01

    In materials science the phase-field crystal approach has become popular to model crystallization processes. Phase-field crystal models are in essence Landau-Ginzburg-type models, which should be derivable from the underlying microscopic description of the system in question. We present a study on classical density functional theory in three stages of approximation leading to a specific phase-field crystal model, and we discuss the limits of applicability of the models that result from these approximations. As a test system we have chosen the three-dimensional suspension of monodisperse hard spheres. The levels of density functional theory that we discuss are fundamental measure theory, a second-order Taylor expansion thereof, and a minimal phase-field crystal model. We have computed coexistence densities, vacancy concentrations in the crystalline phase, interfacial tensions, and interfacial order parameter profiles, and we compare these quantities to simulation results. We also suggest a procedure to fit the free parameters of the phase-field crystal model. Thereby it turns out that the order parameter of the phase-field crystal model is more consistent with a smeared density field (shifted and rescaled) than with the shifted and rescaled density itself. In brief, we conclude that fundamental measure theory is very accurate and can serve as a benchmark for the other theories. Taylor expansion strongly affects free energies, surface tensions, and vacancy concentrations. Furthermore it is phenomenologically misleading to interpret the phase-field crystal model as stemming directly from Taylor-expanded density functional theory.
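    A minimal phase-field crystal model of the kind compared here is commonly written as a Swift-Hohenberg-type free-energy functional (standard textbook form; the paper's exact parametrization and fitting procedure may differ):

    ```latex
    F[\psi] = \int \mathrm{d}\mathbf{r}\,
        \left[ \frac{\psi}{2}\left(-\epsilon + \left(1+\nabla^{2}\right)^{2}\right)\psi
        + \frac{\psi^{4}}{4} \right]
    ```

    Here \(\psi\) is the order-parameter field and \(\epsilon\) controls the distance from the liquid-solid transition; the paper's point is that \(\psi\) is better read as a smeared, shifted and rescaled density than as the density itself.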

  18. Defining and applying a functionality approach to intellectual disability.

    PubMed

    Luckasson, R; Schalock, R L

    2013-07-01

    The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. The authors synthesise current literature on human functioning dimensions, systems of supports and approaches to outcomes evaluation for persons with intellectual disability (ID), and propose a functionality approach that encompasses a systems perspective towards understanding human functioning in ID. The approach includes human functioning dimensions, interactive systems of supports and human functioning outcomes. Based on this functionality approach the authors: (1) describe how such an approach can be applied to clinical functions related to defining ID, assessment, classification, supports planning and outcomes evaluation; and (2) discuss the impact of a functionality approach on professional practices in the field of ID. A functionality approach can increase focus on the integrative nature of human functioning, provide unified language, align clinical functions and encourage evidence-based practices. The approach incorporates a holistic view of human beings and their lives, and can positively affect supports provision and evaluation. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 John Wiley & Sons Ltd, MENCAP & IASSID.

  19. Model of a tunneling current in a p-n junction based on armchair graphene nanoribbons - an Airy function approach and a transfer matrix method

    SciTech Connect

    Suhendi, Endi; Syariati, Rifki; Noor, Fatimah A.; Khairurrijal; Kurniasih, Neny

    2014-03-24

    We modeled a tunneling current in a p-n junction based on armchair graphene nanoribbons (AGNRs) by using an Airy function approach (AFA) and a transfer matrix method (TMM). We used β-type AGNRs, whose band gap energy and electron effective mass depend on the ribbon width, as given by the extended Huckel theory. It was shown that the tunneling currents evaluated by employing the AFA are the same as those obtained under the TMM. Moreover, the calculated tunneling current was proportional to the bias voltage and inversely related to the temperature.
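    The transfer matrix method named in the title can be illustrated on the textbook case of a single rectangular barrier (not the AGNR band structure itself). The sketch below builds 2x2 transfer matrices from wave-function matching at each interface and reproduces the analytic transmission coefficient; units are hbar = m = 1 and all values are illustrative.

    ```python
    import cmath

    def interface_matrix(k, kp, x):
        """2x2 transfer matrix taking plane-wave amplitudes (A, B) across an
        interface at position x, from a region with wavevector k into one with kp,
        obtained by matching the wave function and its derivative."""
        return [[0.5 * (1 + k / kp) * cmath.exp(1j * (k - kp) * x),
                 0.5 * (1 - k / kp) * cmath.exp(-1j * (k + kp) * x)],
                [0.5 * (1 - k / kp) * cmath.exp(1j * (k + kp) * x),
                 0.5 * (1 + k / kp) * cmath.exp(-1j * (k - kp) * x)]]

    def matmul2(M, N):
        return [[sum(M[i][l] * N[l][j] for l in range(2)) for j in range(2)]
                for i in range(2)]

    def transmission(E, V0, L):
        """Transmission probability through a rectangular barrier of height V0,
        width L (hbar = m = 1)."""
        k = cmath.sqrt(2 * E)           # wavevector outside the barrier
        kb = cmath.sqrt(2 * (E - V0))   # imaginary inside the barrier for E < V0
        M = matmul2(interface_matrix(kb, k, L), interface_matrix(k, kb, 0.0))
        r = -M[1][0] / M[1][1]          # reflection amplitude from B_right = 0
        t = M[0][0] + M[0][1] * r       # transmission amplitude
        return abs(t) ** 2

    T_num = transmission(E=0.5, V0=1.0, L=1.0)
    # Analytic check: T = 1 / (1 + V0^2 sinh^2(kappa L) / (4 E (V0 - E)))
    ```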

  20. Model of a tunneling current in a p-n junction based on armchair graphene nanoribbons - an Airy function approach and a transfer matrix method

    NASA Astrophysics Data System (ADS)

    Suhendi, Endi; Syariati, Rifki; Noor, Fatimah A.; Kurniasih, Neny; Khairurrijal

    2014-03-01

    We modeled a tunneling current in a p-n junction based on armchair graphene nanoribbons (AGNRs) by using an Airy function approach (AFA) and a transfer matrix method (TMM). We used β-type AGNRs, whose band gap energy and electron effective mass depend on the ribbon width, as given by the extended Huckel theory. It was shown that the tunneling currents evaluated by employing the AFA are the same as those obtained under the TMM. Moreover, the calculated tunneling current was proportional to the bias voltage and inversely related to the temperature.

  1. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
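    The abstract describes identifying transfer function models from frequency-domain data. A much-reduced sketch of the idea is equation-error linear least squares for a first-order model H(s) = K/(tau*s + 1): the relation H(jw)(jw*tau + 1) = K is linear in (K, tau). This is not the orthogonal-function machinery of the paper, and all names and values are illustrative.

    ```python
    def freq_response(K, tau, omegas):
        """Exact frequency response of H(s) = K / (tau*s + 1) at s = j*omega."""
        return [K / (tau * 1j * w + 1) for w in omegas]

    def fit_first_order(omegas, H):
        """Least-squares fit of (K, tau): split H*(j*w*tau + 1) = K into real and
        imaginary parts, giving two real-linear equations per frequency."""
        rows, rhs = [], []
        for w, h in zip(omegas, H):
            rows.append((1.0, w * h.imag)); rhs.append(h.real)   # K + w*Im(H)*tau = Re(H)
            rows.append((0.0, -w * h.real)); rhs.append(h.imag)  #   - w*Re(H)*tau = Im(H)
        # Solve the 2x2 normal equations A^T A x = A^T b by hand.
        a11 = sum(r[0] * r[0] for r in rows); a12 = sum(r[0] * r[1] for r in rows)
        a22 = sum(r[1] * r[1] for r in rows)
        b1 = sum(r[0] * y for r, y in zip(rows, rhs))
        b2 = sum(r[1] * y for r, y in zip(rows, rhs))
        det = a11 * a22 - a12 * a12
        return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

    omegas = [0.1 * i for i in range(1, 50)]
    H = freq_response(2.0, 0.5, omegas)
    K_hat, tau_hat = fit_first_order(omegas, H)  # recovers K = 2.0, tau = 0.5 on noise-free data
    ```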

  2. A developmental systems approach to executive function.

    PubMed

    Müller, Ulrich; Baker, Lesley; Yeung, Emanuela

    2013-01-01

    According to recent claims from behavior genetics, executive function (EF) is almost entirely heritable. The implications of this claim are significant, given the importance of EF in academic, social, and psychological domains. This paper critically examines the behavior genetics approach to explaining individual differences in EF and proposes a relational developmental systems model that integrates both biological and social factors in the development of EF and the emergence of individual differences in EF. Problems inherent to behavioral genetics research are discussed, as is neuroscience research that emphasizes the plasticity of the prefrontal cortex. Empirical evidence from research on stress, social interaction, and intervention and training demonstrates that individual differences in EF are experience-dependent. Taken together, these findings challenge the claim that EF is almost entirely genetic but are consistent with an approach that considers biological differences in the context of social interaction.

  3. Air Pollution and Lung Function in Dutch Children: A Comparison of Exposure Estimates and Associations Based on Land Use Regression and Dispersion Exposure Modeling Approaches

    PubMed Central

    Gehring, Ulrike; Hoek, Gerard; Keuken, Menno; Jonkers, Sander; Beelen, Rob; Eeftens, Marloes; Postma, Dirkje S.; Brunekreef, Bert

    2015-01-01

    2015. Air pollution and lung function in Dutch children: a comparison of exposure estimates and associations based on land use regression and dispersion exposure modeling approaches. Environ Health Perspect 123:847–851; http://dx.doi.org/10.1289/ehp.1408541 PMID:25839747

  4. A General Synthetic Approach to Functionalized Dihydrooxepines

    PubMed Central

    Nicolaou, K. C.; Yu, Ruocheng; Shi, Lei; Cai, Quan; Lu, Min; Heretsch, Philipp

    2013-01-01

    A three-step sequence to access functionalized 4,5-dihydrooxepines from cyclohexenones has been developed. This approach features a regioselective Baeyer–Villiger oxidation and subsequent functionalization via the corresponding enol phosphate intermediate. PMID:23550898

  5. Neurorehabilitation of social dysfunctions: a model-based neurofeedback approach for low and high-functioning autism

    PubMed Central

    Pineda, Jaime A.; Friedrich, Elisabeth V. C.; LaMarca, Kristen

    2014-01-01

    Autism Spectrum Disorder (ASD) is an increasingly prevalent condition with core deficits in the social domain. Understanding its neuroetiology is critical to providing insights into the relationship between neuroanatomy, physiology and social behaviors, including imitation learning, language, empathy, theory of mind, and even self-awareness. Equally important is the need to find ways to arrest its increasing prevalence and to ameliorate its symptoms. In this review, we highlight neurofeedback studies as viable treatment options for high-functioning as well as low-functioning children with ASD. Lower-functioning groups have the greatest need for diagnosis and treatment, the greatest barrier to communication, and may experience the greatest benefit if a treatment can improve function or prevent progression of the disorder at an early stage. Therefore, we focus on neurofeedback interventions combined with other kinds of behavioral conditioning to induce neuroplastic changes that can address the full spectrum of the autism phenotype. PMID:25147521

  6. Toward quantitative understanding on microbial community structure and functioning: a modeling-centered approach using degradation of marine oil spills as example.

    PubMed

    Röling, Wilfred F M; van Bodegom, Peter M

    2014-01-01

    Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches.
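    The constraint-based flux balance analysis (FBA) advocated here can be sketched on a toy network. The sketch below is a generic FBA formulation (steady state S·v = 0, bounded fluxes, maximize a target flux) on an invented three-reaction chain, not a real oil-degradation model; it assumes NumPy and SciPy are available.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: uptake -> A -> B -> biomass. Rows = metabolites, columns = reactions.
    S = np.array([[1.0, -1.0, 0.0],    # metabolite A
                  [0.0, 1.0, -1.0]])   # metabolite B
    bounds = [(0, 10.0), (0, None), (0, None)]  # uptake flux capped at 10 units

    # Maximize the biomass flux v3 subject to steady state S v = 0
    # (linprog minimizes, hence the sign flip on the objective).
    res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    v_opt = res.x  # optimal flux distribution; biomass flux equals the uptake cap
    ```

    In metagenome-derived community models the same linear program simply has many more metabolites, reactions and exchange constraints.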

  7. Toward quantitative understanding on microbial community structure and functioning: a modeling-centered approach using degradation of marine oil spills as example

    PubMed Central

    Röling, Wilfred F. M.; van Bodegom, Peter M.

    2014-01-01

    Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches. PMID:24723922

  8. A density functional approach to ferrogels

    NASA Astrophysics Data System (ADS)

    Cremer, P.; Heinen, M.; Menzel, A. M.; Löwen, H.

    2017-07-01

    Ferrogels consist of magnetic colloidal particles embedded in an elastic polymer matrix. As a consequence, their structural and rheological properties are governed by a competition between magnetic particle-particle interactions and mechanical matrix elasticity. Typically, the particles are permanently fixed within the matrix, which makes them distinguishable by their positions. Over time, particle neighbors do not change due to the fixation by the matrix. Here we present a classical density functional approach for such ferrogels. We map the elastic matrix-induced interactions between neighboring colloidal particles distinguishable by their positions onto effective pairwise interactions between indistinguishable particles similar to a ‘pairwise pseudopotential’. Using Monte-Carlo computer simulations, we demonstrate for one-dimensional dipole-spring models of ferrogels that this mapping is justified. We then use the pseudopotential as an input into classical density functional theory of inhomogeneous fluids and predict the bulk elastic modulus of the ferrogel under various conditions. In addition, we propose the use of an ‘external pseudopotential’ when one switches from the viewpoint of a one-dimensional dipole-spring object to a one-dimensional chain embedded in an infinitely extended bulk matrix. Our mapping approach paves the way to describe various inhomogeneous situations of ferrogels using classical density functional concepts of inhomogeneous fluids.
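    The one-dimensional dipole-spring picture can be made concrete with a toy pair potential: a harmonic spring plus head-to-tail dipole attraction. A short-range r^-12 repulsion is added here purely so the toy potential is bounded from below; it is not part of the model described above, and all parameter values are illustrative.

    ```python
    def pair_energy(r, k=1.0, r0=1.0, m=0.5, eps=1.0, sigma=0.8):
        """Toy dipole-spring pair energy: harmonic spring of rest length r0,
        head-to-tail dipole attraction -2 m^2 / r^3, and an added steric
        repulsion eps*(sigma/r)^12 (an assumption for stability of the toy)."""
        return 0.5 * k * (r - r0) ** 2 - 2.0 * m ** 2 / r ** 3 + eps * (sigma / r) ** 12

    def equilibrium_spacing(lo=0.5, hi=2.0, n=20000):
        """Brute-force grid search for the separation minimising the pair energy."""
        return min((lo + (hi - lo) * i / n for i in range(n + 1)), key=pair_energy)

    r_eq = equilibrium_spacing()  # dipole attraction pulls the spacing below r0
    ```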

  9. A density functional approach to ferrogels.

    PubMed

    Cremer, P; Heinen, M; Menzel, A M; Löwen, H

    2017-07-12

    Ferrogels consist of magnetic colloidal particles embedded in an elastic polymer matrix. As a consequence, their structural and rheological properties are governed by a competition between magnetic particle-particle interactions and mechanical matrix elasticity. Typically, the particles are permanently fixed within the matrix, which makes them distinguishable by their positions. Over time, particle neighbors do not change due to the fixation by the matrix. Here we present a classical density functional approach for such ferrogels. We map the elastic matrix-induced interactions between neighboring colloidal particles distinguishable by their positions onto effective pairwise interactions between indistinguishable particles similar to a 'pairwise pseudopotential'. Using Monte-Carlo computer simulations, we demonstrate for one-dimensional dipole-spring models of ferrogels that this mapping is justified. We then use the pseudopotential as an input into classical density functional theory of inhomogeneous fluids and predict the bulk elastic modulus of the ferrogel under various conditions. In addition, we propose the use of an 'external pseudopotential' when one switches from the viewpoint of a one-dimensional dipole-spring object to a one-dimensional chain embedded in an infinitely extended bulk matrix. Our mapping approach paves the way to describe various inhomogeneous situations of ferrogels using classical density functional concepts of inhomogeneous fluids.

  10. Exploring Mouse Protein Function via Multiple Approaches

    PubMed Central

    Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning

    2016-01-01

    Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologues are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in
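    The sequential-combination idea (similarity first, composition-based fallback when no close homologue exists) can be sketched with toy scoring functions. The identity score, composition vector, threshold and the tiny "database" below are all illustrative stand-ins, not the paper's actual features or data.

    ```python
    from collections import Counter
    import math

    AAS = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        """Amino-acid composition vector (a crude stand-in for pseudo amino acid
        composition)."""
        counts = Counter(seq)
        return [counts[a] / len(seq) for a in AAS]

    def identity(a, b):
        """Fraction of identical aligned positions over the shorter sequence
        (a toy similarity score, not a real alignment)."""
        m = min(len(a), len(b))
        return sum(x == y for x, y in zip(a, b)) / m

    def predict_function(query, db, sim_threshold=0.6):
        """Sequential combination: similarity-based route first, falling back to a
        composition-space nearest neighbour when no close homologue exists."""
        best = max(db, key=lambda s: identity(query, s))
        if identity(query, best) >= sim_threshold:
            return db[best], "similarity"
        q = composition(query)
        nearest = min(db, key=lambda s: math.dist(q, composition(s)))
        return db[nearest], "composition"

    db = {"MKVLAAGG": "kinase", "GGSSTTPL": "transporter"}
    fn, route = predict_function("MKVLAAGA", db)  # one mismatch -> ('kinase', 'similarity')
    ```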

  11. Differential Item Functioning Analysis of High-Stakes Test in Terms of Gender: A Rasch Model Approach

    ERIC Educational Resources Information Center

    Alavi, Seyed Mohammad; Bordbar, Soodeh

    2017-01-01

    Differential Item Functioning (DIF) analysis is a key element in evaluating educational test fairness and validity. One of the frequently cited sources of construct-irrelevant variance is gender which has an important role in the university entrance exam; therefore, it causes bias and consequently undermines test validity. The present study aims…
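    The Rasch-based DIF analysis described here compares item difficulty between groups of equal ability. A minimal sketch, with a deliberately crude difficulty estimator (logit of proportion correct, assuming group abilities centred at zero) rather than a full Rasch calibration; the proportions are invented.

    ```python
    import math

    def rasch_p(theta, b):
        """Rasch model: probability of a correct response given ability theta
        and item difficulty b."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def item_difficulty(p_correct):
        """Crude difficulty estimate from proportion correct, assuming the
        group's abilities are centred at 0 (an assumption of this sketch)."""
        return math.log((1.0 - p_correct) / p_correct)

    def dif_contrast(p_ref, p_focal):
        """DIF contrast: difficulty difference between focal and reference groups.
        A positive value means the item is harder for the focal group."""
        return item_difficulty(p_focal) - item_difficulty(p_ref)

    # Item answered correctly by 70% of the reference group but only 50% of the
    # focal group despite matched ability distributions:
    contrast = dif_contrast(0.7, 0.5)  # about 0.85 logits, flagging potential DIF
    ```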

  12. Mathematical modeling of separated two-phase turbulent reactive flows using a filtered mass density function approach for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Carrara, Mark David

    2006-04-01

    The overall objective of this dissertation is the development of a modeling and simulation approach for turbulent two-phase chemically reacting flows. A new full velocity-scalar filtered mass density function (FMDF) formulation for large eddy simulation (LES) of a separated two-phase flow is developed. In this formulation several terms require modeling, including important conditionally averaged phase-coupling terms (PCT). To close the PCT, a new derivation of the local instantaneous two-phase equations is presented and important identities are derived relating the PCT to surface averages. The formulation is then applied to two particle-laden flow cases and solved using a full particle-based Monte-Carlo numerical solution procedure. The first case is a temporally developing counter-current mixing layer dilutely seeded with evaporating water droplets. Validation studies reveal excellent agreement of the full particle method with previous hybrid FDF studies and direct numerical simulations for single-phase flows. One-way coupled simulations reveal that the overall dispersion is maximized with unity Stokes number droplets. Two-way coupled simulations reveal the advantages of two FDF approaches where the subgrid variation of droplet properties is explicitly taken into account. The fully coupled FDF approach is compared to more approximate means of determining phase coupling based on filtered properties, and local and compounded global errors are assessed. The second case considered is the combustion of aluminum particles. A new mechanistic model for the ignition and combustion of aluminum particulate is developed that accounts for unsteady heating, melting, heterogeneous surface reactions (HSR) and quasi-steady burning. Results of this model agree well with experimental data for overall burn rates and ignition times. Two-phase simulations of an aluminum-particulate-seeded mixing layer reveal variations in flame radius resulting in local extinguishment.

  13. Exploring functional similarity in the export of Nitrate-N from forested catchments: A mechanistic modeling approach

    NASA Astrophysics Data System (ADS)

    Creed, I. F.; Band, L. E.

    1998-11-01

    Functional similarity of catchments implies that we are able to identify the combination of processes that creates a similar response of a specific characteristic of a catchment. We applied the concept of functional similarity to the export of NO3⁻-N from catchments situated within the Turkey Lakes Watershed, a temperate forest in central Ontario, Canada. Despite the homogeneous nature of the forest, these catchments exhibit substantial variability in the concentrations of NO3⁻-N in discharge waters, over both time and space. We hypothesized that functional similarity in the export of NO3⁻-N can be expressed as a function of topographic complexity, as topography regulates both the formation and flushing of NO3⁻-N within the catchment. We tested this hypothesis by exploring whether topographically based similarity indices of the formation and flushing of NO3⁻-N capture the observed export of NO3⁻-N over a set of topographically diverse catchments. For catchments with no elevated base concentrations of NO3⁻-N, the similarity indices explained up to 58% of the variance in the export of NO3⁻-N. For catchments with elevated base concentrations of NO3⁻-N, prediction of the export of NO3⁻-N may have been complicated by the fact that hydrology was governed by a two-component till, with an ablation till overlying a basal till. While the similarity indices captured peak NO3⁻-N concentrations exported from shallow flow paths emanating from the ablation till, they did not capture base NO3⁻-N concentrations exported from deep flow paths emanating from the basal till, emphasizing the importance of including shallow and deep flow paths in future similarity indices. The strength of the similarity indices is their potential ability to enable us to discriminate catchments that have visually similar surface characteristics but show distinct NO3⁻-N export responses and, conversely, to group catchments that have visually dissimilar surface characteristics but are functionally similar.
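    Topographically based similarity indices of the kind used here often build on the topographic wetness index, TWI = ln(a / tan(beta)). The sketch below is that generic index only, not the authors' exact formulation; the input values are invented.

    ```python
    import math

    def topographic_wetness_index(specific_catchment_area, slope_rad):
        """TWI = ln(a / tan(beta)): higher for flat, high-accumulation cells,
        which are wetter and more prone to flushing solutes such as nitrate.
        A generic index, illustrative of (not identical to) the paper's indices."""
        return math.log(specific_catchment_area / math.tan(slope_rad))

    twi_flat = topographic_wetness_index(100.0, math.radians(2.0))
    twi_steep = topographic_wetness_index(100.0, math.radians(20.0))
    # The flat cell scores higher: it accumulates and retains more water.
    ```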

  14. Influence of xc functional on thermal-elastic properties of Ceria: A DFT-based Debye-Grüneisen model approach

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Hwan; Tak, Youngjoo; Lee, Taehun; Soon, Aloysius

    Ceria (CeO2-x) is widely studied as a candidate electrolyte material for intermediate-temperature (~ 800 K) solid oxide fuel cells. At this temperature, maintaining the chemical stability and thermal-mechanical integrity of this oxide is of utmost importance. To understand its thermal-elastic properties, we first test the influence of various approximations to the density-functional theory (DFT) xc functionals on specific thermal-elastic properties of both CeO2 and Ce2O3. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation (GGA-PBE) with and without an additional Hubbard U applied to the 4f electrons of Ce, as well as the recently popularized hybrid functional due to Heyd-Scuseria-Ernzerhof (HSE06). Next, we couple this to a volume-dependent Debye-Grüneisen model to determine the thermodynamic quantities of ceria at arbitrary temperatures. We find that an explicit description of the strong correlation (e.g. via the DFT + U and hybrid functional approach) is necessary for good agreement with experimental values, in contrast to the mean-field treatment in standard xc approximations (such as LDA or GGA-PBE). We acknowledge support from Samsung Research Funding Center of Samsung Electronics (SRFC-MA1501-03).
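    The Debye-Grüneisen step in such workflows rests on the Debye model of lattice vibrations. A minimal sketch of the heat-capacity integral in reduced units (k_B = 1, per atom); the Debye temperature below is an illustrative number, not one of the paper's fitted values.

    ```python
    import math

    def debye_cv(T, theta_D, n_atoms=1, k_B=1.0, n_quad=2000):
        """Debye-model heat capacity:
        C_v = 9 n k_B (T/theta_D)^3 * Int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx,
        evaluated with a simple midpoint rule (the integrand behaves like x^2
        near x = 0, so there is no singularity)."""
        x_max = theta_D / T
        h = x_max / n_quad
        integral = 0.0
        for i in range(n_quad):
            x = (i + 0.5) * h
            ex = math.exp(x)
            integral += x ** 4 * ex / (ex - 1.0) ** 2
        integral *= h
        return 9.0 * n_atoms * k_B * (T / theta_D) ** 3 * integral

    # High-temperature limit approaches the Dulong-Petit value 3 n k_B:
    cv_hot = debye_cv(T=5000.0, theta_D=480.0)  # theta_D = 480 K is illustrative
    ```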

  15. Predicting the effects of environment and management on cotton fibre growth and quality: a functional-structural plant modelling approach.

    PubMed

    Wang, Xuejiao; Zhang, Lizhen; Evers, Jochem B; Mao, Lili; Wei, Shoujun; Pan, Xuebiao; Zhao, Xinhua; van der Werf, Wopke; Li, Zhaohu

    2014-07-09

    In general, the quality of fruits depends on local conditions experienced by the fruit during its development. In cotton, fruit quality, and more specifically the quality of the fibre in the fruit, depends on interactions between fruit position in the plant architecture, temperature and agronomic practices, such as sowing time, mulching with plastic film and topping of the plant's main stem and branches. To quantify this response of cotton fibre quality to environment and management, we developed a simulation model of cotton growth and development, CottonXL. Simulation of cotton fibre quality (strength, length and micronaire) was implemented at the level of each individual fruit, in relation to thermal time (represented by physiological age of the fruit) and prevailing temperature during development of each fruit. Field experiments were conducted in China in 2007 to determine model parameters, and independent data on cotton fibre quality in three cotton producing regions in China were used for model validation. Simulated values for fibre quality closely corresponded to experimental data. Scenario studies simulating a range of management practices predicted that delaying topping times can significantly decrease fibre quality, while sowing date and film mulching had no significant effect. We conclude that CottonXL may be used to explore options for optimizing cotton fibre quality by matching cotton management to the environment, taking into account responses at the level of individual fruits. The model may be used at plant, crop and regional levels to address climate and land-use change scenarios. Published by Oxford University Press on behalf of the Annals of Botany Company.
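    Thermal time (the "physiological age" driving fibre quality in such models) is typically accumulated as growing degree-days. A minimal sketch with an assumed base temperature; the actual CottonXL parameterization may differ.

    ```python
    def growing_degree_days(daily_min, daily_max, t_base=12.0):
        """Accumulate thermal time (degree-days) from daily temperature extremes.
        t_base = 12 C is an assumed base temperature for cotton development,
        used here only for illustration."""
        total = 0.0
        for tmin, tmax in zip(daily_min, daily_max):
            mean = (tmin + tmax) / 2.0
            total += max(0.0, mean - t_base)  # no development below the base temperature
        return total

    gdd = growing_degree_days([18, 20, 15], [30, 32, 25], t_base=12.0)  # -> 34.0
    ```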

  16. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using some fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to the noise, and the cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with high noise level (as high as 30%). Published by Elsevier B.V.
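    The TNM-CKF method couples a regularized Newton iteration with Kalman filtering. The regularization idea alone can be sketched on a toy one-parameter model (an exponential decay, not the hemodynamic model; all names and values are hypothetical):

    ```python
    import math

    def simulate(a, ts):
        """Toy forward model: y(t) = exp(-a t)."""
        return [math.exp(-a * t) for t in ts]

    def fit_decay(ts, ys, a0=0.1, lam=1e-3, n_iter=100):
        """Regularized Gauss-Newton (Levenberg-Marquardt style) for the single
        parameter a; the damping term lam stabilises the normal equation, echoing
        the role of regularization in the calibration method described above."""
        a = a0
        for _ in range(n_iter):
            resid = [y - math.exp(-a * t) for t, y in zip(ts, ys)]
            jac = [t * math.exp(-a * t) for t in ts]  # d(residual)/da
            g = sum(J * r for J, r in zip(jac, resid))
            H = sum(J * J for J in jac) + lam
            a -= g / H
        return a

    ts = [0.5 * i for i in range(1, 11)]
    ys = simulate(1.3, ts)       # noise-free synthetic data, true a = 1.3
    a_hat = fit_decay(ts, ys)    # converges to the true parameter
    ```

    The Kalman-filter component of the real method additionally handles measurement noise, which this noise-free sketch omits.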

  17. Comparing soil functions for a wide range of agriculture soils focusing on production for bioenergy using a combined isotope-based observation and modelling approach

    NASA Astrophysics Data System (ADS)

    Leistert, Hannes; Herbstritt, Barbara; Weiler, Markus

    2017-04-01

    Increasing crop production for bioenergy will result in changes in land use and the resulting soil functions and may generate new chances and risks. However, detailed data and information on how soil functions may be altered under changing crop production for bioenergy are still missing, in particular for a wide range of agricultural soils, since most data are currently derived from individual experimental sites studying different bioenergy crops at one location. We developed a new, rapid measurement approach to investigate the influence of bioenergy plants on the water cycle and different soil functions (filter and buffer of water and N-cycling). For this approach, we drilled 89 soil cores (1-3 m deep) in spring and fall at 11 sites with different soil properties and climatic conditions, comparing different crops (grass, corn, willow, poplar, and other less common bioenergy crops) and analyzing 1150 soil samples for water content, nitrate concentration and stable water isotopes. We benchmarked a soil hydrological model (1-D numerical Richards equation, ADE, water isotope fractionation including liquid and vapor composition of isotopes) using longer-term climate variables and water isotopes in precipitation to derive crop-specific parameterization and to specifically validate the differences in water transport and water partitioning into evaporation, transpiration and groundwater recharge among the sites and crops using the water isotopes in particular. The model simulations were in good agreement with the observed isotope profiles and allowed us to differentiate among the different crops. We defined different indicators for the soil functions considered in this study. These indicators included the proportion of groundwater recharge, transit time of water (different percentiles) through the upper 2 m and nutrient leaching potential (e.g. nitrate) during the dormant season from the rooting zone. The parameterized model was first used to calculate the indicators for the

  18. Modeling mitochondrial function.

    PubMed

    Balaban, Robert S

    2006-12-01

    The mitochondrion represents a unique opportunity to apply mathematical modeling to a complex biological system. Understanding mitochondrial function and control is important since this organelle is critical in energy metabolism as well as playing key roles in biochemical synthesis, redox control/signaling, and apoptosis. A mathematical model, or hypothesis, provides several useful insights, including a rigorous test of the consensus view of the operation of a biological process, as well as methods for testing and creating new hypotheses. The advantages of the mitochondrial system for applying a mathematical model include the relative simplicity and understanding of the matrix reactions, the ability to study the mitochondrion as an independent, contained organelle, and, most importantly, the ability to dynamically measure many of the internal reaction intermediates online. The developing ability to internally monitor events within the metabolic network, rather than just the inflow and outflow, is extremely useful in placing critical bounds on complex mathematical models using the individual reaction mechanisms available. However, many serious problems remain in creating a working model of mitochondrial function, including the incomplete definition of metabolic pathways, the uncertainty of using in vitro enzyme kinetics and regulatory data in the intact system, and the unknown chemical activities of relevant molecules in the matrix. Despite these formidable limitations, the advantages of the mitochondrial system make it one of the best-defined mammalian metabolic networks that can be used as a model system for understanding the application and use of mathematical models to study biological systems.

  19. Evolution of one-particle and double-occupied Green functions for the Hubbard model, with interaction, at half-filling with lifetime effects within the moment approach

    NASA Astrophysics Data System (ADS)

    Schafroth, S.; Rodríguez-Núñez, J. J.

    1999-08-01

    We evaluate the one-particle and double-occupied Green functions for the Hubbard model at half-filling using the moment approach of Nolting [Z. Phys. 255, 25 (1972); Grund Kurs: Theoretische Physik. 7 Viel-Teilchen-Theorie (Verlag Zimmermann-Neufang, Ulmen, 1992)]. Our starting point is a self-energy, Σ(k⃗,ω), which has a single pole, Ω(k⃗), with spectral weight, α(k⃗), and quasiparticle lifetime, γ(k⃗) [J. J. Rodríguez-Núñez and S. Schafroth, J. Phys. Condens. Matter 10, L391 (1998); J. J. Rodríguez-Núñez, S. Schafroth, and H. Beck, Physica B (to be published); (unpublished)]. In our approach, Σ(k⃗,ω) becomes the central feature of the many-body problem, and because there are three unknown k⃗-dependent parameters we have to satisfy only the first three sum rules instead of four as in the canonical formulation of Nolting [Z. Phys. 255, 25 (1972); Grund Kurs: Theoretische Physik. 7 Viel-Teilchen-Theorie (Verlag Zimmermann-Neufang, Ulmen, 1992)]. This self-energy choice forces our system to be a non-Fermi liquid for any value of the interaction, since it does not vanish at zero frequency. The one-particle Green function, G(k⃗,ω), shows the fingerprint of a strongly correlated system, i.e., a double-peak structure in the one-particle spectral density, A(k⃗,ω), vs ω for intermediate values of the interaction. Close to the Mott insulator transition, A(k⃗,ω) becomes a wide single peak, signaling the absence of quasiparticles. Similar behavior is observed for the real and imaginary parts of the self-energy, Σ(k⃗,ω). The double-occupied Green function, G2(q⃗,ω), has been obtained from G(k⃗,ω) by means of the equation of motion. The relation between G2(q⃗,ω) and the self-energy, Σ(k⃗,ω), is formally established, and numerical results are given for the spectral function of G2(k⃗,ω), χ(2)(k⃗,ω) ≡ -(1/π) lim δ→0⁺ Im[G2(k⃗,ω)]. Our approach represents the simplest way to include (1) lifetime effects in the moment approach of Nolting, as
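
The single-pole self-energy ansatz can be sketched numerically at one k-point. All parameter values below (ε, Ω, α, γ) are hypothetical, chosen only for illustration; the sketch assumes a retarded self-energy Σ(ω) = α/(ω − Ω + iγ) and checks that the resulting spectral density satisfies the zeroth sum rule, ∫A(ω)dω = 1:

```python
import math

# Hypothetical parameters for a single k-point (illustrative, not fitted):
eps_k, omega_pole, alpha, gamma = 0.0, 1.0, 0.5, 0.1

def sigma(w):
    """One-pole retarded self-energy with weight alpha and lifetime gamma."""
    return alpha / (w - omega_pole + 1j * gamma)

def spectral(w, eta=1e-3):
    """A(w) = -(1/pi) Im G(w), with G = 1 / (w - eps_k - Sigma(w))."""
    g = 1.0 / (w + 1j * eta - eps_k - sigma(w))
    return -g.imag / math.pi

# The spectral density must integrate to 1 (zeroth sum rule).
npts, lo, hi = 200000, -20.0, 20.0
dw = (hi - lo) / npts
norm = sum(spectral(lo + i * dw) for i in range(npts + 1)) * dw
print(f"sum-rule check: {norm:.4f}")
```

Note that Σ(0) = α/(−Ω + iγ) ≠ 0, which is the non-Fermi-liquid feature mentioned in the abstract.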

  20. Optical Properties of Gold Nanoclusters Functionalized with a Small Organic Compound: Modeling by an Integrated Quantum-Classical Approach.

    PubMed

    Li, Xin; Carravetta, Vincenzo; Li, Cui; Monti, Susanna; Rinkevicius, Zilvinas; Ågren, Hans

    2016-07-12

    Motivated by the growing importance of organometallic nanostructured materials and nanoparticles as microscopic devices for diagnostic and sensing applications, and by the recent considerable development in the simulation of such materials, we here choose a prototype system - para-nitroaniline (pNA) on gold nanoparticles - to demonstrate effective strategies for designing metal nanoparticles with organic conjugates from fundamental principles. We investigated the motion, adsorption mode, and physicochemical properties of gold-pNA particles of increasing size through classical molecular dynamics (MD) simulations in connection with quantum chemistry (QC) calculations. We apply the quantum mechanics-capacitance molecular mechanics method [Z. Rinkevicius et al. J. Chem. Theory Comput. 2014, 10, 989] for calculations of the properties of the conjugate nanoparticles, where time-dependent density functional theory is used for the QM part and a capacitance-polarizability parametrization for the MM part, in which induced dipoles and charges arising from metallic charge transfer are considered. Dispersion and short-range repulsion forces are included as well. The scheme is applied to one- and two-photon absorption of gold-pNA clusters increasing in size toward the nanometer scale. Charge imaging of the surface introduces red shifts, both because of the altered excitation energy dependence and because of variation in the relative intensity of the inherent states making up the total band profile. For the smaller nanoparticles, the differences among the crystal facets are important for the spectral outcome, which is also influenced by the surrounding MM environment.

  1. Inspection of 56Fe γ-Ray angular distributions as a function of incident neutron energy using optical model approaches

    NASA Astrophysics Data System (ADS)

    Vanhoy, J. R.; Ramirez, A. P.; Alcorn-Dominguez, D. K.; Hicks, S. F.; Peters, E. E.; McEllistrem, M. T.; Mukhopadhyay, S.; Yates, S. W.

    2017-09-01

    Neutron inelastic scattering cross sections measured directly through (n,n′) or deduced from γ-ray production cross sections following inelastic neutron scattering (n,n′γ) are a focus of basic and applied research at the University of Kentucky Accelerator Laboratory (www.pa.uky.edu/accelerator). For nuclear data applications, angle-integrated cross sections are desired over a wide range of fast neutron energies. Several days of experimental beam time are required for a data set at each incident neutron energy, which limits the number of angular distributions that can be measured in a reasonable amount of time. Approximations can be employed to generate cross sections with a higher energy resolution, since at 125° the a2P2 term of the Legendre expansion is identically zero and the a4P4 term is assumed to be very small. Provided this assumption is true, a single measurement at 125° would produce the γ-ray production cross section. This project tests these assumptions and energy dependences using the codes CINDY/SCAT and TALYS/ECIS06/SCAT. It is found that care must be taken when interpreting γ-ray excitation functions as cross sections when the incident neutron energy is < 1000 keV above threshold or before the onset of feeding.
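
The 125° argument rests on the zeros of the Legendre polynomials: for an angular distribution a0 + a2P2(cos θ) + a4P4(cos θ), P2 vanishes where cos θ = −1/√3, i.e., θ ≈ 125.26°. A minimal check in pure Python (note that P4 itself is not small at this angle; the approximation relies on the a4 coefficient being small, as stated above):

```python
import math

def p2(x):
    """Second-order Legendre polynomial, P2(x) = (3x^2 - 1) / 2."""
    return (3.0 * x * x - 1.0) / 2.0

def p4(x):
    """Fourth-order Legendre polynomial, P4(x) = (35x^4 - 30x^2 + 3) / 8."""
    return (35.0 * x**4 - 30.0 * x**2 + 3.0) / 8.0

# P2 vanishes where cos(theta) = -1/sqrt(3).
theta = math.degrees(math.acos(-1.0 / math.sqrt(3.0)))
x = math.cos(math.radians(theta))
print(f"P2 zero at theta = {theta:.2f} deg")
print(f"P2 there: {p2(x):.2e}, P4 there: {p4(x):.3f}")
```

A measurement at this angle therefore samples a0 directly, up to the a4P4 correction.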

  2. Systems biology approach to identify alterations in the stem cell reservoir of subcutaneous adipose tissue in a rat model of diabetes: effects on differentiation potential and function.

    PubMed

    Ferrer-Lorente, Raquel; Bejar, Maria Teresa; Tous, Monica; Vilahur, Gemma; Badimon, Lina

    2014-01-01

    Autologous progenitor cells represent a promising option for regenerative cell-based therapies. Nevertheless, it has been shown that ageing and cardiovascular risk factors such as diabetes affect circulating endothelial and bone marrow-derived progenitor cells, limiting their therapeutic potential. However, their impact on other stem cell populations remains unclear. We therefore investigated the effects of diabetes on adipose-derived stem cells (ASCs) and whether these effects might limit the therapeutic potential of autologous ASCs. A systems biology approach was used to analyse the expression of genes related to stem cell identification in subcutaneous adipose tissue (SAT), the stromal vascular fraction, and isolated ASCs from Zucker diabetic fatty rats and their non-diabetic controls. An additional model of type 2 diabetes without obesity was also investigated. Bioinformatic approaches were used to investigate the biological significance of these changes. In addition, functional studies on cell viability and differentiation potential were performed. Widespread downregulation of mesenchymal stem cell markers was observed in SAT of diabetic rats. Gene expression and in silico analysis revealed a significant effect on molecules involved in the maintenance of pluripotency and self-renewal, and alteration of the main signalling pathways important for stem cell maintenance. The viability and differentiation potential of ASCs from diabetic rats were impaired in in vitro models and in in vivo angiogenesis. The impact of type 2 diabetes on ASCs might compromise the efficiency of spontaneous self-repair and direct autologous stem cell therapy.

  3. Determining the mechanical constitutive properties of metals as a function of strain rate and temperature: A combined experimental and modeling approach

    SciTech Connect

    I. M. Robertson; A. Beaudoin; J. Lambros

    2004-01-05

    OAK-135 Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). Accurately accounting for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems. One aspect of the program will involve the direct

  4. New Markov Model Approaches to Deciphering Microbial Genome Function and Evolution: Comparative Genomics of Laterally Transferred Genes

    SciTech Connect

    Borodovsky, M.

    2013-04-11

    Algorithmic methods for gene prediction have been developed and successfully applied to many different prokaryotic genome sequences. As the set of genes in a particular genome is not homogeneous with respect to DNA sequence composition features, the GeneMark.hmm program utilizes two Markov models representing distinct classes of protein coding genes denoted "typical" and "atypical". Atypical genes are those whose DNA features deviate significantly from those classified as typical and they represent approximately 10% of any given genome. In addition to the inherent interest of more accurately predicting genes, the atypical status of these genes may also reflect their separate evolutionary ancestry from other genes in that genome. We hypothesize that atypical genes are largely comprised of those genes that have been relatively recently acquired through lateral gene transfer (LGT). If so, what fraction of atypical genes are such bona fide LGTs? We have made atypical gene predictions for all fully completed prokaryotic genomes; we have been able to compare these results to other "surrogate" methods of LGT prediction.
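
The typical/atypical split can be illustrated with a deliberately simplified sketch: score a DNA fragment under two composition models and keep the better-scoring label. The base frequencies below are hypothetical, and this order-0 model is a toy stand-in; the actual GeneMark.hmm uses higher-order, inhomogeneous Markov models:

```python
import math

# Two hypothetical order-0 Markov (base-composition) models.
typical  = {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30}
atypical = {"A": 0.20, "C": 0.30, "G": 0.30, "T": 0.20}  # GC-rich class

def loglik(seq, model):
    """Log-likelihood of a sequence under an order-0 composition model."""
    return sum(math.log(model[base]) for base in seq)

seq = "GCGCGGCCGCATGCGC"  # a GC-rich toy fragment
label = "atypical" if loglik(seq, atypical) > loglik(seq, typical) else "typical"
print(label)
```

A gene classified "atypical" this way deviates in composition from the genome-wide model, which is the signal used as a candidate marker for lateral transfer.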

  5. A whole-brain computational modeling approach to explain the alterations in resting-state functional connectivity during progression of Alzheimer's disease.

    PubMed

    Demirtaş, Murat; Falcon, Carles; Tucholka, Alan; Gispert, Juan Domingo; Molinuevo, José Luis; Deco, Gustavo

    2017-01-01

    Alzheimer's disease (AD) is the most common dementia, with dramatic consequences. Research in structural and functional neuroimaging has shown altered brain connectivity in AD. In this study, we investigated the whole-brain resting-state functional connectivity (FC) of subjects with preclinical Alzheimer's disease (PAD), mild cognitive impairment due to AD (MCI), and mild dementia due to AD, the impact of APOE4 carriership, and the relation to variations in core AD CSF biomarkers. Whole-brain synchronization decreased monotonically over the course of disease progression. Furthermore, in AD patients we found widespread significant decreases in FC strength, particularly in brain regions with high global connectivity. We employed a whole-brain computational modeling approach to study the mechanisms underlying these alterations. To characterize the causal interactions between brain regions, we estimated the effective connectivity (EC) in the model. We found that the significant EC differences in AD were primarily located in the left temporal lobe. We then systematically manipulated the underlying dynamics of the model to investigate simulated changes in FC relative to the healthy control subjects. Furthermore, we found distinct patterns involving the CSF biomarkers amyloid-beta (Aβ1-42), total tau (t-tau), and phosphorylated tau (p-tau). CSF Aβ1-42 was associated with the contrast between healthy control subjects and the clinical groups, whereas the tau CSF biomarkers were associated with the variability in whole-brain synchronization and in sensory integration regions. These associations were robust across clinical groups, unlike the associations found for CSF Aβ1-42. APOE4 carriership showed no significant correlations with the connectivity measures.

  6. Universality of electronic friction: Equivalence of von Oppen's nonequilibrium Green's function approach and the Head-Gordon-Tully model at equilibrium

    NASA Astrophysics Data System (ADS)

    Dou, Wenjie; Subotnik, Joseph E.

    2017-09-01

    For a molecule moving near a single metal surface at equilibrium, following von Oppen and coworkers [N. Bode, S. V. Kusminskiy, R. Egger, and F. von Oppen, Beilstein J. Nanotechnol. 3, 144 (2012), 10.3762/bjnano.3.15] and using a nonequilibrium Green's-function (NEGF) approach, we derive a very general form of electronic friction that includes non-Condon effects. We then demonstrate that the resulting NEGF friction tensor agrees exactly with the Head-Gordon-Tully model, provided that finite temperature effects are incorporated correctly. The present results are in agreement with our recent claim that there is only one universal electronic friction tensor arising from the Born-Oppenheimer approximation [W. Dou, G. Miao, and J. E. Subotnik, Phys. Rev. Lett. 119, 046001 (2017), 10.1103/PhysRevLett.119.046001].

  7. Supramolecular organization of functional organic materials in the bulk and at organic/organic interfaces: a modeling and computer simulation approach.

    PubMed

    Muccioli, Luca; D'Avino, Gabriele; Berardi, Roberto; Orlandi, Silvia; Pizzirusso, Antonio; Ricci, Matteo; Roscioni, Otello Maria; Zannoni, Claudio

    2014-01-01

    The molecular organization of functional organic materials is one of the research areas where the combination of theoretical modeling and experimental determinations is most fruitful. Here we present a brief summary of the simulation approaches used to investigate the inner structure of organic materials with semiconducting behavior, paying special attention to applications in organic photovoltaics and clarifying the often obscure jargon hindering the access of newcomers to the literature of the field. Special attention is paid to the choice of the computational "engine" (Monte Carlo or Molecular Dynamics) used to generate equilibrium configurations of the molecular system under investigation and, more importantly, to the choice of the chemical details in describing the molecular interactions. Recent literature dealing with the simulation of organic semiconductors is critically reviewed in order of increasing complexity of the system studied, from low molecular weight molecules to semiflexible polymers, including the challenging problem of determining the morphology of heterojunctions between two different materials.

  8. Electric double layer capacitance of restricted primitive model for an ionic fluid in slit-like nanopores: A density functional approach

    NASA Astrophysics Data System (ADS)

    Pizio, O.; Sokołowski, S.; Sokołowska, Z.

    2012-12-01

    We apply a recently developed version of density functional theory [Z. Wang, L. Liu, and I. Neretnieks, J. Phys.: Condens. Matter 23, 175002 (2011)], 10.1088/0953-8984/23/17/175002 to study the adsorption of a restricted primitive model for an ionic fluid in slit-like pores in the absence of interactions induced by electrostatic images. At present, this approach is one of the most accurate theories for such model electric double layers. The dependencies of the differential double-layer capacitance on the pore width, the electrostatic potential at the wall, the bulk fluid density, and the temperature are obtained. We show that the differential capacitance can oscillate as a function of the pore width, depending on the values of the above parameters. The number of oscillations and their magnitude decrease for high values of the electrostatic potential. For very narrow pores, with widths close to the ion diameter, the differential capacitance tends to a minimum. The dependence of the differential capacitance on temperature exhibits a maximum at different values of the bulk fluid density and applied electrostatic potential.

  9. Electric double layer capacitance of restricted primitive model for an ionic fluid in slit-like nanopores: A density functional approach.

    PubMed

    Pizio, O; Sokołowski, S; Sokołowska, Z

    2012-12-21

    We apply a recently developed version of density functional theory [Z. Wang, L. Liu, and I. Neretnieks, J. Phys.: Condens. Matter 23, 175002 (2011)] to study the adsorption of a restricted primitive model for an ionic fluid in slit-like pores in the absence of interactions induced by electrostatic images. At present, this approach is one of the most accurate theories for such model electric double layers. The dependencies of the differential double-layer capacitance on the pore width, the electrostatic potential at the wall, the bulk fluid density, and the temperature are obtained. We show that the differential capacitance can oscillate as a function of the pore width, depending on the values of the above parameters. The number of oscillations and their magnitude decrease for high values of the electrostatic potential. For very narrow pores, with widths close to the ion diameter, the differential capacitance tends to a minimum. The dependence of the differential capacitance on temperature exhibits a maximum at different values of the bulk fluid density and applied electrostatic potential.

  10. Modeling Green's Function Errors through a Statistical Approach: Application to the 2009 Mw 6.1 L'Aquila, Italy, Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Scognamiglio, L.; Cirella, A.; Tinti, E.; Spudich, P.

    2016-12-01

    Seismic data can be used to infer the rupture evolution of moderate-to-large earthquakes. Researchers often retrieve significantly different rupture models for a single event, even though their solutions match the data acceptably well. For this reason, it is important to estimate the reliability of inferred rupture models. One of the main sources of error in such inversions is the inaccuracy of the theoretical Green's functions (GFs). In this work we propose a quantitative approach to modeling this source of uncertainty, and we analyze the 2009 L'Aquila, Italy, main shock and aftershocks as a case study. In order to measure the errors in theoretical GFs, we assume that the observed ground motions from small aftershocks located on the fault surface of the Mw 6.1 main shock are true point-dislocation GFs. Our erroneous theoretical GFs have been computed using a frequency-wavenumber code in a regionally calibrated velocity structure. The error in a theoretical GF for a particular point-source location, observation station, and component of motion is taken to be the complex difference between the Fourier spectrum of the aftershock seismogram and that of the theoretical GF multiplied by the aftershock's moment. The distributions of the real and imaginary parts of the errors are characterized by an "S" curve in normal probability plots; that is, these distributions are not Gaussian but rather `heavy-tailed'. The observed distributions are consistent with a model in which the errors in the theoretical GFs have a normal probability density function (PDF) with σT depending on frequency and component of motion, and the erroneous seismic moments have a log-normal PDF with a standard deviation σM = ln(3). We have developed a semi-analytic expression for the PDF of the complex difference data. The results obtained provide a new quantitative tool for finite-fault kinematic inversion, seismic moment determination, shake-map generation, and ground motion prediction.
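
The heavy-tailed character of this error model can be illustrated by Monte Carlo: a normally distributed GF error scaled by a log-normally distributed moment factor (σM = ln 3, as above) yields a distribution with large excess kurtosis, i.e., far from Gaussian. The σT value below is an arbitrary placeholder, and this product form is only a sketch of the mechanism, not the paper's semi-analytic PDF:

```python
import math
import random

random.seed(7)

sigma_T = 1.0          # placeholder GF-error standard deviation
sigma_M = math.log(3)  # log-normal moment spread, as in the abstract

# Product of a Gaussian error and a log-normal moment factor.
n = 200000
samples = [random.gauss(0.0, sigma_T) * random.lognormvariate(0.0, sigma_M)
           for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
kurt = sum((s - mean) ** 4 for s in samples) / n / var ** 2 - 3.0
print(f"excess kurtosis: {kurt:.1f}")  # a Gaussian would give ~0
```

The large positive excess kurtosis corresponds to the "S" shape seen in the normal probability plots.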

  11. An iterative approach of protein function prediction

    PubMed Central

    2011-01-01

    Background Current approaches to predicting protein functions from a protein-protein interaction (PPI) dataset are based on the assumption that the available functions of annotated proteins will determine the functions of proteins whose functions are not yet known (un-annotated proteins). Protein function prediction is therefore a mono-directional, one-off procedure: from annotated proteins to un-annotated proteins. However, the interactions between proteins are mutual rather than static and mono-directional, even though the functions of some proteins are currently unknown. This means that when a similarity-based approach is used to predict the functions of un-annotated proteins, those proteins, once their functions are predicted, will affect the similarities between proteins, which in turn will affect the prediction results. In other words, function prediction is a dynamic and mutual procedure. This dynamic feature of protein interactions, however, was not considered in existing prediction algorithms. Results In this paper, we propose a new approach that predicts protein functions iteratively. This iterative approach incorporates the dynamic and mutual features of PPI interactions, as well as the local and global semantic influence of protein functions, into the prediction. To guarantee that functions can be predicted iteratively, we propose a new protein similarity derived from protein functions. We adapt new evaluation metrics to evaluate the prediction quality of our algorithm and other similar algorithms. Experiments on real PPI datasets were conducted to evaluate the effectiveness of the proposed approach in predicting unknown protein functions. Conclusions The iterative approach is more likely to reflect the real biological nature between proteins when predicting functions. A proper definition of protein similarity from protein functions is the key to predicting functions iteratively. The

  12. Approaches to the Hubbard Model

    NASA Astrophysics Data System (ADS)

    Maguire, Cary Mcilwaine, Jr.

    This thesis analyzes several theoretical approaches to the one-band Hubbard model in hopes of extracting selected physical quantities in limits most closely corresponding to real materials. Along the way, three rather remarkable theorems of much broader scope are proven. It is hoped that these may be of general interest in a variety of related physical and mathematical disciplines. In chapter one, the well-known mean field theory developed by Affleck and Marston is studied in the presence of a magnetic field. Through a rather straightforward numerical procedure, phase diagrams in t/δ space are generated as a function of field. The results of this study are then extended to a magnetic susceptibility calculation and to the analysis of the phase diagram of an alternate mean field theory, the "generalized flux phases" proposed by Anderson. Several interesting properties and symmetries of the solutions are then briefly discussed. In chapter two, the Gutzwiller projector is analyzed both analytically and numerically, with the results being used to calculate the momentum density function for a trial wavefunction also proposed by Anderson. Two of the above-mentioned theorems are developed in this chapter: one prescribing the expansion of a general restricted sum in terms of its related unrestricted sums, and the other presenting the exact diagonalization of a component of the projector that is equivalent, through a U(1) gauge transformation, to the total spin operator. In chapter three, we discuss the exact solutions to the one-dimensional Hubbard model first derived by Lieb and Wu. From their large-U limiting behavior, we extract the phonon scattering matrix elements and first-order single-particle energies for some finite systems. The third potentially general theorem, which relates charge determinants with an arbitrary number of "gaps" between their rows to a comparatively simple function of the corresponding Vandermonde determinants, is proven here.

  13. Analysis of radial basis function interpolation approach

    NASA Astrophysics Data System (ADS)

    Zou, You-Long; Hu, Fa-Long; Zhou, Can-Can; Li, Chao-Liu; Dunn, Keh-Jim

    2013-12-01

    The radial basis function (RBF) interpolation approach proposed by Freedman is used to solve inverse problems encountered in well-logging and other petrophysical issues. The approach predicts petrophysical properties on the basis of laboratory physical rock datasets, which include the formation factor, viscosity, permeability, and molecular composition. However, this approach does not consider the effect of the spatial distribution of the calibration data on the interpolation result. This study proposes a new RBF interpolation approach, based on Freedman's, in which the unit basis functions are uniformly distributed in the space domain. The inverse results of the two approaches are comparatively analyzed using our datasets. We find that although the interpolation performance of the two approaches is equivalent, the new approach is more flexible and helps reduce the number of basis functions when the database is large, which simplifies the interpolation function expression. However, the predictions for the central data are less satisfactory when the data clusters are far apart.
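
For orientation, a minimal sketch of the baseline (Freedman-style) scheme with one Gaussian basis function per calibration point; the 1-D data and kernel width are hypothetical. The approach proposed above would instead place the basis functions uniformly in space and fit their weights, e.g., by least squares:

```python
import math

def rbf(r, s=1.0):
    """Gaussian radial basis function with width s."""
    return math.exp(-(r / s) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (for small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical 1-D calibration data (log response -> rock property).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 0.5, 1.5]

# One basis function centered on each calibration point.
A = [[rbf(abs(xi - xj)) for xj in xs] for xi in xs]
w = solve(A, ys)

def predict(x):
    return sum(wj * rbf(abs(x - xj)) for wj, xj in zip(w, xs))

# Interpolation reproduces the calibration data exactly.
print([round(predict(x), 6) for x in xs])
```

With uniform centers the system matrix depends only on the grid, not on where the calibration samples cluster, which is the flexibility the new approach trades against accuracy between distant clusters.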

  14. The Functional Approach to Curriculum Design.

    ERIC Educational Resources Information Center

    Yalden, Janice

    1979-01-01

    Reviews Wilkins' approach to curriculum design as presented in his "Notional Syllabuses." Discusses three components of the language teaching-language learning process: the semantic, the functional, and the formal component, showing how Wilkins' analytic approach implies a semantic rather than a grammatical syllabus, based on learners'…

  15. Functional Risk Modeling for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed

    2010-01-01

    We introduce an approach to risk modeling that we call functional modeling, which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems' constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability, the effects of diverse backup, which often exists when two or more independent elements are connected together, are properly accounted for.
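
The functional-diversity credit reduces to a one-line probability calculation: if a function can be provided by any of several independent elements, its availability is one minus the product of the individual unavailabilities. A sketch with hypothetical element-level availabilities:

```python
from math import prod

def functional_availability(element_avail):
    """Availability of a function when any one of several independent
    elements can provide it: 1 - product of individual unavailabilities."""
    return 1.0 - prod(1.0 - a for a in element_avail)

# Hypothetical availabilities for a 'mobility' function provided by
# a single rover vs. two independently operable rovers.
single = functional_availability([0.90])
diverse = functional_availability([0.90, 0.85])
print(f"single rover: {single:.3f}, with diverse backup: {diverse:.3f}")
```

Tracking the function rather than the element is what lets the model award this credit whenever redundant providers happen to be connected.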

  16. Elucidating the functional relationship between working memory capacity and psychometric intelligence: a fixed-links modeling approach for experimental repeated-measures designs.

    PubMed

    Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan

    2015-01-01

    Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ in respect to how close these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) as well as the constant processes involved in the task (β = .45) were related to Gf. Taken

  17. Functional capacity evaluation: an empirical approach.

    PubMed

    Jette, A M

    1980-02-01

    This paper presents an empirical approach to selecting activities of daily living (ADL) to assess the functional capacity of noninstitutionalized individuals with polyarticular disability. The results of structural analyses illustrate the feasibility of substantially reducing the task of assessing functional capacity with a subset of ADL items without sacrificing the comprehensiveness of the assessment. The analyses reveal 5 common functional categories: physical mobility, transfers, home chores, kitchen chores, and personal care, which account for over 50% of the variance in the data.

  18. An inverse approach for elucidating dendritic function.

    PubMed

    Torben-Nielsen, Benjamin; Stiefel, Klaus M

    2010-01-01

    We outline an inverse approach for investigating dendritic function-structure relationships by optimizing dendritic trees for a priori chosen computational functions. The inverse approach can be applied in two different ways. First, we can use it as a "hypothesis generator" in which we optimize dendrites for a function of general interest. The optimization yields an artificial dendrite that is subsequently compared to real neurons. This comparison potentially allows us to propose hypotheses about the function of real neurons. In this way, we investigated dendrites that optimally perform input-order detection. Second, we can use it as a "function confirmation" by optimizing dendrites for functions hypothesized to be performed by classes of neurons. If the optimized, artificial, dendrites resemble the dendrites of real neurons the artificial dendrites corroborate the hypothesized function of the real neuron. Moreover, properties of the artificial dendrites can lead to predictions about yet unmeasured properties. In this way, we investigated wide-field motion integration performed by the VS cells of the fly visual system. In outlining the inverse approach and two applications, we also elaborate on the nature of dendritic function. We furthermore discuss the role of optimality in assigning functions to dendrites and point out interesting future directions.

  19. General derivation of the Green's functions for the atomic approach of the Anderson model: application to a single electron transistor (SET)

    NASA Astrophysics Data System (ADS)

    Foglio, M. E.; Lobo, T.; Figueira, M. S.

    2012-09-01

    We consider the cumulant expansion of the periodic Anderson model (PAM) in the case of a finite electronic correlation U, employing the hybridization as perturbation, and obtain a formal expression of the exact one-electron Green's function (GF). This expression contains effective cumulants that are as difficult to calculate as the original GF, and the atomic approach consists in substituting the effective cumulants by those that correspond to the atomic case, namely by taking a conduction band of zero width and local hybridization. In a previous work (T. Lobo, M. S. Figueira, and M. E. Foglio, Nanotechnology 21, 274007 (2010), 10.1088/0957-4484/21/27/274007) we developed the atomic approach by considering only one variational parameter, used to adjust the height of the Kondo peak by imposing the satisfaction of the Friedel sum rule. To obtain the correct width of the Kondo peak, in the present work we consider an additional variational parameter that guarantees this quantity. The two constraints now imposed on the formalism are the satisfaction of the Friedel sum rule and the correct Kondo temperature. In the first part of the work, we present a general derivation of the method for the single impurity Anderson model (SIAM), and we calculate several densities of states representative of the Kondo regime for finite correlation U, including the symmetrical case. In the second part, we apply the method to study electronic transport through a quantum dot (QD) embedded in a quantum wire (QW), which is realized experimentally by a single electron transistor (SET). We calculate the conductance of the SET and obtain good agreement with available experimental and theoretical results.

  20. Defining Function in the Functional Medicine Model.

    PubMed

    Bland, Jeffrey

    2017-02-01

    In the functional medicine model, the word function is aligned with the evolving understanding that disease is an endpoint and function is a process. Function can move both forward and backward. The vector of change in function through time is, in part, determined by the unique interaction of an individual's genome with their environment, diet, and lifestyle. The functional medicine model for health care is concerned less with what we call the dysfunction or disease, and more about the dynamic processes that resulted in the person's dysfunction. The previous concept of functional somatic syndromes as psychosomatic in origin has now been replaced with a new concept of function that is rooted in the emerging 21st-century understanding of systems network-enabled biology.

  1. An Inverse Approach for Elucidating Dendritic Function

    PubMed Central

    Torben-Nielsen, Benjamin; Stiefel, Klaus M.

    2010-01-01

    We outline an inverse approach for investigating dendritic function–structure relationships by optimizing dendritic trees for a priori chosen computational functions. The inverse approach can be applied in two different ways. First, we can use it as a “hypothesis generator” in which we optimize dendrites for a function of general interest. The optimization yields an artificial dendrite that is subsequently compared to real neurons. This comparison potentially allows us to propose hypotheses about the function of real neurons. In this way, we investigated dendrites that optimally perform input-order detection. Second, we can use it as a “function confirmation” by optimizing dendrites for functions hypothesized to be performed by classes of neurons. If the optimized, artificial, dendrites resemble the dendrites of real neurons the artificial dendrites corroborate the hypothesized function of the real neuron. Moreover, properties of the artificial dendrites can lead to predictions about yet unmeasured properties. In this way, we investigated wide-field motion integration performed by the VS cells of the fly visual system. In outlining the inverse approach and two applications, we also elaborate on the nature of dendritic function. We furthermore discuss the role of optimality in assigning functions to dendrites and point out interesting future directions. PMID:21258425

  2. Determining the mechanical constitutive properties of metals as a function of strain rate and temperature: A combined experimental and modeling approach; Progress Report for 2004

    SciTech Connect

    I. Robertson; A. Beaudoin; J. Lambros

    2005-01-31

    Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scales between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems.
One aspect of the program will involve the direct observation

  3. FINAL PROJECT REPORT DOE Early Career Principal Investigator Program Project Title: Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach

    SciTech Connect

    Shankar Subramaniam

    2009-04-01

    This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.

  4. From data to function: functional modeling of poultry genomics data.

    PubMed

    McCarthy, F M; Lyons, E

    2013-09-01

    One of the challenges of functional genomics is to create a better understanding of the biological system being studied so that the data produced are leveraged to provide gains for agriculture, human health, and the environment. Functional modeling enables researchers to make sense of these data as it reframes a long list of genes or gene products (mRNA, ncRNA, and proteins) by grouping based upon function, be it individual molecular functions or interactions between these molecules or broader biological processes, including metabolic and signaling pathways. However, poultry researchers have been hampered by a lack of functional annotation data, tools, and training to use these data and tools. Moreover, this lack is becoming more critical as new sequencing technologies enable us to generate data not only for an increasingly diverse range of species but also individual genomes and populations of individuals. We discuss the impact of these new sequencing technologies on poultry research, with a specific focus on what functional modeling resources are available for poultry researchers. We also describe key strategies for researchers who wish to functionally model their own data, providing background information about functional modeling approaches, the data and tools to support these approaches, and the strengths and limitations of each. Specifically, we describe methods for functional analysis using Gene Ontology (GO) functional summaries, functional enrichment analysis, and pathways and network modeling. As annotation efforts begin to provide the fundamental data that underpin poultry functional modeling (such as improved gene identification, standardized gene nomenclature, temporal and spatial expression data and gene product function), tool developers are incorporating these data into new and existing tools that are used for functional modeling, and cyberinfrastructure is being developed to provide the necessary extendibility and scalability for storing and
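    One of the functional modeling approaches named above, functional enrichment analysis, is commonly computed as a hypergeometric over-representation test. The sketch below shows that calculation for a single GO term; all gene counts are hypothetical, and the abstract does not prescribe this particular implementation.

```python
from scipy.stats import hypergeom

# Hypothetical counts for a single GO term (illustrative only):
M = 15000   # genes in the annotated poultry background set
n = 200     # background genes annotated with this GO term
N = 300     # genes in the submitted (e.g. differentially expressed) list
k = 12      # submitted genes annotated with this GO term

# One-sided over-representation p-value: P(X >= k) when drawing N genes
# without replacement from a pool of M genes containing n "successes".
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"expected {N * n / M:.1f} annotated genes by chance, observed {k}, p = {p_value:.2e}")
```

    In practice this test is repeated for every GO term and the resulting p-values are adjusted for multiple testing (e.g. Benjamini-Hochberg).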

  5. A functional approach to the TMJ disorders.

    PubMed

    Deodato, F; Cristiano, S; Trusendi, R; Giorgetti, R

    2003-01-01

    This manuscript describes our conservative approach to the treatment of TMJ disorders. The method we use was suggested by Rocabado; its aims are joint distraction through the elimination of compression, restoration of physiologic articular rest, mobilization of the soft tissues, and, whenever possible, improvement of the condyle-disk-glenoid fossa relationship. To support these claims, two clinical cases are presented in which the non-invasive therapy was successful. The results obtained confirm the validity of this functional approach.

  6. Detection of Differential Item Functioning Using the Lasso Approach

    ERIC Educational Resources Information Center

    Magis, David; Tuerlinckx, Francis; De Boeck, Paul

    2015-01-01

    This article proposes a novel approach to detect differential item functioning (DIF) among dichotomously scored items. Unlike standard DIF methods that perform an item-by-item analysis, we propose the "LR lasso DIF method": a logistic regression (LR) model is formulated for all item responses. The model contains item-specific intercepts,…

  7. Quadratic function approaching method for magnetotelluric sounding data inversion

    SciTech Connect

    Liangjun, Yan; Wenbao, Hu; Zhang, Keni

    2004-04-05

    The quadratic function approaching method (QFAM) is introduced for magnetotelluric (MT) sounding data inversion. The method takes advantage of the fact that a quadratic function has a single extreme value, which avoids converging on a local minimum and ensures global minimization of the objective function. The method requires neither calculation of a sensitivity matrix nor a strict initial earth model. Examples with synthetic data and field measurement data indicate that the proposed inversion method is effective.
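    The abstract gives no formulas, but the core idea, that a quadratic has a single extreme value whose location is available in closed form, can be sketched in one dimension. The misfit function below is hypothetical, and this is a generic sketch of the idea rather than the published multidimensional MT inversion scheme.

```python
import numpy as np

def quadratic_step(f, m1, m2, m3):
    """Fit a parabola through three trial models and return its vertex.

    A quadratic has a single extreme value, so the vertex is a natural
    next iterate; no sensitivity matrix is needed.
    """
    a, b, _c = np.polyfit([m1, m2, m3], [f(m1), f(m2), f(m3)], 2)
    return -b / (2.0 * a)          # vertex of a*m**2 + b*m + c

# Hypothetical scalar misfit with its minimum at m = 3.0
misfit = lambda m: (m - 3.0) ** 2 + 1.0
m_next = quadratic_step(misfit, 0.0, 1.0, 5.0)
print(m_next)  # recovers the minimizer (up to roundoff) for a quadratic misfit
```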

  8. New Experiments and a Model-Driven Approach for Interpreting Middle Stone Age Lithic Point Function Using the Edge Damage Distribution Method

    PubMed Central

    Schoville, Benjamin J.; Brown, Kyle S.; Harris, Jacob A.; Wilkins, Jayne

    2016-01-01

    The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, one of the most visible aspects of MSA technologies are unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages—Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape consistent with armature tips at KP1, and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed. PMID:27736886

  9. Model dielectric functions and conservation laws

    NASA Astrophysics Data System (ADS)

    Shirley, Eric L.

    2003-03-01

    There continues to be a need for calculating dielectric screening of charges in solids. Most work has been done in the random-phase approximation (RPA) with minor variations, which proves to be quite accurate for many applications. However, this is still a time-consuming and computationally intensive approach, and model dielectric functions can be valuable for this reason. This talk discusses several conservation laws related to dielectric screening and a model dielectric function that obeys such laws. Shortcomings of model functions that are difficult to overcome will be touched on, and a possible means of combining results from RPA and model calculations will be addressed.

  10. Clinical Psychology Ph.D. Program Admissions: Differential Values as a Function of Program Characteristics and the Implications of the Mentor-Model Approach

    ERIC Educational Resources Information Center

    Metzger, Jesse A.

    2010-01-01

    The aims of this research were to 1) examine the qualities for which applicants are selected for entrance into clinical psychology Ph.D. programs, and 2) investigate the prevalence and impact of the mentor-model approach to admissions on multiple domains of programs and the field at large. Fifty Directors of Clinical Training (DCTs) provided data…

  11. Linearized path integral approach for calculating nonadiabatic time correlation functions.

    PubMed

    Bonella, Sara; Montemayor, Daniel; Coker, David F

    2005-05-10

    We show that quantum time correlation functions including electronically nonadiabatic effects can be computed by using an approach in which their path integral expression is linearized in the difference between forward and backward nuclear paths while the electronic component of the amplitude, represented in the mapping formulation, can be computed exactly, leading to classical-like equations of motion for all degrees of freedom. The efficiency of this approach is demonstrated in some simple model applications.

  12. Modeling Protein Domain Function

    ERIC Educational Resources Information Center

    Baker, William P.; Jones, Carleton "Buck"; Hull, Elizabeth

    2007-01-01

    This simple but effective laboratory exercise helps students understand the concept of protein domain function. They use foam beads, Styrofoam craft balls, and pipe cleaners to explore how domains within protein active sites interact to form a functional protein. The activity allows students to gain content mastery and an understanding of the…

  14. Air Pollution and Lung Function in Dutch Children: A Comparison of Exposure Estimates and Associations Based on Land Use Regression and Dispersion Exposure Modeling Approaches.

    PubMed

    Wang, Meng; Gehring, Ulrike; Hoek, Gerard; Keuken, Menno; Jonkers, Sander; Beelen, Rob; Eeftens, Marloes; Postma, Dirkje S; Brunekreef, Bert

    2015-08-01

    There is limited knowledge about the extent to which estimates of air pollution effects on health are affected by the choice of a specific exposure model. We aimed to evaluate the correlation between long-term air pollution exposure estimates from two commonly used exposure modeling techniques [dispersion and land use regression (LUR) models] and, in addition, to compare the estimates of the association between long-term exposure to air pollution and lung function in children using these exposure modeling techniques. We used data from 1,058 participants of a Dutch birth cohort study with forced expiratory volume in 1 sec (FEV1), forced vital capacity (FVC), and peak expiratory flow (PEF) measured at 8 years of age. For each child, annual average outdoor air pollution exposure [nitrogen dioxide (NO2), mass concentration of particulate matter with diameters ≤ 2.5 and ≤ 10 μm (PM2.5, PM10), and PM2.5 soot] was estimated for the current addresses of the participants by a dispersion and a LUR model. Associations between exposures to air pollution and lung function parameters were estimated using linear regression analysis with confounder adjustment. Correlations between LUR- and dispersion-modeled pollutant concentrations were high for NO2, PM2.5, and PM2.5 soot (R = 0.86-0.90) but low for PM10 (R = 0.57). Associations with lung function were similar for air pollutant exposures estimated using LUR and dispersion modeling, except for associations of PM2.5 with FEV1 and FVC, which were stronger but less precise for exposures based on LUR compared with the dispersion model. Predictions from LUR and dispersion models correlated very well for PM2.5, NO2, and PM2.5 soot but not for PM10. Health effect estimates did not depend on the type of model used to estimate exposure in a population of Dutch children.

  15. Brain Functioning Models for Learning.

    ERIC Educational Resources Information Center

    Tipps, Steve; And Others

    This paper describes three models of brain function, each of which contributes to an integrated understanding of human learning. The first model, the up-and-down model, emphasizes the interconnection between brain structures and functions, and argues that since physiological, emotional, and cognitive responses are inseparable, the learning context…

  16. A Bayesian geostatistical transfer function approach to tracer test analysis

    NASA Astrophysics Data System (ADS)

    Fienen, Michael N.; Luo, Jian; Kitanidis, Peter K.

    2006-07-01

    Reactive transport modeling is often used in support of bioremediation and chemical treatment planning and design. There remains a pressing need for practical and efficient models that do not require (or assume attainable) the high level of characterization needed by complex numerical models. We focus on a linear systems or transfer function approach to the problem of reactive tracer transport in a heterogeneous saprolite aquifer. Transfer functions are obtained through the Bayesian geostatistical inverse method applied to tracer injection histories and breakthrough curves. We employ nonparametric transfer functions, which require minimal assumptions about shape and structure. The resulting flexibility empowers the data to determine the nature of the transfer function with minimal prior assumptions. Nonnegativity is enforced through a reflected Brownian motion stochastic model. The inverse method enables us to quantify uncertainty and to generate conditional realizations of the transfer function. Complex information about a hydrogeologic system is distilled into a relatively simple but rigorously obtained function that describes the transport behavior of the system between two wells. The resulting transfer functions are valuable in reactive transport models based on traveltime and streamline methods. The information contained in the data, particularly in the case of strong heterogeneity, is not overextended but is fully used. This is the first application of Bayesian geostatistical inversion to transfer functions in hydrogeology but the methodology can be extended to any linear system.
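    The linear-systems view described here treats the breakthrough curve as the convolution of the injection history with a nonnegative transfer function. The sketch below recovers a synthetic transfer function from a synthetic breakthrough curve; note that it substitutes a plain nonnegative least-squares solve for the paper's Bayesian geostatistical inversion (with its reflected-Brownian-motion model and uncertainty quantification), and all numbers are made up.

```python
import numpy as np
from scipy.optimize import nnls

# Discrete linear system: breakthrough = injection (*) g, with g a
# nonnegative transfer function (a travel-time density between two wells).
nt, ng = 60, 20
t = np.arange(nt)
g_true = np.exp(-0.5 * (np.arange(ng) - 6.0) ** 2 / 4.0)
g_true /= g_true.sum()                      # unit-mass travel-time density
injection = (t < 10).astype(float)          # hypothetical 10-step pulse input

# Build the convolution (Toeplitz) matrix A so that A @ g = breakthrough,
# i.e. A[i, j] = injection[i - j] for i >= j.
A = np.zeros((nt, ng))
for j in range(ng):
    A[j:, j] = injection[: nt - j]
breakthrough = A @ g_true

g_est, residual = nnls(A, breakthrough)     # nonnegativity enforced
# Essentially exact recovery for this noise-free synthetic example:
print(np.max(np.abs(g_est - g_true)))
```

    With noisy field data the problem becomes ill-posed, which is where the paper's prior model and conditional realizations of the transfer function come in.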

  17. Functional genomics approaches in parasitic helminths.

    PubMed

    Hagen, J; Lee, E F; Fairlie, W D; Kalinna, B H

    2012-01-01

    As research on parasitic helminths is moving into the post-genomic era, an enormous effort is directed towards deciphering gene function and to achieve gene annotation. The sequences that are available in public databases undoubtedly hold information that can be utilized for new interventions and control but the exploitation of these resources has until recently remained difficult. Only now, with the emergence of methods to genetically manipulate and transform parasitic worms will it be possible to gain a comprehensive understanding of the molecular mechanisms involved in nutrition, metabolism, developmental switches/maturation and interaction with the host immune system. This review focuses on functional genomics approaches in parasitic helminths that are currently used, to highlight potential applications of these technologies in the areas of cell biology, systems biology and immunobiology of parasitic helminths. © 2011 Blackwell Publishing Ltd.

  18. An approach to metering and network modeling

    SciTech Connect

    Adibi, M.M.; Clements, K.A.; Kafka, R.J.; Stovall, J.P.

    1992-01-01

    Estimation of the static state of an electric power network has become a standard function in real-time monitoring and control. Its purpose is to use the network model and process the metering data in order to determine an accurate and reliable estimate of the system state in the real-time environment. The models usually used assume that the network parameters and topology are free of errors and that the measurement system provides unbiased data having a known distribution. The network and metering models, however, contain errors which frequently result in either non-convergent behavior of the state estimator or exceedingly large residuals, reducing the level of confidence in the results. This paper describes an approach that minimizes the above uncertainties by analyzing the data which are routinely collected at the power system control center. The approach will improve the reliability of the real-time database while reducing the state estimator installation and maintenance effort.

  19. A wave-function based approach for polarizable charge model: Systematic comparison of polarization effects on protic, aprotic, and ionic liquids.

    PubMed

    Nakano, Hiroshi; Yamamoto, Takeshi; Kato, Shigeki

    2010-01-28

    We first describe a wave-function based formalism of polarizable charge model by starting from the Hartree product ansatz for the total wave function and making the second-order expansion of individual molecular energies with the use of partial charge operators. The resulting model is shown to be formally equivalent to the charge response kernel model that starts from the linear-response approximation to partial charges, and also closely related to a family of fluctuating charge models that are based on the electronegativity equalization principle. We then apply the above model to a systematic comparison of polarization effects on qualitatively different liquids, namely, protic solvents (water and methanol), an aprotic polar solvent (acetonitrile), and imidazolium-based ionic liquids. Electronic polarization is known to decelerate molecular motions in conventional solvents while it accelerates them in ionic liquids. To obtain more insights into these phenomena, we consider an effective decomposition of total polarization energy into molecular contributions, and show that their statistical distribution is well-correlated with the acceleration/deceleration of molecular motions. In addition, we perform effective nonpolarizable simulations based on mean polarized charges, and compare them with fully polarizable simulations. The result shows that the former can reproduce structural properties of conventional solvents rather accurately, while they fail qualitatively to reproduce acceleration of molecular motions in ionic liquids.

  20. Pharmacological approaches to restore mitochondrial function

    PubMed Central

    Andreux, Pénélope A.; Houtkooper, Riekelt H.; Auwerx, Johan

    2014-01-01

    Mitochondrial dysfunction is not only a hallmark of rare inherited mitochondrial disorders, but is also implicated in age-related diseases, including those that affect the metabolic and nervous system, such as type 2 diabetes and Parkinson’s disease. Numerous pathways maintain and/or restore proper mitochondrial function, including mitochondrial biogenesis, mitochondrial dynamics, mitophagy, and the mitochondrial unfolded protein response. New and powerful phenotypic assays in cell-based models, as well as multicellular organisms, have been developed to explore these different aspects of mitochondrial function. Modulating mitochondrial function has therefore emerged as an attractive therapeutic strategy for a range of diseases, which has spurred active drug discovery efforts in this area. PMID:23666487

  1. Modeling NMR lineshapes using logspline density functions.

    PubMed

    Raz, J; Fernandez, E J; Gillespie, J

    1997-08-01

    Distortions in the FID and spin echo due to magnetic field inhomogeneity are proved to have a representation as the characteristic function of some probability distribution. In the special case that the distribution is Cauchy, the model reduces to the conventional Lorentzian model. A more general and flexible representation is presented using the Fourier transform of a logspline density. An algorithm for fitting the model is described, the performance of the model and algorithm is investigated in applications to real and simulated data sets, and the logspline approach is compared to a previous Hermitian spline approach and to the Lorentzian model. The logspline model is more parsimonious than the Hermitian spline model, provides a better fit to real data, and is much less biased than the Lorentzian model.
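    The special case mentioned here is easy to check numerically: the characteristic function of a Cauchy distribution with scale γ is exp(-γ|t|), i.e. exactly the exponentially decaying FID envelope whose Fourier transform is the Lorentzian lineshape. A Monte Carlo sketch follows; the sample size and seed are arbitrary choices, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0

# Monte Carlo estimate of the Cauchy(0, gamma) characteristic function
# E[exp(i*t*X)].  Although X has no mean, cos(t*X) is bounded, so this
# estimator has finite variance and converges normally.
x = gamma * rng.standard_cauchy(2_000_000)
estimates = {t: np.cos(t * x).mean() for t in (0.5, 1.0, 2.0)}

for t, est in estimates.items():
    # By symmetry the imaginary part averages to zero; the real part
    # should match the Lorentzian-model decay exp(-gamma*|t|).
    print(f"t={t}: MC estimate {est:.4f}  vs  exp(-gamma*t) {np.exp(-gamma * t):.4f}")
```

    The logspline model generalizes this by replacing the Cauchy density with a flexible density whose Fourier transform is fitted to the data.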

  2. HEDR modeling approach: Revision 1

    SciTech Connect

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.

  3. A Transfer Learning Approach for Network Modeling

    PubMed Central

    Huang, Shuai; Li, Jing; Chen, Kewei; Wu, Teresa; Ye, Jieping; Wu, Xia; Yao, Li

    2012-01-01

    Network models have been widely used in many domains to characterize the interacting relationships between physical entities. A typical problem faced is to identify the networks of multiple related tasks that share some similarities. In this case, a transfer learning approach that can leverage the knowledge gained during the modeling of one task to help better model another task is highly desirable. In this paper, we propose a transfer learning approach, which adopts a Bayesian hierarchical model framework to characterize task relatedness and additionally uses L1-regularization to ensure robust learning of the networks with limited sample sizes. A method based on the Expectation-Maximization (EM) algorithm is further developed to learn the networks from data. Simulation studies are performed, which demonstrate the superiority of the proposed transfer learning approach over single-task learning that learns the network of each task in isolation. The proposed approach is also applied to identification of brain connectivity networks of Alzheimer's disease (AD) from functional magnetic resonance imaging (fMRI) data. The findings are consistent with the AD literature. PMID:24526804

  4. Computational Models for Neuromuscular Function

    PubMed Central

    Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.

    2011-01-01

    Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779

  5. Alternative Approach To Modeling Bacterial Lag Time, Using Logistic Regression as a Function of Time, Temperature, pH, and Sodium Chloride Concentration

    PubMed Central

    Nonaka, Junko

    2012-01-01

    The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5 ∼ 7.0) and salt concentrations (0.5 ∼ 2.0%) at static temperatures (10 ∼ 20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation. PMID:22729541
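
    The logistic-regression form of the λ model can be sketched as follows. The coefficients below are hypothetical placeholders for illustration, not the fitted values from the study.

```python
import numpy as np

# Hypothetical coefficients for illustration only; the study's fitted
# values are not reproduced here.
B0, B_TIME, B_TEMP, B_PH, B_NACL = -30.0, 0.05, 0.8, 2.0, -1.5

def p_lag_ended(t_hours, temp_c, ph, nacl_pct):
    """Probability that the lag phase (lambda) has ended at time t,
    as a logistic function of time, temperature, pH, and NaCl."""
    z = (B0 + B_TIME * t_hours + B_TEMP * temp_c
         + B_PH * ph + B_NACL * nacl_pct)
    return 1.0 / (1.0 + np.exp(-z))
```

    The probability rises with time and temperature and falls with salt concentration, matching the qualitative behaviour described above; combining it with a logistic differential equation for growth would then reproduce the paper's two-stage simulation scheme.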

  6. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
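
    The core idea of fitting with orthogonal modeling functions in a least-squares context can be sketched as follows. This is a simplified stand-in: QR factorization plays the role of generating the multivariate orthogonal functions from the independent variable data, and the automatic prediction-error-based term selection is not implemented.

```python
import numpy as np
from itertools import combinations_with_replacement

def build_terms(X, max_order=2):
    """Candidate multivariate polynomial regressors up to max_order,
    including a constant term."""
    n, d = X.shape
    cols = [np.ones(n)]
    for order in range(1, max_order + 1):
        for idx in combinations_with_replacement(range(d), order):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def orthogonal_fit(X, y, max_order=2):
    """Orthogonalize the candidate regressors (QR) and project the
    response onto them; because the columns of Q are orthonormal,
    each term's contribution to the fit is independent, which is what
    makes term-by-term selection possible."""
    A = build_terms(X, max_order)
    Q, _ = np.linalg.qr(A)
    coeffs = Q.T @ y  # projections onto the orthogonal functions
    return Q @ coeffs, coeffs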

  7. Modeling Approaches in Planetary Seismology

    NASA Technical Reports Server (NTRS)

    Weber, Renee; Knapmeyer, Martin; Panning, Mark; Schmerr, Nick

    2014-01-01

    Of the many geophysical means that can be used to probe a planet's interior, seismology remains the most direct. Given that the seismic data gathered on the Moon over 40 years ago revolutionized our understanding of the Moon and are still being used today to produce new insight into the state of the lunar interior, it is no wonder that many future missions, both real and conceptual, plan to take seismometers to other planets. To best facilitate the return of high-quality data from these instruments, as well as to further our understanding of the dynamic processes that modify a planet's interior, various modeling approaches are used to quantify parameters such as the amount and distribution of seismicity, tidal deformation, and seismic structure on and of the terrestrial planets. In addition, recent advances in wavefield modeling have permitted a renewed look at seismic energy transmission and the effects of attenuation and scattering, as well as the presence and effect of a core, on recorded seismograms. In this chapter, we will review these approaches.

  8. The functions of autobiographical memory: an integrative approach.

    PubMed

    Harris, Celia B; Rasmussen, Anne S; Berntsen, Dorthe

    2014-01-01

    Recent research in cognitive psychology has emphasised the uses, or functions, of autobiographical memory. Theoretical and empirical approaches have focused on a three-function model: autobiographical memory serves self, directive, and social functions. In the reminiscence literature other taxonomies and additional functions have been postulated. We examined the relationships between functions proposed by these literatures, in order to broaden conceptualisations and make links between research traditions. In Study 1 we combined two measures of individual differences in the uses of autobiographical memory. Our results suggested four classes of memory functions, which we labelled Reflective, Generative, Ruminative, and Social. In Study 2 we tested relationships between our four functions and broader individual differences, and found conceptually consistent relationships. In Study 3 we found that memories cued by Generative and Social functions were more emotionally positive than were memories cued by Reflective and Ruminative functions. In Study 4 we found that reported use of Generative functions increased across the lifespan, while reported use of the other three functions decreased. Overall our findings suggest a broader view of autobiographical memory functions that links them to ways in which people make meaning of their selves, their environment, and their social world more generally.

  9. Synchronization-based approach for detecting functional activation of brain

    NASA Astrophysics Data System (ADS)

    Hong, Lei; Cai, Shi-Min; Zhang, Jie; Zhuo, Zhao; Fu, Zhong-Qian; Zhou, Pei-Ling

    2012-09-01

    In this paper, we investigate a synchronization-based, data-driven clustering approach for the analysis of functional magnetic resonance imaging (fMRI) data, and specifically for detecting functional activation from fMRI data. We first define a new measure of similarity between all pairs of data points (i.e., time series of voxels) integrating both complete phase synchronization and amplitude correlation. These pairwise similarities are taken as the coupling between a set of Kuramoto oscillators, which in turn evolve according to a nearest-neighbor rule. As the network evolves, similar data points naturally synchronize with each other, and distinct clusters will emerge. The clustering behavior of the interaction network of the coupled oscillators, therefore, mirrors the clustering property of the original multiple time series. The clustered regions whose cross-correlation coefficients are much greater than other regions are considered as the functionally activated brain regions. The analysis of fMRI data in auditory and visual areas shows that the recognized brain functional activations are in complete correspondence with those from the general linear model of statistical parametric mapping, but with a significantly lower time complexity. We further compare our results with those from the traditional K-means approach, and find that our new clustering approach can distinguish between different response patterns more accurately and efficiently than the K-means approach, and is therefore more suitable for detecting functional activation from event-related experimental fMRI data.
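
    The synchronization-based clustering idea can be sketched with a minimal Kuramoto simulation. Plain correlation stands in here for the paper's combined phase/amplitude similarity measure, and the step count, coupling scheme, and parameter values are illustrative assumptions.

```python
import numpy as np

def kuramoto_cluster(series, steps=200, dt=0.1, k_neighbors=2):
    """One Kuramoto oscillator per time series, coupled through a
    similarity matrix; after the phases settle, similar series share
    (nearly) the same phase, so clusters can be read off the phases."""
    series = np.asarray(series)
    n = len(series)
    sim = np.abs(np.corrcoef(series))  # similarity as coupling weights
    np.fill_diagonal(sim, 0.0)
    # Nearest-neighbour rule: keep only each oscillator's strongest links.
    W = np.zeros_like(sim)
    for i in range(n):
        nn = np.argsort(sim[i])[-k_neighbors:]
        W[i, nn] = sim[i, nn]
    theta = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]  # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (W * np.sin(diff)).sum(axis=1)
    return np.mod(theta, 2.0 * np.pi)
```

    Strongly coupled (highly similar) time series pull each other's phases together, so voxels in the same functional cluster end up phase-locked while unrelated voxels do not.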

  10. Leveraging Modeling Approaches: Reaction Networks and Rules

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2012-01-01

    We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349

  12. Wavelet-based functional mixed models

    PubMed Central

    Morris, Jeffrey S.; Carroll, Raymond J.

    2009-01-01

    Summary Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects’ wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. PMID:19759841
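
    The wavelet-domain shrinkage that drives the adaptive regularization can be illustrated with a one-level Haar transform and soft thresholding. This is a minimal sketch: the paper's method is Bayesian, with nonlinear shrinkage priors and per-coefficient variance components rather than the fixed threshold assumed here.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform
    (x must have even length)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)  # smooth (approximation) coeffs
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coeffs
    return s, d

def haar_idwt(s, d):
    """Invert the one-level Haar transform."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def shrink_curve(x, thresh):
    """Soft-threshold the detail coefficients: small (noise-like)
    details are suppressed while large (feature-like) ones survive,
    which is how wavelet methods regularize adaptively near peaks."""
    s, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return haar_idwt(s, d)
```

    With `thresh=0` the round trip is exact, confirming the transform pair; increasing the threshold smooths the curve only where the local detail is small.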

  13. A Bayesian modeling approach for generalized semiparametric structural equation models.

    PubMed

    Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing

    2013-10-01

    In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types: continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.

  14. Functional phosphoproteomic mass spectrometry-based approaches

    PubMed Central

    2012-01-01

    Mass Spectrometry (MS)-based phosphoproteomics tools are crucial for understanding the structure and dynamics of signaling networks. Approaches such as affinity purification followed by MS have also been used to elucidate relevant biological questions in health and disease. The study of proteomes and phosphoproteomes as linked systems, rather than studies of individual proteins, is necessary to understand the functions of phosphorylated and un-phosphorylated proteins under spatial and temporal conditions. Phosphoproteome studies also facilitate drug target protein identification which may be clinically useful in the near future. Here, we provide an overview of general principles of signaling pathways versus phosphorylation. Likewise, we detail chemical phosphoproteomic tools, including pros and cons with examples of where these methods have been applied. In addition, the basics of electrospray ionization and collision-induced dissociation fragmentation are detailed in a simple manner for successful phosphoproteomic clinical studies. PMID:23369623

  15. Cytotoxicity towards CCO cells of imidazolium ionic liquids with functionalized side chains: preliminary QSTR modeling using regression and classification based approaches.

    PubMed

    Bubalo, Marina Cvjetko; Radošević, Kristina; Srček, Višnja Gaurina; Das, Rudra Narayan; Popelier, Paul; Roy, Kunal

    2015-02-01

    Within this work we evaluated the cytotoxicity towards the Channel Catfish Ovary (CCO) cell line of some imidazolium-based ionic liquids containing different functionalized and unsaturated side chains. The toxic effects were measured by the reduction of the WST-1 dye after 72 h exposure resulting in dose- and structure-dependent toxicities. The obtained data on cytotoxic effects of 14 different imidazolium ionic liquids in CCO cells, expressed as EC50 values, were used in a preliminary quantitative structure-toxicity relationship (QSTR) study employing regression- and classification-based approaches. The toxicity of ILs towards CCO was chiefly related to the shape and hydrophobicity parameters of cations. A significant influence of the quantum topological molecular similarity descriptor ellipticity (ε) of the imine bond was also observed.

  16. Ecosystem structure and function modeling

    USGS Publications Warehouse

    Humphries, H.C.; Baron, J.S.; Jensen, M.E.; Bourgeron, P.

    2001-01-01

    An important component of ecological assessments is the ability to predict and display changes in ecosystem structure and function over a variety of spatial and temporal scales. These changes can occur over short (less than 1 year) or long time frames (over 100 years). Models may emphasize structural responses (changes in species composition, growth forms, canopy height, amount of old growth, etc.) or functional responses (cycling of carbon, nutrients, and water). Both are needed to display changes in ecosystem components for use in robust ecological assessments. Structure and function models vary in the ecosystem components included, algorithms employed, level of detail, and spatial and temporal scales incorporated. They range from models that track individual organisms to models of broad-scale landscape changes. This chapter describes models appropriate for ecological assessments. The models selected for inclusion can be implemented in a spatial framework and for the most part have been run in more than one system.

  17. Green functions of graphene: An analytic approach

    NASA Astrophysics Data System (ADS)

    Lawlor, James A.; Ferreira, Mauro S.

    2015-04-01

    In this article we derive the lattice Green Functions (GFs) of graphene using a Tight Binding Hamiltonian incorporating both first and second nearest neighbour hoppings and allowing for a non-orthogonal electron wavefunction overlap. It is shown how the resulting GFs can be simplified from a double to a single integral form to aid computation, and that when considering off-diagonal GFs in the high symmetry directions of the lattice this single integral can be approximated very accurately by an algebraic expression. By comparing our results to the conventional first nearest neighbour model commonly found in the literature, it is apparent that the extended model leads to a sizeable change in the electronic structure away from the linear regime. As such, this article serves as a blueprint for researchers who wish to examine quantities where these considerations are important.

  18. Component Modeling Approach Software Tool

    SciTech Connect

    2010-08-23

    The Component Modeling Approach Software Tool (CMAST) establishes a set of performance libraries of approved components (frames, glass, and spacers) which can be accessed for configuring fenestration products for a project and obtaining a U-factor, Solar Heat Gain Coefficient (SHGC), and Visible Transmittance (VT) rating for those products, which can then be reflected in a CMA Label Certificate for code compliance. CMAST is web-based as well as client-based. The completed CMA program and software tool will be useful in several ways for a vast array of stakeholders in the industry: generating performance ratings for bidding projects; ascertaining credible and accurate performance data; and obtaining third-party certification of overall product performance for code compliance.

  19. Modelling approaches for evaluating multiscale tendon mechanics

    PubMed Central

    Fang, Fei; Lake, Spencer P.

    2016-01-01

    Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure–function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747

  20. Systematic approach for modeling tetrachloroethene biodegradation

    SciTech Connect

    Bagley, D.M.

    1998-11-01

    The anaerobic biodegradation of tetrachloroethene (PCE) is a reasonably well understood process. Specific organisms capable of using PCE as an electron acceptor for growth require the addition of an electron donor to remove PCE from contaminated ground waters. However, competition from other anaerobic microorganisms for the added electron donor will influence the rate and completeness of PCE degradation. The approach developed here allows for the explicit modeling of PCE and byproduct biodegradation as a function of electron donor and byproduct concentrations, and the microbiological ecology of the system. The approach is general and can be easily modified for ready use with in situ ground-water models or ex situ reactor models. Simulations conducted with models developed from this approach show the sensitivity of PCE biodegradation to input parameter values, in particular initial biomass concentrations. Additionally, the dechlorination rate will be strongly influenced by the microbial ecology of the system. Finally, comparison with experimental acclimation results indicates that existing kinetic constants may not be generally applicable. Better techniques for measuring the biomass of specific organism groups in mixed systems are required.
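
    A minimal sketch of the kind of donor-limited kinetics described here, using dual Monod terms and forward-Euler integration. All rate constants are hypothetical placeholders, and competition from other organism groups and byproduct chains (TCE, DCE, VC) are omitted.

```python
def simulate_pce(pce0=1.0, donor0=5.0, x0=0.01, t_end=200.0, dt=0.05,
                 q_max=2.0, k_p=0.1, k_d=0.5, y=0.1, e=2.0):
    """Dechlorination rate depends on both PCE (electron acceptor) and
    the added electron donor through Monod terms, scaled by the
    dechlorinator biomass X."""
    pce, donor, x = pce0, donor0, x0
    for _ in range(int(t_end / dt)):
        r = q_max * x * (pce / (k_p + pce)) * (donor / (k_d + donor))
        pce = max(pce - r * dt, 0.0)          # PCE consumed by dechlorination
        donor = max(donor - e * r * dt, 0.0)  # donor consumed alongside
        x += y * r * dt                       # dechlorinator biomass growth
    return pce, donor, x
```

    Because the rate is proportional to biomass, runs started from small initial biomass show a long acclimation period before PCE removal accelerates, which is the sensitivity to initial biomass concentration noted in the abstract.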

  1. Executive functioning as a mediator of conduct problems prevention in children of homeless families residing in temporary supportive housing: a parallel process latent growth modeling approach.

    PubMed

    Piehler, Timothy F; Bloomquist, Michael L; August, Gerald J; Gewirtz, Abigail H; Lee, Susanne S; Lee, Wendy S C

    2014-01-01

    A culturally diverse sample of formerly homeless youth (ages 6-12) and their families (n = 223) participated in a cluster randomized controlled trial of the Early Risers conduct problems prevention program in a supportive housing setting. Parents provided 4 annual behaviorally-based ratings of executive functioning (EF) and conduct problems, including at baseline, over 2 years of intervention programming, and at a 1-year follow-up assessment. Using intent-to-treat analyses, a multilevel latent growth model revealed that the intervention group demonstrated reduced growth in conduct problems over the 4 assessment points. In order to examine mediation, a multilevel parallel process latent growth model was used to simultaneously model growth in EF and growth in conduct problems along with intervention status as a covariate. A significant mediational process emerged, with participation in the intervention promoting growth in EF, which predicted negative growth in conduct problems. The model was consistent with changes in EF fully mediating intervention-related changes in youth conduct problems over the course of the study. These findings highlight the critical role that EF plays in behavioral change and lend further support to its importance as a target in preventive interventions with populations at risk for conduct problems.

  2. Distribution function approach to redshift space distortions

    SciTech Connect

    Seljak, Uroš; McDonald, Patrick

    2011-11-01

    We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter.

  3. Interaction Models for Functional Regression

    PubMed Central

    USSET, JOSEPH; STAICU, ANA-MARIA; MAITY, ARNAB

    2015-01-01

    A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data. PMID:26744549

  4. A new approach to modelling substrate inhibition.

    PubMed

    Meriç, S; Tünay, O; Ali, S H

    2002-02-01

    Substrate inhibition, which is one of the most frequently observed phenomena in the biological treatment of industrial wastewaters, has been the subject of numerous studies. Yet there are still cases which cannot be adequately described by the existing models. In this paper, a review of substrate inhibition approaches was made. A new model is proposed that assumes a common mechanism for substrate and product inhibition. The model is a continuous function having a maximum growth rate at the critical substrate concentration, beyond which the growth rate decreases as the substrate concentration is increased. The model also predicts the maximum substrate concentration where the growth ceases. The model was tested using existing data in the literature to assess the model response and predictability of critical points. The literature data have been selected from the studies conducted on pure and mixed cultures in batch and continuous reactors for phenol and several phenolics as well as from the studies which employed the Haldane model. A curve fitting method was used to determine the model parameters. The fit of the model to the data was satisfactory, particularly for the substrate concentrations exceeding maximum growth rate.
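
    For reference, the classical Haldane form that many of the cited studies employ (and against which the new model is compared) can be sketched as follows; the parameter values are hypothetical.

```python
import math

def haldane_mu(s, mu_star=0.5, ks=20.0, ki=200.0):
    """Haldane-type substrate-inhibition growth rate: increases with S
    at low concentrations, then decreases beyond the critical
    concentration. Parameter values are illustrative only."""
    return mu_star * s / (ks + s + s * s / ki)

# Setting d(mu)/dS = 0 gives the critical substrate concentration
# at which growth rate is maximal:
s_crit = math.sqrt(20.0 * 200.0)  # sqrt(Ks * Ki)
```

    The single maximum at sqrt(Ks·Ki) is the "critical substrate concentration" referred to above; unlike the proposed model, the Haldane form never predicts a finite concentration at which growth ceases entirely.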

  5. The NJL Model for Quark Fragmentation Functions

    SciTech Connect

    T. Ito, W. Bentz, I. Cloet, A W Thomas, K. Yazaki

    2009-10-01

    A description of fragmentation functions which satisfy the momentum and isospin sum rules is presented in an effective quark theory. Concentrating on the pion fragmentation function, we first explain the reason why the elementary (lowest order) fragmentation process q → qπ is completely inadequate to describe the empirical data, although the “crossed” process π → qq describes the quark distribution functions in the pion reasonably well. Then, taking into account cascade-like processes in a modified jet-model approach, we show that the momentum and isospin sum rules can be satisfied naturally without introducing any ad-hoc parameters. We present numerical results for the Nambu-Jona-Lasinio model in the invariant mass regularization scheme, and compare the results with the empirical parametrizations. We argue that this NJL-jet model provides a very useful framework to calculate the fragmentation functions in an effective chiral quark theory.

  6. Calculus of Functions and Their Inverses: A Unified Approach

    ERIC Educational Resources Information Center

    Krishnan, Srilal N.

    2006-01-01

    In this pedagogical article, I explore a unified approach in obtaining the derivatives of functions and their inverses by adopting a guided self-discovery approach. I begin by finding the derivative of the exponential functions and the derivative of their inverses, the logarithmic functions. I extend this approach to generate formulae for the…
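
    The identity at the heart of this unified approach follows from differentiating f(f⁻¹(x)) = x with the chain rule:

```latex
\frac{d}{dx}\,f\bigl(f^{-1}(x)\bigr)
  = f'\bigl(f^{-1}(x)\bigr)\,\bigl(f^{-1}\bigr)'(x) = 1
\quad\Longrightarrow\quad
\bigl(f^{-1}\bigr)'(x) = \frac{1}{f'\bigl(f^{-1}(x)\bigr)}.
```

    With f(x) = eˣ this immediately gives (ln x)' = 1/e^{ln x} = 1/x, recovering the exponential/logarithm pairing described in the abstract.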

  7. Average Magnetic Field Magnitude Profiles of Wind Magnetic Clouds as a Function of Closest Approach to the Clouds' Axes and Comparison to Model

    NASA Astrophysics Data System (ADS)

    Lepping, R. P.; Berdichevsky, D. B.; Wu, C.-C.

    2017-02-01

    We examine the average magnetic field magnitude (|B| ≡ B) within magnetic clouds (MCs) observed by the Wind spacecraft from 1995 to July 2015 to understand the difference between this B and the ideal B-profiles expected from using the static, constant-α, force-free, cylindrically symmetric model for MCs of Lepping, Jones, and Burlaga (J. Geophys. Res. 95, 11957, 1990; denoted here as the LJB model). We classify all MCs according to an assigned quality, Q0 (= 1, 2, 3, for excellent, good, and poor). There are a total of 209 MCs and 124 when only Q0 = 1, 2 cases are considered. The average normalized field with respect to the closest approach (CA) is stressed, where we separate cases into four CA sets centered at 12.5 %, 37.5 %, 62.5 %, and 87.5 % of the average radius; the averaging is done on a percentage-duration basis to treat all cases the same. Normalized B means that before averaging, the B for each MC at each point is divided by the LJB model-estimated B for the MC axis, B0. The actual averages for the 209 and 124 MC sets are compared to the LJB model, after an adjustment for MC expansion (e.g. Lepping et al. in Ann. Geophys. 26, 1919, 2008). This provides four separate difference-relationships, each fitted with a quadratic (Quad) curve of very small σ. Interpreting these Quad formulae should provide a comprehensive view of the variation in normalized B throughout the average MC, where we expect external front and rear compression to be part of its explanation. These formulae are also being considered for modifying the LJB model. This modification will be used in a scheme for forecasting the timing and magnitude of magnetic storms caused by MCs. Extensive testing of the Quad formulae shows that the formulae are quite useful in correcting individual MC B-profiles, especially for the first ≈ 1/3 of these MCs. However, the use of this type of B correction constitutes a (slight) violation of the force-free assumption used in the original LJB MC model.

  8. Modeling the Schwarzschild Green's function

    NASA Astrophysics Data System (ADS)

    Mark, Zachary; Zimmerman, Aaron; Chen, Yanbei

    2017-01-01

    At sufficiently late times, gravitational waveforms from extreme mass ratio inspirals consist of a sum of quasinormal modes, power-law tails, and modes related to the matter source, such as the horizon mode (Zimmerman and Chen 2011). Due to the complexity of the exact curved-spacetime Green's function, making precise predictions about each component is difficult. We discuss the validity of a simple model for the scalar Schwarzschild Green's function. For observers at future null infinity, we model the Green's function as a simple function describing the direct radiation that matches to a single quasinormal mode at a retarded time related to the light-ring location. As applications of the model, we describe the excitation process of the single quasinormal mode and the horizon mode, showing that the waveform from the inspiralling object is in precise correspondence to the response of a driven, damped harmonic oscillator.
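
    A toy illustration of this kind of Green's-function model: a Gaussian stands in for the direct radiation, and a single damped sinusoid switches on at a light-ring retarded time. The frequency and damping time are merely reminiscent of a Schwarzschild quasinormal mode; none of the numbers come from the paper:

```python
import numpy as np

def toy_green(t, t_lr=10.0, omega=0.37, tau=11.2):
    """Toy waveform: direct burst plus one quasinormal mode after t_lr."""
    direct = np.exp(-((t - 2.0) / 0.5) ** 2)     # direct-radiation burst
    qnm = np.where(
        t >= t_lr,                               # mode "turns on" at the
        np.exp(-(t - t_lr) / tau)                # light-ring retarded time
        * np.cos(omega * (t - t_lr)),
        0.0,
    )
    return direct + qnm

t = np.linspace(0, 100, 2001)
g = toy_green(t)
```

    The late-time portion of `g` is then a single exponentially damped oscillation, which is what makes the driven-damped-oscillator correspondence plausible.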

  9. Porocytosis: a new approach to synaptic function.

    PubMed

    Kriebel, M E; Keller, B; Silver, R B; Fox, G Q; Pappas, G D

    2001-12-01

    We propose a new approach to address the question of how a single quantum of neurotransmitter is secreted from a presynaptic terminal whose clustered secretory vesicles are locally bathed in high levels of calcium ions [Proceedings of the Symposium on Bioelectrogenesis (1961) 297-309; The Physiology of Synapses (1964) Chapters 1, 4, 5, 6; How the Self Controls its Brain (1994) Chapters 1, 4, 5, 6; Science 256 (1992) 677-679]. This hypothesis, which we term 'porocytosis', posits that the post-synaptic quantal response results from transmitter secreted through an array of docked vesicle/secretory pore complexes. The transient increase in calcium ions, which results from the voltage-activated calcium channels, stimulates the array of secretory pores to simultaneously flicker open to pulse transmitter. Porocytosis is consistent with the quantal nature of presynaptic secretion and transmission, and with available biochemical, morphological and physiological evidence. It explains the frequency dependency of quantal size as a function of the secretion process. It permits a signature amount of transmitter release for different frequencies, allowing a given synapse to be employed in different behavioral responses. The porocytosis hypothesis permits fidelity of secretion and the seemingly opposed characteristic of synaptic plasticity. The dynamics inherent in an array ensure a constant quantal size as a function of the number of units within the array. In this hypothesis, plasticity is a consequence of concurrent pre- and post-synaptic changes due to a change in array size. Changes in the number of docked vesicle-secretory pore complexes composing the array can explain facilitation, depletion, graded excitation-secretion and long-term plasticity.

  10. The Linearized Kinetic Equation -- A Functional Analytic Approach

    NASA Astrophysics Data System (ADS)

    Brinkmann, Ralf Peter

    2009-10-01

    Kinetic models of plasma phenomena are difficult to address for two reasons. They i) are given as systems of nonlinear coupled integro-differential equations, and ii) generally involve six-dimensional distribution functions f(r,v,t). In situations which can be addressed in a linear regime, the first difficulty disappears, but the second one still poses considerable practical problems. This contribution presents an abstract approach to linearized kinetic theory which employs the methods of functional analysis. A kinetic electron equation with elastic electron-neutral interaction is studied in the electrostatic approximation. Under certain boundary conditions, a nonlinear functional, the kinetic free energy, exists which has the properties of a Lyapunov functional. In the linear regime, the functional becomes a quadratic form which motivates the definition of a bilinear scalar product, turning the space of all distribution functions into a Hilbert space. The linearized kinetic equation can then be described in terms of dynamical operators with well-defined properties. Abstract solutions can be constructed which have mathematically plausible properties. As an example, the formalism is applied to the multipole resonance probe (MRP). Under the assumption of a Maxwellian background distribution, the kinetic model of that diagnostic device is compared to a previously investigated fluid model.

  11. Identification of novel histone deacetylase 1 inhibitors by combined pharmacophore modeling, 3D-QSAR analysis, in silico screening and Density Functional Theory (DFT) approaches

    NASA Astrophysics Data System (ADS)

    Choubey, Sanjay K.; Mariadasse, Richard; Rajendran, Santhosh; Jeyaraman, Jeyakanthan

    2016-12-01

    Overexpression of HDAC1, a member of the Class I histone deacetylases, is reported to be implicated in breast cancer. Epigenetic alteration in carcinogenesis has been the thrust of research for a few decades. Increased deacetylation leads to accelerated cell proliferation, cell migration, angiogenesis and invasion. HDAC1 is regarded as a potential drug target for the treatment of breast cancer. In this study, the biochemical potential of 6-aminonicotinamide derivatives was rationalized. A five-point pharmacophore model with one hydrogen-bond acceptor (A3), two hydrogen-bond donors (D5, D6), one ring (R12) and one hydrophobic group (H8) was developed using 6-aminonicotinamide derivatives. The pharmacophore hypothesis yielded a 3D-QSAR model with good correlation coefficients (r2 = 0.977, q2 = 0.801), and external validation (r2pred = 0.929, r2cv = 0.850 and r2m = 0.856) confirms the statistical significance and high predictive power of the model. The model was then employed as a 3D search query for virtual screening against compound libraries (Zinc, Maybridge, Enamine, Asinex, Toslab, LifeChem and Specs) in order to identify novel scaffolds which can be experimentally validated to design future drug molecules. Density Functional Theory (DFT) at the B3LYP/6-31G* level was employed to explore the electronic features of the ligands involved in charge-transfer reactions during receptor-ligand interaction. Binding free energy (ΔGbind) calculation was done using MM/GBSA, which quantifies the affinity of the ligands towards the receptor.

  12. Two-particle correlation function and dihadron correlation approach

    SciTech Connect

    Vechernin, V. V.; Ivanov, K. O.; Neverov, D. I.

    2016-09-15

    It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C{sub 2}. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton–nucleus and deuteron–nucleus collisions and by means of Monte Carlo simulations based on employing the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C{sub 2} for all cases, including the case where the rapidity distribution of product particles is not uniform.

  13. Two-particle correlation function and dihadron correlation approach

    NASA Astrophysics Data System (ADS)

    Vechernin, V. V.; Ivanov, K. O.; Neverov, D. I.

    2016-09-01

    It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C 2. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton-nucleus and deuteron-nucleus collisions and by means of Monte Carlo simulations based on employing the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C 2 for all cases, including the case where the rapidity distribution of product particles is not uniform.
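
    One common convention for the window-integrated pair correlation, C2 = ⟨nF nB⟩ / (⟨nF⟩⟨nB⟩), can be estimated directly from multiplicities in two narrow rapidity windows, as the abstract describes. The toy event sample below, with a shared Poisson "source number" loosely mimicking string-model long-range correlations, is an illustrative assumption and not the HIJING setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event sample: multiplicities in two narrow rapidity windows (F, B).
# A fluctuating number of shared "sources" induces positive correlations.
n_events = 200_000
sources = rng.poisson(5.0, n_events)
n_f = rng.poisson(0.4 * sources)   # forward-window multiplicity per event
n_b = rng.poisson(0.4 * sources)   # backward-window multiplicity per event

# Window-integrated pair correlation (one common normalization):
# C2 = <nF nB> / (<nF> <nB>); C2 > 1 signals positive correlations.
c2 = (n_f * n_b).mean() / (n_f.mean() * n_b.mean())
```

    For this construction C2 ≈ 1.2, reflecting the common-source fluctuations; uncorrelated windows would give C2 ≈ 1.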

  14. Co-activation Probability Estimation (CoPE): An approach for modeling functional co-activation architecture based on neuroimaging coordinates

    PubMed Central

    Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R.; Liu, Yong; Yang, Yong; Eickhoff, Simon B.; Jiang, Tianzi

    2016-01-01

    Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a ‘core’ co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies. PMID:26037052
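
    The permutation test adopted as a significance test can be sketched on hypothetical binary activation data. The matrix, the induced co-activation, and the voxel pair below are illustrative assumptions, not the CoPE implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary activation matrix: experiments x 2 voxels (True = activated).
acts = rng.random((200, 2)) < [0.3, 0.3]
# Induce within-experiment co-activation in half of the experiments.
acts[:100, 1] = acts[:100, 0]

# Observed co-activation count for the voxel pair.
observed = np.sum(acts[:, 0] & acts[:, 1])

# Permutation null: shuffle one voxel's activations across experiments,
# breaking the within-experiment pairing while keeping the marginals.
null = np.array([
    np.sum(acts[:, 0] & rng.permutation(acts[:, 1]))
    for _ in range(2000)
])
p_value = (np.sum(null >= observed) + 1) / (null.size + 1)
```

    A small `p_value` flags the pair as significantly co-activated; in CoPE the surviving pairs would then be split into local and long-range co-activations by distance.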

  15. General Green's function formalism for layered systems: Wave function approach

    NASA Astrophysics Data System (ADS)

    Zhang, Shu-Hui; Yang, Wen; Chang, Kai

    2017-02-01

    The single-particle Green's function (GF) of mesoscopic structures plays a central role in mesoscopic quantum transport. The recursive GF technique is a standard tool to compute this quantity numerically, but it lacks physical transparency and is limited to relatively small systems. Here we present a numerically efficient and physically transparent GF formalism for a general layered structure. In contrast to the recursive GF that directly calculates the GF through the Dyson equations, our approach converts the calculation of the GF to the generation and subsequent propagation of a scattering wave function emanating from a local excitation. This viewpoint not only allows us to reproduce existing results in a concise and physically intuitive manner, but also provides analytical expressions of the GF in terms of a generalized scattering matrix. This identifies the contributions from each individual scattering channel to the GF and hence allows this information to be extracted quantitatively from dual-probe STM experiments. The simplicity and physical transparency of the formalism further allows us to treat multiple reflections analytically and derive an analytical rule to construct the GF of a general layered system. This could significantly reduce the computational time and enable quantum transport calculations for large samples. We apply this formalism to perform both analytical analysis and numerical simulation for the two-dimensional conductance map of a realistic graphene p-n junction. The results demonstrate the possibility of observing the spatially resolved interference pattern caused by negative refraction and further reveal a few interesting features, such as the distance-independent conductance and its quadratic dependence on the carrier concentration, as opposed to the linear dependence in uniform graphene.

  16. Genetic and genomic approaches to understanding macrophage identity and function.

    PubMed

    Glass, Christopher K

    2015-04-01

    A major goal of our laboratory is to understand the molecular mechanisms that underlie the development and functions of diverse macrophage phenotypes in health and disease. Recent studies using genetic and genomic approaches suggest a relatively simple model of collaborative and hierarchical interactions between lineage-determining and signal-dependent transcription factors that enable selection and activation of transcriptional enhancers that specify macrophage identity and function. In addition, we have found that it is possible to use natural genetic variation as a powerful tool for advancing our understanding of how the macrophage deciphers the information encoded by the genome to attain specific phenotypes in a context-dependent manner. Here, I will describe our recent efforts to extend genetic and genomic approaches to investigate the roles of distinct tissue environments in determining the phenotypes of different resident populations of macrophages.

  17. A factor analysis model for functional genomics

    PubMed Central

    Kustra, Rafal; Shioda, Romy; Zhu, Mu

    2006-01-01

    Background Expression array data are used to predict biological functions of uncharacterized genes by comparing their expression profiles to those of characterized genes. While biologically plausible, this is both statistically and computationally challenging. Typical approaches are computationally expensive and ignore correlations among expression profiles and functional categories. Results We propose a factor analysis model (FAM) for functional genomics and give a two-step algorithm, using genome-wide expression data for yeast and a subset of Gene-Ontology Biological Process functional annotations. We show that the predictive performance of our method is comparable to the current best approach while our total computation time was shorter by a factor of 4000. We discuss the unique challenges in performance evaluation of algorithms used for genome-wide functional genomics. Finally, we discuss extensions to our method that can incorporate the inherent correlation structure of the functional categories to further improve predictive performance. Conclusion Our factor analysis model is a computationally efficient technique for functional genomics and provides a clear and unified statistical framework with potential for incorporating important gene ontology information to improve predictions. PMID:16630343

  18. MFS transporters required for multidrug/multixenobiotic (MD/MX) resistance in the model yeast: understanding their physiological function through post-genomic approaches

    PubMed Central

    dos Santos, Sandra C.; Teixeira, Miguel C.; Dias, Paulo J.; Sá-Correia, Isabel

    2014-01-01

    Multidrug/Multixenobiotic resistance (MDR/MXR) is a widespread phenomenon with clinical, agricultural and biotechnological implications, where MDR/MXR transporters that are presumably able to catalyze the efflux of multiple cytotoxic compounds play a key role in the acquisition of resistance. However, although these proteins have been traditionally considered drug exporters, the physiological function of MDR/MXR transporters and the exact mechanism of their involvement in resistance to cytotoxic compounds are still open to debate. In fact, the wide range of structurally and functionally unrelated substrates that these transporters are presumably able to export has puzzled researchers for years. The discussion has now shifted toward the possibility of at least some MDR/MXR transporters exerting their effect as the result of a natural physiological role in the cell, rather than through the direct export of cytotoxic compounds, while the hypothesis that MDR/MXR transporters may have evolved in nature for other purposes than conferring chemoprotection has been gaining momentum in recent years. This review focuses on the drug transporters of the Major Facilitator Superfamily (MFS; drug:H+ antiporters) in the model yeast Saccharomyces cerevisiae. New insights into the natural roles of these transporters are described and discussed, focusing on the knowledge obtained or suggested by post-genomic research. The new information reviewed here provides clues into the unexpectedly complex roles of these transporters, including a proposed indirect regulation of the stress response machinery and control of membrane potential and/or internal pH, with a special emphasis on a genome-wide view of the regulation and evolution of MDR/MXR-MFS transporters. PMID:24847282

  19. Simulation of sprays using a Lagrangian filtered density function approach

    NASA Astrophysics Data System (ADS)

    Liu, Wanjiao; Garrick, Sean

    2013-11-01

    Sprays and atomization have wide applications in industry, including combustion/engines, pharmaceutics and agricultural spraying. Due to the complexity of the processes involved, many of the underlying phenomena are not fully understood. Numerical simulation may provide ways to investigate atomization and spray dynamics. Large eddy simulation (LES) is a practical approach to flow simulation as it resolves only the large-scale structures while modeling the sub-grid scale (SGS) effects. We combine a filtered density function (FDF) based approach with a Lagrangian volume-of-fluid method to perform LES. The resulting methodology is advantageous in that it has no diffusive or dissipative numerical errors, and the highly non-linear surface tension force appears in closed form, so modeling of the SGS surface tension is not needed when simulating turbulent, multiphase flows. We present the methodology and some results for the simulation of multiphase jets.

  20. Validation of Modeling Flow Approaching Navigation Locks

    DTIC Science & Technology

    2013-08-01

    List of figures (extract): Figure 9. Tools and instrumentation, bracket attached to rail. Figure 10. Tools and instrumentation, direction vernier. Figure 11. Plan A lock approach, upstream approach. Numerical model

  1. A mechanistic approach to explore novel HDAC1 inhibitor using pharmacophore modeling, 3D- QSAR analysis, molecular docking, density functional and molecular dynamics simulation study.

    PubMed

    Choubey, Sanjay K; Jeyaraman, Jeyakanthan

    2016-11-01

    Deregulated epigenetic activity of histone deacetylase 1 (HDAC1) in tumor development and carcinogenesis marks it as a promising therapeutic target for cancer treatment. HDAC1 has recently captured the attention of researchers owing to its decisive role in multiple types of cancer. In the present study, a multistep framework combining ligand-based 3D-QSAR, molecular docking and Molecular Dynamics (MD) simulation studies was applied to explore potential compounds with good HDAC1 binding affinity. Four different pharmacophore hypotheses, Hypo1 (AADR), Hypo2 (AAAH), Hypo3 (AAAR) and Hypo4 (ADDR), were obtained. The hypothesis Hypo1 (AADR), with two hydrogen-bond acceptors (A), one hydrogen-bond donor (D) and one aromatic ring (R), was selected to build the 3D-QSAR model on the basis of statistical parameters. The pharmacophore hypothesis produced a statistically significant QSAR model, with coefficient of correlation r(2)=0.82 and cross-validation correlation coefficient q(2)=0.70. External validation results display high predictive power, with an r(2)(o) value of 0.88 and an r(2)(m) value of 0.58, justifying further in silico studies. Virtual screening results show ZINC70450932 as the most promising lead, where HDAC1 interacts with residues Asp99, His178, Tyr204, Phe205 and Leu271, forming seven hydrogen bonds. A high docking score (-11.17 kcal/mol) and low docking energy (-37.84 kcal/mol) display the binding efficiency of the ligand. Binding free energy calculation was done using MM/GBSA to assess the affinity of the ligands towards the protein. Density Functional Theory was employed to explore electronic features of the ligands describing the intramolecular charge-transfer reaction. Molecular dynamics simulation studies at 50 ns display a metal ion (Zn)-ligand interaction which is vital to inhibiting the enzymatic activity of the protein.

  2. Restricted primitive model for electrolyte solutions in slit-like pores with grafted chains: microscopic structure, thermodynamics of adsorption, and electric properties from a density functional approach.

    PubMed

    Pizio, Orest; Sokołowski, Stefan

    2013-05-28

    We apply a density functional theory to describe properties of a restricted primitive model of an ionic fluid in slit-like pores. The pore walls are modified by grafted chains. The chains are built of uncharged or charged segments. We study the influence of modification of the pore walls on the structure, adsorption, ion selectivity, and the electric double layer capacitance of ionic fluid under confinement. The brush built of uncharged segments acts as a collection of obstacles in the walls' vicinity. Consequently, separation of charges requires higher voltages, in comparison to the models without brushes. At high grafting densities the formation of crowding-type structure is inhibited. The double layer structure becomes more complex in various aspects, if the brushes are built of charged segments. In particular, the evolution of the brush height with the bulk fluid density and with the charge on the walls depends on the length of the blocks of charged spheres as well as on the distribution of charged species along chains. We also investigated how the dependence of the double layer capacitance on the electrostatic potential (or on the charge on the walls) changes with grafting density, the chain length, distribution of charges along the chain, the bulk fluid density, and, finally, with the pore width. The electric double layer capacitance vs. voltage curve changes from a camel-like to a bell-like shape if the bulk fluid density changes from low to moderate and high. If the bulk density is appropriately chosen, it is possible to alter the shape of this curve from double-humped to single-humped by changing the grafting density. Moreover, in narrow pores one can observe a capacitance curve with even three humps for a certain set of parameters describing the brush. This behavior illustrates how strong the influence of brushes on the electric double layer properties can be, particularly for ionic fluids in narrow pores.

  3. Food Protein Functionality--A New Model.

    PubMed

    Foegeding, E Allen

    2015-12-01

    Proteins in foods serve dual roles as nutrients and structural building blocks. The concept of protein functionality has historically been restricted to nonnutritive functions--such as creating emulsions, foams, and gels--but this places sole emphasis on food quality considerations and potentially overlooks modifications that may also alter nutritional quality or allergenicity. A new model is proposed that addresses the function of proteins in foods based on the length scale(s) responsible for the function. Properties such as flavor binding, color, allergenicity, and digestibility are explained based on the structure of individual molecules; placing this functionality at the nano/molecular scale. At the next higher scale, applications in foods involving gelation, emulsification, and foam formation are based on how proteins form secondary structures that are seen at the nano and microlength scales, collectively called the mesoscale. The macroscale structure represents the arrangements of molecules and mesoscale structures in a food. Macroscale properties determine overall product appearance, stability, and texture. The historical approach of comparing among proteins based on forming and stabilizing specific mesoscale structures remains valid but emphasis should be on a common means for structure formation to allow for comparisons across investigations. For applications in food products, protein functionality should start with identification of functional needs across scales. Those needs are then evaluated relative to how processing and other ingredients could alter desired molecular scale properties, or proper formation of mesoscale structures. This allows for a comprehensive approach to achieving the desired function of proteins in foods.

  4. Simple model dielectric functions for insulators

    NASA Astrophysics Data System (ADS)

    Vos, Maarten; Grande, Pedro L.

    2017-05-01

    The Drude dielectric function is a simple way of describing the dielectric function of free electron materials, which have a uniform electron density, in a classical way. The Mermin dielectric function describes a free electron gas, but is based on quantum physics. More complex metals have varying electron densities and are often described by a sum of Drude dielectric functions, the weight of each function being taken proportional to the volume with the corresponding density. Here we describe a slight variation on the Drude dielectric function that describes insulators in a semi-classical way, and a form of the Levine-Louie dielectric function including a relaxation time that does the same within the framework of quantum physics. In the optical limit the semi-classical description of an insulator and the quantum physics description coincide, in the same way as the Drude and Mermin dielectric functions coincide in the optical limit for metals. There is a simple relation between the coefficients used in the classical and quantum approaches, a relation that ensures that the obtained dielectric function corresponds to the right static refractive index. For water we give a comparison of the model dielectric function at non-zero momentum with inelastic X-ray measurements, both at relatively small momenta and in the Compton limit. The Levine-Louie dielectric function including a relaxation time describes the spectra at small momentum quite well, but in the Compton limit there are significant deviations.
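
    For reference, the classical Drude form underlying these models fits in a few lines; the plasma frequency and damping rate below are illustrative values, not fitted to any specific material:

```python
import numpy as np

def drude_epsilon(omega, omega_p, gamma):
    """Classical Drude dielectric function for a free-electron metal:
    eps(w) = 1 - w_p^2 / (w^2 + i*gamma*w)."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)

# Illustrative parameters in consistent (e.g. eV) units.
omega = np.linspace(0.1, 20.0, 500)
eps = drude_epsilon(omega, omega_p=9.0, gamma=0.1)
```

    Below the plasma frequency the real part is negative (metallic reflectivity); well above it, eps approaches 1. The insulator variants discussed in the paper modify this form to reproduce a band gap and the correct static refractive index.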

  5. Mixture models for distance sampling detection functions.

    PubMed

    Miller, David L; Thomas, Len

    2015-01-01

    We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used "key function plus series adjustment" (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set.
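
    A minimal sketch of a half-normal mixture detection function, showing the automatic monotonicity and non-negativity noted above; the weights and scale parameters are arbitrary illustrative values, not fitted to any survey:

```python
import numpy as np

def half_normal(x, sigma):
    """Half-normal key function: g(x) = exp(-x^2 / (2 sigma^2))."""
    return np.exp(-x**2 / (2 * sigma**2))

def mixture_detection(x, weights, sigmas):
    """Finite mixture of half-normal key functions.

    Monotonic non-increasing and non-negative by construction,
    with g(0) = 1 once the weights are normalized to sum to one.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * half_normal(x, s) for w, s in zip(weights, sigmas))

# Two components: a "spiked" near-distance term plus a wide one,
# mimicking detectability that drops rapidly at small distances.
x = np.linspace(0, 100, 201)
g = mixture_detection(x, weights=[0.6, 0.4], sigmas=[5.0, 40.0])
```

    In a real analysis the weights and sigmas would be estimated by maximum likelihood; no constrained optimization is needed because every mixture of this form already honours the shape assumptions.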

  6. Introducing Linear Functions: An Alternative Statistical Approach

    ERIC Educational Resources Information Center

    Nolan, Caroline; Herbert, Sandra

    2015-01-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…

  7. Digital controller design for absolute value function constrained nonlinear systems via scalar sign function approach.

    PubMed

    Wu, Jian; Singla, Mithun; Olmi, Claudio; Shieh, Leang S; Song, Gangbing

    2010-07-01

    In this paper, a scalar sign function-based digital design methodology is developed for modeling and control of a class of analog nonlinear systems that are restricted by absolute value function constraints. As is not uncommon, many real systems are subject to constraints described by non-smooth functions such as the absolute value function. The non-smooth and nonlinear nature poses significant challenges to the modeling and control work. To overcome these difficulties, a novel idea proposed in this work is to use a scalar sign function approach to effectively transform the original nonlinear and non-smooth model into a smooth nonlinear rational function model. Upon the resulting smooth model, a systematic digital controller design procedure is established, in which an optimal linearization method, LQR design and digital implementation through an advanced digital redesign technique are sequentially applied. The example of tracking control of a piezoelectric actuator system is utilized throughout the paper to illustrate the proposed methodology. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
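
    The identity |x| = x·sign(x) behind this transformation can be illustrated with a smooth surrogate sign function. The square-root form below is a simple stand-in chosen for illustration; it is not necessarily the rational model used in the paper:

```python
import numpy as np

def smooth_sign(x, eps=1e-3):
    """Smooth surrogate for sign(x); exact in the limit eps -> 0."""
    return x / np.sqrt(x**2 + eps)

def smooth_abs(x, eps=1e-3):
    """|x| = x * sign(x), smoothed via the surrogate sign function."""
    return x * smooth_sign(x, eps)

# The smoothed model tracks |x| everywhere, with the largest error
# confined to a small neighbourhood of the non-smooth point x = 0.
x = np.linspace(-2, 2, 401)
err = np.max(np.abs(smooth_abs(x) - np.abs(x)))
```

    Replacing each absolute-value term this way yields a differentiable model to which standard linearization and LQR design can be applied, which is the spirit of the paper's approach.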

  8. An approach to metering and network modeling

    SciTech Connect

    Adibi, M.M.; Clements, K.A.; Kafka, R.J.; Stovall, J.P.

    1992-06-01

    Estimation of the static state of an electric power network has become a standard function in real-time monitoring and control. Its purpose is to use the network model and process the metering data in order to determine an accurate and reliable estimate of the system state in the real-time environment. In the models usually used, it is assumed that the network parameters and topology are free of errors and that the measurement system provides unbiased data having a known distribution. The network and metering models, however, contain errors which frequently result in either non-convergent behavior of the state estimator or exceedingly large residuals, reducing the level of confidence in the results. This paper describes an approach for minimizing the above uncertainties by analyzing the data which are routinely collected at the power system control center. The approach will improve the reliability of the real-time database while reducing the state estimator installation and maintenance effort. 5 refs.

  9. An approach to metering and network modeling

    SciTech Connect

    Adibi, M.M.; Clements, K.A.; Kafka, R.J.; Stovall, J.P.

    1992-01-01

    Estimation of the static state of an electric power network has become a standard function in real-time monitoring and control. Its purpose is to use the network model and process the metering data in order to determine an accurate and reliable estimate of the system state in the real-time environment. In the models usually used, it is assumed that the network parameters and topology are free of errors and that the measurement system provides unbiased data having a known distribution. The network and metering models, however, contain errors which frequently result in either non-convergent behavior of the state estimator or exceedingly large residuals, reducing the level of confidence in the results. This paper describes an approach for minimizing the above uncertainties by analyzing the data which are routinely collected at the power system control center. The approach will improve the reliability of the real-time database while reducing the state estimator installation and maintenance effort. 5 refs.

  10. Models in palaeontological functional analysis

    PubMed Central

    Anderson, Philip S. L.; Bright, Jen A.; Gill, Pamela G.; Palmer, Colin; Rayfield, Emily J.

    2012-01-01

    Models are a principal tool of modern science. By definition, and in practice, models are not literal representations of reality but provide simplifications or substitutes of the events, scenarios or behaviours that are being studied or predicted. All models make assumptions, and palaeontological models in particular require additional assumptions to study unobservable events in deep time. In the case of functional analysis, the degree of missing data associated with reconstructing musculoskeletal anatomy and neuronal control in extinct organisms has, in the eyes of some scientists, rendered detailed functional analysis of fossils intractable. Such a prognosis may indeed be realized if palaeontologists attempt to recreate elaborate biomechanical models based on missing data and loosely justified assumptions. Yet multiple enabling methodologies and techniques now exist: tools for bracketing boundaries of reality; more rigorous consideration of soft tissues and missing data and methods drawing on physical principles that all organisms must adhere to. As with many aspects of science, the utility of such biomechanical models depends on the questions they seek to address, and the accuracy and validity of the models themselves. PMID:21865242

  11. Models in palaeontological functional analysis.

    PubMed

    Anderson, Philip S L; Bright, Jen A; Gill, Pamela G; Palmer, Colin; Rayfield, Emily J

    2012-02-23

    Models are a principal tool of modern science. By definition, and in practice, models are not literal representations of reality but provide simplifications or substitutes of the events, scenarios or behaviours that are being studied or predicted. All models make assumptions, and palaeontological models in particular require additional assumptions to study unobservable events in deep time. In the case of functional analysis, the degree of missing data associated with reconstructing musculoskeletal anatomy and neuronal control in extinct organisms has, in the eyes of some scientists, rendered detailed functional analysis of fossils intractable. Such a prognosis may indeed be realized if palaeontologists attempt to recreate elaborate biomechanical models based on missing data and loosely justified assumptions. Yet multiple enabling methodologies and techniques now exist: tools for bracketing boundaries of reality; more rigorous consideration of soft tissues and missing data and methods drawing on physical principles that all organisms must adhere to. As with many aspects of science, the utility of such biomechanical models depends on the questions they seek to address, and the accuracy and validity of the models themselves.

  12. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

    The main challenge in groundwater problems is the reliance on large numbers of unknown parameters with a wide range of associated uncertainties. To translate this uncertainty to quantities of interest (for instance the concentration of pollutant in a drinking well), a large number of forward flow simulations is required. To make the problem computationally tractable, Josset et al. (2013, 2014) introduced the concept of functional error models. It consists of two elements: a proxy model that is cheaper to evaluate than the full-physics flow solver, and an error model to account for the missing physics. The coupling of the proxy model and the error model provides reliable predictions that approximate the full-physics model's responses. The error model is tailored to the problem at hand by building it for the question of interest. It follows a typical approach in machine learning where both the full-physics and proxy models are evaluated for a training set (a subset of realizations) and the set of responses is used to construct the error model using functional data analysis. Once the error model is devised, a prediction of the full-physics response for a new geostatistical realization can be obtained by computing the proxy response and applying the error model. We propose the use of functional error models in a Bayesian inference context by combining them with Nested Sampling (Skilling 2006; El Sheikh et al. 2013, 2014). Nested Sampling offers a means to compute the Bayesian evidence by transforming the multidimensional integral into a 1D integral. The algorithm is simple: starting with an active set of samples, at each iteration, the sample with the lowest likelihood is set aside and replaced by a sample of higher likelihood. The main challenge is to find this sample of higher likelihood. We suggest a new approach: first the active set is sampled, both proxy and full-physics models are run, and the functional error model is built. Then, at each iteration of the Nested
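The nested sampling loop summarized above can be sketched in a few lines. Everything here is an illustrative stand-in: the Gaussian likelihood plays the role of the expensive flow solver (in the authors' setting, the proxy model plus functional error model would supply it), the prior is a uniform interval, and replacement is done by naive rejection.

```python
import math
import random

def log_likelihood(x):
    # Toy Gaussian log-likelihood; a stand-in for the full-physics solver.
    return -0.5 * x * x

def nested_sampling(n_live=100, n_iter=600, seed=1):
    rng = random.Random(seed)
    live = [rng.uniform(-10.0, 10.0) for _ in range(n_live)]  # uniform prior draws
    log_z = float("-inf")
    for i in range(n_iter):
        # Set aside the live point with the lowest likelihood ...
        worst = min(live, key=log_likelihood)
        log_l = log_likelihood(worst)
        # ... weight it by the prior shell it represents, X_i ~ exp(-i/n) ...
        log_w = math.log(math.exp(-i / n_live) - math.exp(-(i + 1) / n_live))
        a, b = max(log_z, log_l + log_w), min(log_z, log_l + log_w)
        log_z = a + math.log1p(math.exp(b - a))  # log-sum-exp accumulation of Z
        # ... and replace it with a point of strictly higher likelihood
        # (naive rejection sampling; this is the expensive step the
        # proxy-plus-error-model construction is meant to accelerate).
        while True:
            cand = rng.uniform(-10.0, 10.0)
            if log_likelihood(cand) > log_l:
                live[live.index(worst)] = cand
                break
    return log_z  # log Bayesian evidence, up to the live-point remainder
```

For this toy problem the exact evidence is sqrt(2*pi)/20 (log Z about -2.08), which the running estimate approaches as the number of live points grows.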

  13. Evaluating face trustworthiness: a model based approach.

    PubMed

    Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N

    2008-06-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic, strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.

  14. Evaluating face trustworthiness: a model based approach

    PubMed Central

    Baron, Sean G.; Oosterhof, Nikolaas N.

    2008-01-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response—as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic—strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102

  15. Parametric modeling of quantile regression coefficient functions.

    PubMed

    Frumento, Paolo; Bottai, Matteo

    2016-03-01

    Estimating the conditional quantiles of outcome variables of interest is frequent in many research areas, and quantile regression is foremost among the utilized methods. The coefficients of a quantile regression model depend on the order of the quantile being estimated. For example, the coefficients for the median are generally different from those of the 10th centile. In this article, we describe an approach to modeling the regression coefficients as parametric functions of the order of the quantile. This approach may have advantages in terms of parsimony and efficiency, and may expand the potential of statistical modeling. Goodness-of-fit measures and testing procedures are discussed, and the results of a simulation study are presented. We apply the method to analyze the data that motivated this work. The described method is implemented in the qrcm R package.
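The idea can be conveyed with a small sketch (not the qrcm implementation itself): intercept and slope are each modeled as linear functions of the quantile order tau, and the four underlying parameters are estimated jointly by minimizing the pinball loss summed over a grid of quantiles. The data, the linear-in-tau basis, and all parameter values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)
# Heteroscedastic data: both location and scale depend on x, so the true
# slope coefficient genuinely varies with the quantile order.
y = 1.0 + 2.0 * x + (0.5 + x) * rng.normal(size=n)

taus = np.linspace(0.05, 0.95, 19)

def pinball(u, tau):
    # Standard quantile-regression check function.
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def loss(theta):
    # Coefficients as linear functions of tau (an illustrative basis):
    # beta0(tau) = a0 + a1*tau, beta1(tau) = b0 + b1*tau.
    a0, a1, b0, b1 = theta
    total = 0.0
    for tau in taus:
        q = (a0 + a1 * tau) + (b0 + b1 * tau) * x  # modeled conditional quantile
        total += pinball(y - q, tau).mean()
    return total

res = minimize(loss, np.zeros(4), method="Nelder-Mead", options={"maxiter": 2000})
a0, a1, b0, b1 = res.x  # one parametric model covering every quantile at once
```

Because the scale of the noise grows with x, the fitted slope function beta1(tau) should increase with tau, and its value at the median should sit near the true central slope of 2.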

  16. Numerical approaches to combustion modeling

    SciTech Connect

    Oran, E.S.; Boris, J.P. )

    1991-01-01

    This book presents a series of topics ranging from microscopic combustion physics to several aspects of macroscopic reactive-flow modeling. As the reader progresses into the book, the successive chapters generally include a wider range of physical and chemical processes in the mathematical model. Including more processes, however, usually means that they will be represented phenomenologically at a cruder level. In practice the detailed microscopic models and simulations are often used to develop and calibrate the phenomenologies used in the macroscopic models. The book first describes computations of the most microscopic chemical processes, then considers laminar flames and detonation modeling, and ends with computations of complex, multiphase combustion systems.

  17. A Functional Analytic Approach to Group Psychotherapy

    ERIC Educational Resources Information Center

    Vandenberghe, Luc

    2009-01-01

    This article provides a particular view on the use of Functional Analytical Psychotherapy (FAP) in a group therapy format. This view is based on the author's experiences as a supervisor of Functional Analytical Psychotherapy Groups, including groups for women with depression and groups for chronic pain patients. The contexts in which this approach…

  18. Functional integral approach for multiplicative stochastic processes.

    PubMed

    Arenas, Zochil González; Barci, Daniel G

    2010-05-01

    We present a functional formalism to derive a generating functional for correlation functions of a multiplicative stochastic process represented by a Langevin equation. We deduce a path integral over a set of fermionic and bosonic variables without performing any time discretization. The usual prescriptions to define the Wiener integral appear in our formalism in the definition of Green's functions in the Grassmann sector of the theory. We also study nonperturbative constraints imposed by Becchi-Rouet-Stora (BRS) symmetry and supersymmetry on correlation functions. We show that the specific prescription to define the stochastic process is wholly contained in tadpole diagrams. Therefore, in a supersymmetric theory, the stochastic process is uniquely defined since tadpole contributions cancel at all orders of perturbation theory.

  19. A three-way approach for protein function classification

    PubMed Central

    2017-01-01

    The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. The technological advancements in the field of biology are improving our understanding of biological processes and are regularly resulting in new features and characteristics that better describe the role of proteins. These anticipated features should not be neglected or overlooked in designing more effective classification techniques. A key issue in this context, which is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction by incorporating and taking advantage of the ever-evolving biological information. In this article, we propose a three-way decision making approach which makes provision for seeking and incorporating future information. We considered probabilistic rough sets based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture of protein functions classification with probabilistic rough sets based three-way decisions is proposed and explained. Experiments are carried out on a Saccharomyces cerevisiae species dataset obtained from the Uniprot database with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while maintaining a similar level of accuracy. PMID:28234929
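The three-way decision itself reduces to a pair of probability thresholds that partition predictions into positive, negative, and boundary regions. The (alpha, beta) values and the protein identifiers below are placeholders; in GTRS or ITRS the thresholds would be derived from a game-theoretic or information-theoretic criterion rather than fixed by hand.

```python
def three_way_decision(prob, alpha=0.75, beta=0.35):
    """Map a protein's membership probability to accept / defer / reject.

    alpha and beta are the probabilistic rough-set thresholds; the fixed
    values here are purely illustrative.
    """
    if prob >= alpha:
        return "accept"   # positive region: assign the function
    if prob <= beta:
        return "reject"   # negative region: rule the function out
    return "defer"        # boundary region: wait for more biological information

# Hypothetical membership probabilities for three proteins:
probs = {"P_a": 0.90, "P_b": 0.50, "P_c": 0.10}
decisions = {p: three_way_decision(pr) for p, pr in probs.items()}
```

As the abstract notes, richer biological information sharpens the probabilities, shrinking the boundary region and hence the number of deferred cases.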

  20. Functional CAR models for large spatially correlated functional datasets.

    PubMed

    Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S

    2016-01-01

    We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.

  1. Work Functions for Models of Scandate Surfaces

    NASA Technical Reports Server (NTRS)

    Mueller, Wolfgang

    1997-01-01

    The electronic structure, surface dipole properties, and work functions of scandate surfaces have been investigated using the fully relativistic scattered-wave cluster approach. Three different types of model surfaces are considered: (1) a monolayer of Ba-Sc-O on W(100), (2) Ba or BaO adsorbed on Sc2O3 + W, and (3) BaO on Sc2O3 + WO3. Changes in the work function due to Ba or BaO adsorption on the different surfaces are calculated by employing the depolarization model of interacting surface dipoles. The largest work function change and the lowest work function of 1.54 eV are obtained for Ba adsorbed on the Sc-O monolayer on W(100). The adsorption of Ba on Sc2O3 + W does not lead to a low work function, but the adsorption of BaO results in a work function of about 1.6-1.9 eV. BaO adsorbed on Sc2O3 + WO3, or scandium tungstates, may also lead to low work functions.
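The depolarization model of interacting surface dipoles can be sketched in a Topping-type textbook form, which is one common realization of the idea; this is not the paper's scattered-wave calculation, and all numerical inputs below are invented (only roughly Ba-like in order of magnitude).

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def work_function_change(sigma, p0, alpha):
    """Work-function shift (in volts, numerically eV per electron) from a
    layer of adsorbate dipoles that mutually depolarize.

    sigma : dipole surface density (m^-2)
    p0    : isolated dipole moment (C*m)
    alpha : polarizability volume (m^3)
    """
    # Topping-style depolarization: neighboring dipoles reduce the
    # effective moment as coverage increases.
    p_eff = p0 / (1.0 + 9.0 * alpha * sigma ** 1.5)
    # Helmholtz equation for the potential step across the dipole layer.
    return p_eff * sigma / EPS0

# Illustrative (hypothetical) inputs:
dphi = work_function_change(sigma=5e18, p0=1e-29, alpha=4e-29)
```

The depolarization term makes the shift saturate at high coverage instead of growing linearly with density, which is why such models predict changes on the ~1 eV scale rather than arbitrarily large ones.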

  2. Transfer function modeling of damping mechanisms in distributed parameter models

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  3. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  4. Linearized Functional Minimization for Inverse Modeling

    SciTech Connect

    Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco

    2012-06-21

    Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the hydraulic conductivity and hydraulic head fields, discretized on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
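A one-dimensional toy version can convey the structure of such a functional. It is only a loose sketch of the paper's scheme: the flow equation (term i) is enforced exactly through a forward solve rather than penalized, the local linearization is delegated to scipy's Gauss-Newton-style least_squares, and the grid size, observation locations, weights, and two-facies "truth" are all invented.

```python
import numpy as np
from scipy.optimize import least_squares

# 1-D steady flow d/dx(K dh/dx) = 0 on a grid; unknowns are log-conductivity
# values at each node. Hypothetical synthetic truth: two facies.
n = 30
x = np.linspace(0.0, 1.0, n)
logK_true = np.where(x < 0.5, 0.0, 1.0)

def solve_head(logK):
    # Finite-volume solve with harmonic-mean transmissibilities, h(0)=1, h(1)=0.
    K = np.exp(logK)
    t = 2.0 * K[:-1] * K[1:] / (K[:-1] + K[1:])
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = t[i - 1]
        A[i, i + 1] = t[i]
        A[i, i] = -(t[i - 1] + t[i])
    return np.linalg.solve(A, b)

# Sparse synthetic observations of head and log-conductivity:
h_obs_idx = [5, 15, 25]
h_obs = solve_head(logK_true)[h_obs_idx]
K_obs_idx = [2, 27]
K_obs = logK_true[K_obs_idx]

def residuals(logK):
    h = solve_head(logK)
    return np.concatenate([
        10.0 * (h[h_obs_idx] - h_obs),     # (ii) misfit to head measurements
        10.0 * (logK[K_obs_idx] - K_obs),  # (ii) misfit to conductivity measurements
        1.0 * np.diff(logK),               # (iii) piecewise-smoothness penalty
    ])

# least_squares linearizes the conductivity-to-head map at each iterate.
fit = least_squares(residuals, np.full(n, 0.5))
```

The recovered log-conductivity field should be low on the left half and high on the right, reflecting the two facies, with the smoothness penalty trading off against the data misfit.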

  5. Thermodynamic and redox properties of graphene oxides for lithium-ion battery applications: a first principles density functional theory modeling approach.

    PubMed

    Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon

    2016-07-27

    Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to systematically design stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, types and distribution of functional groups by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the types, number, and distribution of oxygen functional groups. First, the carbonyl group induces a higher redox potential than the hydroxyl group. Second, more carbonyl groups result in a higher redox potential. Lastly, a locally concentrated distribution of carbonyl groups yields a higher redox potential than a uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials.

  6. Translation: Towards a Critical-Functional Approach

    ERIC Educational Resources Information Center

    Sadeghi, Sima; Ketabi, Saeed

    2010-01-01

    The controversy over the place of translation in the teaching of English as a Foreign Language (EFL) is a thriving field of inquiry. Many older language teaching methodologies such as the Direct Method, the Audio-lingual Method, and Natural and Communicative Approaches, tended to either neglect the role of translation, or prohibit it entirely as a…

  7. Functional Approaches to Written Text: Classroom Applications.

    ERIC Educational Resources Information Center

    Miller, Tom, Ed.

    Noting that little in language can be understood without taking into consideration the wider picture of communicative purpose, content, context, and audience, this book addresses practical uses of various approaches to discourse analysis. Several assumptions run through the chapters: knowledge is socially constructed; the manner in which language…

  8. Translation: Towards a Critical-Functional Approach

    ERIC Educational Resources Information Center

    Sadeghi, Sima; Ketabi, Saeed

    2010-01-01

    The controversy over the place of translation in the teaching of English as a Foreign Language (EFL) is a thriving field of inquiry. Many older language teaching methodologies such as the Direct Method, the Audio-lingual Method, and Natural and Communicative Approaches, tended to either neglect the role of translation, or prohibit it entirely as a…

  9. Loop expansion of the average effective action in the functional renormalization group approach

    NASA Astrophysics Data System (ADS)

    Lavrov, Peter M.; Merzlikin, Boris S.

    2015-10-01

    We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group method based on the concept of composite fields, with regulator functions being their most essential ingredients. In a simple gauge model, we demonstrate explicitly the principal difference between the properties of effective actions in these two approaches, which exists already at the one-loop level.

  10. Self-Consistent Green’s Function Approaches

    NASA Astrophysics Data System (ADS)

    Barbieri, Carlo; Carbone, Arianna

    We present the fundamental techniques and working equations of many-body Green's function theory for calculating ground state properties and the spectral strength. Green's function methods closely relate to other polynomial scaling approaches discussed in Chaps. 8 and 10. However, here we aim directly at a global view of the many-fermion structure. We derive the working equations for calculating many-body propagators, using both the Algebraic Diagrammatic Construction technique and the self-consistent formalism at finite temperature. Their implementation is discussed, as well as the inclusion of three-nucleon interactions. The self-consistency feature is essential to guarantee thermodynamic consistency. The pairing and neutron matter models introduced in previous chapters are solved and compared with the other methods in this book.

  11. Statistical approaches and software for clustering islet cell functional heterogeneity

    PubMed Central

    Wills, Quin F.; Boothe, Tobias; Asadi, Ali; Ao, Ziliang; Warnock, Garth L.; Kieffer, Timothy J.

    2016-01-01

    ABSTRACT Worldwide efforts are underway to replace or repair lost or dysfunctional pancreatic β-cells to cure diabetes. However, it is unclear what the final product of these efforts should be, as β-cells are thought to be heterogeneous. To enable the analysis of β-cell heterogeneity in an unbiased and quantitative way, we developed model-free and model-based statistical clustering approaches, and created new software called TraceCluster. Using an example data set, we illustrate the utility of these approaches by clustering dynamic intracellular Ca2+ responses to high glucose in ∼300 simultaneously imaged single islet cells. Using feature extraction from the Ca2+ traces on this reference data set, we identified 2 distinct populations of cells with β-like responses to glucose. To the best of our knowledge, this report represents the first unbiased cluster-based analysis of human β-cell functional heterogeneity from simultaneous recordings. We hope that the approaches and tools described here will be helpful for those studying heterogeneity in primary islet cells, as well as excitable cells derived from embryonic stem cells or induced pluripotent cells. PMID:26909740
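Not TraceCluster itself, but the extract-features-then-cluster recipe can be sketched with synthetic traces. The two simulated populations (a sigmoidal responder to a glucose step and a flat non-responder), the two hand-picked features, and the tiny deterministic k-means are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for single-cell Ca2+ traces: 40 cells respond to a
# "glucose step" at t = 50, 40 cells stay flat (purely illustrative).
t = np.arange(200)
responders = 1.0 / (1.0 + np.exp(-(t - 50) / 5.0)) + 0.1 * rng.normal(size=(40, 200))
silent = 0.1 * rng.normal(size=(40, 200))
traces = np.vstack([responders, silent])

def features(trace):
    # Simple feature extraction: step amplitude and post-step plateau level.
    return np.array([trace[100:].mean() - trace[:40].mean(), trace[100:].mean()])

X = np.array([features(tr) for tr in traces])

def kmeans(X, k=2, iters=20):
    # Minimal k-means; centers seeded from the first and last cell so the
    # sketch is deterministic.
    centers = X[[0, len(X) - 1]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X)
```

On these well-separated features the clustering recovers the two simulated populations exactly; real islet-cell traces would of course need richer features and a model-based treatment as in the paper.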

  12. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  13. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  14. Functional approach to high-throughput plant growth analysis

    PubMed Central

    2013-01-01

    Method Taking advantage of the current rapid development in imaging systems and computer vision algorithms, we present HPGA, a high-throughput phenotyping platform for plant growth modeling and functional analysis, which yields a better understanding of energy distribution with regard to the balance between growth and defense. HPGA has two components, PAE (Plant Area Estimation) and GMA (Growth Modeling and Analysis). In PAE, by taking the complex leaf overlap problem into consideration, the area of every plant is measured from top-view images in four steps. Given the abundant measurements obtained with PAE, in the second module GMA, a nonlinear growth model is applied to generate growth curves, followed by functional data analysis. Results Experimental results on the model plant Arabidopsis thaliana show that, compared to an existing approach, HPGA reduces the error rate of measuring plant area by half. The application of HPGA on the cfq mutant plants under fluctuating light reveals the correlation between low photosynthetic rates and small plant area (compared to wild type), raising the hypothesis that knocking out cfq changes the sensitivity of the energy distribution under fluctuating light conditions to repress leaf growth. Availability HPGA is available at http://www.msu.edu/~jinchen/HPGA. PMID:24565437
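The nonlinear growth-modeling step can be sketched by fitting a logistic curve to an area time series; this is a generic illustration, not HPGA's actual interface, and the data, parameter names, and true values below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth model of the kind commonly fitted to rosette-area series.
def logistic(t, A, k, t0):
    # A: asymptotic area, k: growth rate, t0: inflection time.
    return A / (1.0 + np.exp(-k * (t - t0)))

# Synthetic "plant area" measurements (hypothetical units and truth).
days = np.linspace(0, 20, 40)
rng = np.random.default_rng(7)
area = logistic(days, 12.0, 0.6, 9.0) + rng.normal(0.0, 0.3, days.size)

popt, _ = curve_fit(logistic, days, area, p0=[10.0, 0.5, 10.0])
A_hat, k_hat, t0_hat = popt
max_growth_rate = A_hat * k_hat / 4.0  # slope of the logistic at t = t0
```

Fitted curves like this one give the smooth growth trajectories on which functional data analysis (comparing, say, mutant versus wild-type curves) can then operate.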

  15. Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach

    ERIC Educational Resources Information Center

    Frisby, Craig L.; Parkin, Jason R.

    2007-01-01

    In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…

  16. Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach

    ERIC Educational Resources Information Center

    Frisby, Craig L.; Parkin, Jason R.

    2007-01-01

    In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…

  17. Multicomponent Equilibrium Models for Testing Geothermometry Approaches

    SciTech Connect

    Cooper, D. Craig; Palmer, Carl D.; Smith, Robert W.; McLing, Travis L.

    2013-02-01

    Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations that use a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.
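The multicomponent idea, finding the temperature at which the saturation indices of many minerals simultaneously approach zero, can be sketched with a toy grid search. The mineral list, the van 't Hoff-style log K coefficients, and the water-analysis log Q values are all invented for illustration and carry no real thermodynamic meaning.

```python
# Hypothetical log K(T) = a + b/T for three minerals (invented coefficients).
minerals = {
    "quartz":     (1.0, -1400.0),
    "calcite":    (-2.0, 900.0),
    "k-feldspar": (0.5, -800.0),
}

# Ion activity products from a (made-up) water analysis.
log_q = {"quartz": -2.1, "calcite": 0.05, "k-feldspar": -1.3}

def log_k(a, b, t_kelvin):
    # van 't Hoff-style temperature dependence of the equilibrium constant.
    return a + b / t_kelvin

def total_si2(t_celsius):
    # Sum of squared saturation indices SI = log Q - log K at temperature T;
    # near the true reservoir temperature, all SI values cluster near zero.
    t = t_celsius + 273.15
    return sum((log_q[m] - log_k(a, b, t)) ** 2 for m, (a, b) in minerals.items())

# Grid search over candidate reservoir temperatures (deg C).
best_t = min(range(25, 301), key=total_si2)
```

In practice a full speciation code evaluates SI(T) for dozens of minerals, and the clustering of the curves near zero (rather than a simple least-squares minimum) identifies the reservoir temperature.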

  18. Quantum thermodynamics: a nonequilibrium Green's function approach.

    PubMed

    Esposito, Massimiliano; Ochoa, Maicol A; Galperin, Michael

    2015-02-27

    We establish the foundations of a nonequilibrium theory of quantum thermodynamics for noninteracting open quantum systems strongly coupled to their reservoirs within the framework of the nonequilibrium Green's functions. The energy of the system and its coupling to the reservoirs are controlled by a slow external time-dependent force treated to first order beyond the quasistatic limit. We derive the four basic laws of thermodynamics and characterize reversible transformations. Stochastic thermodynamics is recovered in the weak coupling limit.

  19. Matrix model approach to cosmology

    NASA Astrophysics Data System (ADS)

    Chaney, A.; Lu, Lei; Stern, A.

    2016-03-01

    We perform a systematic search for rotationally invariant cosmological solutions to toy matrix models. These models correspond to the bosonic sector of Lorentzian Ishibashi, Kawai, Kitazawa and Tsuchiya (IKKT)-type matrix models in dimensions d less than ten, specifically d = 3 and d = 5. After taking a continuum (or commutative) limit they yield (d - 1)-dimensional Poisson manifolds. The manifolds have a Lorentzian induced metric which can be associated with closed, open, or static space-times. For d = 3, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a resolution of cosmological singularities, at least within the context of the toy matrix models. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the d = 3 solutions have analogues in higher dimensions. The case of d = 5, in particular, has the potential for yielding realistic four-dimensional cosmologies in the continuum limit. We find four-dimensional de Sitter (dS4) or anti-de Sitter (AdS4) solutions when a totally antisymmetric term is included in the matrix action. A nontrivial Poisson structure is attached to these manifolds which represents the lowest order effect of noncommutativity. For the case of AdS4, we find one particular limit where the lowest order noncommutativity vanishes at the boundary, but not in the interior.

  20. Szekeres models: a covariant approach

    NASA Astrophysics Data System (ADS)

    Apostolopoulos, Pantelis S.

    2017-05-01

    We exploit the 1 + 1 + 2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field E_ab. We show that the quasi-symmetric property of the Szekeres models is justified through the existence of 3 independent intrinsic Killing vector fields (IKVFs). In addition, the notions of the apparent and absolute apparent horizons are briefly discussed, and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express Sachs' optical equations in a covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.

  1. Functional models of power electronic components for system studies

    NASA Technical Reports Server (NTRS)

    Tam, Kwa-Sur; Yang, Lifeng; Dravid, Narayan

    1991-01-01

    A novel approach to modeling power electronic circuits has been developed to facilitate simulation studies of system-level issues. The underlying concept for this approach is to develop an equivalent circuit, the functional model, that performs the same functions as the actual circuit but whose operation can be simulated using a larger time step size. Owing to the larger time step size and the reduction in model complexity, the computation time required by a functional model is significantly shorter than that required by alternative approaches. The authors present this novel modeling approach and discuss the functional models of two major power electronic components, the DC/DC converter unit and the load converter, that are being considered by NASA for use in the Space Station Freedom electric power system. The validity of these models is established by comparing the simulation results with available experimental data and other simulation results obtained by using a more established modeling approach. The usefulness of this approach is demonstrated by incorporating these models into a power system model and simulating the system responses and interactions between components under various conditions.
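
    The functional-model idea (averaging away switching detail so a large time step suffices) can be illustrated with a state-averaged buck DC/DC converter. This is a generic textbook averaged model with made-up parameter values, not the NASA converter models the abstract describes.

```python
# State-averaged ("functional") model of a buck DC/DC converter: the
# switching ripple is averaged out, so a much larger time step can be
# used than in a switch-level model. All values are illustrative.
V_in, duty = 100.0, 0.5          # input voltage (V), duty cycle
L, C, R = 1e-3, 1e-4, 10.0       # inductance (H), capacitance (F), load (ohm)
dt, steps = 1e-5, 50_000         # 0.5 s of simulated time

i, v = 0.0, 0.0                  # inductor current (A), output voltage (V)
for _ in range(steps):
    di = (duty * V_in - v) / L   # averaged inductor equation
    dv = (i - v / R) / C         # capacitor / load equation
    i += di * dt
    v += dv * dt

print(round(v, 1))  # -> 50.0, the expected steady state duty * V_in
```

    A switch-level model of the same circuit would need time steps far below the switching period; the averaged model converges to the same steady state (v = duty × V_in = 50 V, i = v/R = 5 A) with orders of magnitude fewer steps, which is the computational advantage the abstract claims.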

  2. ONION: Functional Approach for Integration of Lipidomics and Transcriptomics Data

    PubMed Central

    Piwowar, Monika; Jurkowski, Wiktor

    2015-01-01

    To date, the massive quantity of data generated by high-throughput techniques has not yet met the bioinformatics treatment required to make full use of it. This is partially due to a mismatch in experimental and analytical study design but primarily due to a lack of adequate analytical approaches. When integrating multiple data types, e.g. transcriptomics and metabolomics, multidimensional statistical methods are currently the techniques of choice. Typical statistical approaches, such as canonical correlation analysis (CCA), that are applied to find associations between metabolites and genes fail due to small numbers of observations (e.g. conditions, diet, etc.) in comparison to data size (number of genes, metabolites). Modifications designed to cope with this issue are not ideal, either because the need to add simulated data precludes p-value computation or because pruning of variables discards potentially valid information. Instead, our approach makes use of verified or putative molecular interactions or functional associations to guide analysis. The workflow includes dividing data sets to reach the expected data structure, statistical analysis within groups, and interpretation of results. By applying pathway and network analysis, data obtained by various platforms are grouped with moderate stringency to avoid functional bias. As a consequence, CCA and other multivariate models can be applied to calculate robust statistics and provide easy-to-interpret associations between metabolites and genes to leverage understanding of metabolic response. Effective integration of lipidomics and transcriptomics is demonstrated on publicly available murine nutrigenomics data sets. We are able to demonstrate that our approach improves detection of genes related to lipid metabolism, in comparison to applying statistics alone. This is measured by an increased percentage of explained variance (95% vs. 75–80%) and by identifying new metabolite-gene associations related to lipid
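
    For readers unfamiliar with CCA, the core computation is small: whiten each data block and take the SVD of the cross-covariance; the singular values are the canonical correlations. The NumPy sketch below uses synthetic "gene" and "lipid" blocks, not the ONION workflow or the murine data sets.

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """First canonical correlation between two column-centred data blocks,
    from the SVD of the whitened cross-covariance matrix."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n + 1e-8 * np.eye(X.shape[1])  # small ridge for stability
    Syy = Y.T @ Y / n + 1e-8 * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(K, compute_uv=False)[0]

rng = np.random.default_rng(0)
genes = rng.normal(size=(200, 5))                               # "transcript" block
lipids = genes @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(200, 3))

r = first_canonical_correlation(genes, lipids)
print(round(r, 2))  # close to 1: the two blocks share a strong linear signal
```

    The failure mode the abstract highlights appears when the number of rows (observations) is comparable to or smaller than the number of columns: the covariance matrices become singular and the correlations saturate at 1 regardless of any real association.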

  3. HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION

    EPA Science Inventory

    Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...

  4. An Instructional Approach to Modeling in Microevolution.

    ERIC Educational Resources Information Center

    Thompson, Steven R.

    1988-01-01

    Describes an approach to teaching population genetics and evolution and some of the ways models can be used to enhance understanding of the processes being studied. Discusses the instructional plan and the use of models, including utility programs and analysis with models. Provided are a basic program and sample program outputs. (CW)
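
    A minimal example of the kind of microevolution model such a lesson might use is the standard one-locus, two-allele selection recursion; the fitness values below are hypothetical classroom parameters, not taken from the article.

```python
# One-locus, two-allele selection model: the classic recursion for the
# frequency p of allele A under genotype fitnesses w_AA, w_Aa, w_aa.
def next_p(p, w_AA=1.0, w_Aa=0.9, w_aa=0.8):
    q = 1.0 - p
    w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # mean population fitness
    return p * (p*w_AA + q*w_Aa) / w_bar        # standard selection recursion

p = 0.1
for _ in range(200):
    p = next_p(p)
print(round(p, 3))  # directional selection drives allele A toward fixation
```

    Students can vary the fitness values (e.g. heterozygote advantage, w_Aa > w_AA) and watch the model settle at a stable polymorphism instead of fixation, which is the kind of exploration the article advocates.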

  5. Nonrelativistic approaches derived from point-coupling relativistic models

    SciTech Connect

    Lourenco, O.; Dutra, M.; Delfino, A.; Sa Martins, J. S.

    2010-03-15

    We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small-component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A = B = 0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ³, leads to parametrizations for which σ = 1.

  6. A system decomposition approach to the design of functional observers

    NASA Astrophysics Data System (ADS)

    Fernando, Tyrone; Trinh, Hieu

    2014-09-01

    This paper reports a system decomposition that allows the construction of a minimum-order functional observer using a state observer design approach. The system decomposition translates the functional observer design problem to that of a state observer for a smaller decomposed subsystem. Functional observability indices are introduced, and a closed-form expression for the minimum order required for a functional observer is derived in terms of those functional observability indices.

  7. Computational modelling approaches to vaccinology.

    PubMed

    Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo

    2015-02-01

    Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, from the molecular to the organism and even the population level.
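
    At the population end of that scale, the simplest simulation of a vaccination strategy is a compartmental SIR model with a vaccinated fraction removed from the susceptible pool. The sketch below is a generic toy model with hypothetical parameters, not one of the specific tools the review evaluates.

```python
# Toy SIR model: a fraction v of the population is vaccinated at t = 0
# (moved straight to the removed class). Parameters are hypothetical.
def final_epidemic_size(v, beta=0.3, gamma=0.1, days=1000, dt=0.1):
    S, I, R = 1.0 - v - 0.001, 0.001, v   # vaccinees start in R
    for _ in range(int(days / dt)):
        new_inf = beta * S * I * dt       # mass-action incidence
        rec = gamma * I * dt              # recoveries
        S -= new_inf
        I += new_inf - rec
        R += rec
    return R - v                          # infections, excluding vaccinees

no_vax = final_epidemic_size(0.0)
half_vax = final_epidemic_size(0.5)
print(round(no_vax, 2), round(half_vax, 2))  # vaccination shrinks the epidemic
```

    With beta/gamma = 3, vaccinating half the population cuts the effective reproduction number to about 1.5, and the attack rate drops sharply; agent-based analogues of this experiment are among the vaccinology simulations the review surveys.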

  8. Structure Leads To Function: An Integrated Biophysical Approach To Teaching a Biochemistry Laboratory.

    ERIC Educational Resources Information Center

    deLannoy, Peter; And Others

    1996-01-01

    Describes an integrated approach to teaching a biochemistry laboratory focusing on the relationship between the three-dimensional structure of a macromolecule and its function. RNA is chosen as the model system. Discusses curriculum and student assessment. (AIM)

  9. Modeling of Mid-Frequency Reverberation in Very Shallow Water: A Green’s Function Approach and Application to TREX2013 Data Analysis

    DTIC Science & Technology

    2015-08-31

    Topics include reverberation inversion, TREX13 data analysis, and model-data comparisons, along with analytical expressions for reverberation and methods for inverting it for environmental parameters. A related reference: B.T. Hefner, "Physics-based inversion of multibeam sonar data for seafloor characterization", J. Acoust. Soc. Amer. 134(4), Pt. 2, p. 4240. Distribution Statement A: Approved for public release; distribution unlimited.

  10. Modeling and simulation of molecular biology systems using petri nets: modeling goals of various approaches.

    PubMed

    Hardy, Simon; Robillard, Pierre N

    2004-12-01

    Petri nets are a discrete event simulation approach developed for system representation, in particular for capturing concurrency and synchronization properties. Various extensions to the original theory of Petri nets have been used for modeling molecular biology systems and metabolic networks. These extensions are stochastic, colored, hybrid and functional. This paper presents an initial review of the various Petri net-based modeling approaches found in the literature and of the biological systems that have been successfully modeled with them. Moreover, the modeling goals and the possibilities for qualitative analysis and system simulation of each approach are discussed.
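
    To make the formalism concrete, here is a minimal place/transition Petri net of an enzymatic reaction, simulated by the usual token game. This is the bare, unextended formalism (no stochastic rates, colors, or continuous places), chosen only to illustrate markings, enabling, and firing.

```python
# Minimal place/transition Petri net for the enzymatic reaction
# E + S -> ES -> E + P, simulated by repeatedly firing enabled transitions.
marking = {"E": 1, "S": 5, "ES": 0, "P": 0}          # tokens per place
transitions = {
    "bind":    ({"E": 1, "S": 1}, {"ES": 1}),         # (inputs, outputs)
    "convert": ({"ES": 1},        {"E": 1, "P": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

# Deterministic schedule: fire any enabled transition until none remains.
while True:
    runnable = [t for t in transitions if enabled(t)]
    if not runnable:
        break
    fire(runnable[0])

print(marking)  # all substrate converted to product; the enzyme is conserved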

  11. Challenges in structural approaches to cell modeling

    PubMed Central

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A.

    2016-01-01

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. PMID:27255863

  12. Cost function approach for estimating derived demand for composite wood products

    Treesearch

    T. C. Marcin

    1991-01-01

    A cost function approach was examined for using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and derived conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
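
    The translog specification the abstract refers to is standard in production economics. As a reference point (these are the textbook forms, not the paper's exact estimating equations), the cost function and the factor cost-share equations obtained from it via Shephard's lemma can be written as

```latex
\ln C = \alpha_0 + \sum_i \alpha_i \ln p_i
      + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln p_i \ln p_j ,
\qquad
s_i \equiv \frac{\partial \ln C}{\partial \ln p_i}
    = \alpha_i + \sum_j \beta_{ij} \ln p_j ,
```

    with symmetry ($\beta_{ij} = \beta_{ji}$) and linear homogeneity in input prices ($\sum_i \alpha_i = 1$, $\sum_j \beta_{ij} = 0$) as the usual parameter restrictions; the "alternative models" mentioned in the abstract arise from imposing subsets of such restrictions.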

  13. Sturmian function approach and N̄N bound states

    SciTech Connect

    Yan, Y.; Tegen, R.; Gutsche, T.; Faessler, A.

    1997-09-01

    A suitable numerical approach based on Sturmian functions is employed to solve the N̄N bound state problem for local and nonlocal potentials. The approach accounts for both the strong short-range nuclear potential and the long-range Coulomb force and provides directly the wave function of protonium and N̄N deep bound states with complex eigenvalues E = E_R − i(Γ/2). The spectrum of N̄N bound states has two parts: the atomic states, bound by several keV, and the deep bound states, which are bound by several hundred MeV. The observed very small hyperfine splitting of the 1s level and the 1s and 2p decay widths are reasonably well reproduced by both the Paris and Bonn potentials (supplemented with a microscopically derived quark annihilation potential), although there are differences in magnitude and level ordering. We present further arguments for the identification of the ¹³PF₂ deep bound state with the exotic tensor meson f₂(1520). Both investigated models can accommodate the f₂(1520) but differ greatly in the total number of levels and in their ordering. The model based on the Paris potential predicts the ¹³P₀ level slightly below 1.1 GeV, while the model based on the Bonn potential puts this state below 0.8 GeV. It remains to be seen if this state can be identified with a scalar partner of the f₂(1520). © 1997 The American Physical Society

  14. Distinguishing treatment from research: a functional approach

    PubMed Central

    Lewens, T

    2006-01-01

    The best way to distinguish treatment from research is by their functions. This mode of distinction fits well with the basic ethical work that needs to be carried out. The distinction needs to serve as an ethical flag, highlighting areas in which the goals of doctors and patients are more likely than usual to diverge. The distinction also allows us to illuminate and understand some otherwise puzzling elements of debates on research ethics: it shows the peculiarity of exclusive conceptions of the distinction between research and treatment; it allows us to frame questions about therapeutic obligations in the research context, and it allows us to consider whether there may be research obligations in the therapeutic context. PMID:16816045

  15. Social learning in Models and Cases - an Interdisciplinary Approach

    NASA Astrophysics Data System (ADS)

    Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger

    2016-04-01

    Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories of social learning in innovation, diffusion and transition research. We present theoretical understandings of social learning in techno-economic and agent-based modelling. We then elaborate on empirical research on social learning in transition case studies, and identify and synthesize key dimensions of social learning in those studies. Next, we bridge the more formal, generalising modelling approaches to social learning processes and the more descriptive, individualising case study approaches by condensing the case study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then vary, by way of example, functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach, both methodologically and empirically.

  16. Interactively Open Autonomy Unifies Two Approaches to Function

    NASA Astrophysics Data System (ADS)

    Collier, John

    2004-08-01

    Functionality is essential to any form of anticipation beyond simple directedness at an end. In the literature on function in biology, there are two distinct approaches. One, the etiological view, places the origin of function in selection, while the other, the organizational view, individuates function by organizational role. Both approaches have well-known advantages and disadvantages. I propose a reconciliation of the two approaches, based on an interactivist approach to the individuation and stability of organisms. The approach was suggested by Kant in the Critique of Judgment, but since it requires, on his account, the identification of a new form of causation, it has not been accessible to analytical techniques. I proceed by constructing the required concept to fit certain design requirements. This construction builds on concepts introduced in my previous four talks to these meetings.

  17. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
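
    The Markov models such tools generate reduce to Kolmogorov forward equations over a generator matrix. The toy below solves a two-component parallel system by simple Euler integration; the failure rate and mission time are illustrative, and a real tool like HARP uses far more sophisticated numerics on much larger truncated state spaces.

```python
import numpy as np

# Toy continuous-time Markov reliability model: two-component parallel
# system, state = number of failed components (0, 1, 2), state 2 absorbing.
lam = 1e-3                          # failures per hour, illustrative
Q = np.array([[-2*lam, 2*lam, 0.0],
              [0.0,   -lam,   lam],
              [0.0,    0.0,   0.0]])   # generator matrix

p = np.array([1.0, 0.0, 0.0])       # start with both components up
dt, hours = 0.1, 1000.0
for _ in range(int(hours / dt)):
    p = p + (p @ Q) * dt            # Euler step of dp/dt = p Q

unreliability = p[2]                # P(system failed by t = 1000 h)
print(round(unreliability, 4))      # analytic value: (1 - exp(-lam*t))**2
```

    The analytic answer for independent components, (1 − e^{−λt})², makes this a handy sanity check before trusting the same machinery on a state space too large to solve by hand.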

  19. Challenges and opportunities for integrating lake ecosystem modelling approaches

    USGS Publications Warehouse

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  20. Understanding the Dual Inhibition of COX-2 and Carbonic Anhydrase-II by Celecoxib and CG100649 Using Density Functional Theory Calculations and other Molecular Modelling Approaches.

    PubMed

    Singh, Omkar; Kakularam, Kumar R; Reddanna, Pallu; Aparoy, Polamarasetty

    2015-01-01

    Recent developments in dual inhibition studies of cyclooxygenase-2 (COX-2) and carbonic anhydrase (CA-II) suggest a promising platform for the development of new generations of nonsteroidal anti-inflammatory drugs (NSAIDs). CG100649 is one such molecule; it was recently approved by the Korean Ministry of Food and Drug Safety (MFDS) and is marketed under the name polmacoxib for the treatment of osteoarthritis. CG100649 significantly inhibits CA-II in blood and COX-2 in inflammatory tissues. However, the mechanism of CG100649 dual inhibition of COX-2/CA-II is not well understood. In this study, we employed well-known methods such as pharmacophore modelling, DFT-based quantum chemical descriptor analysis, and molecular docking to explore the chemical features and to understand the binding behaviour of CG100649 along with other COX-2/CA-II dual inhibitors. The HOMO-LUMO and docking results indicated the prominent role of the aryl sulphonamide in CG100649. The aryl sulphonamide moiety formed T-shaped π–π interactions with His94 in the CA-II active site, which was not observed in the case of celecoxib. Other crucial interactions were also observed, which may aid in further understanding the action of dual inhibitors of this class.

  1. Hybrid approaches to physiologic modeling and prediction

    NASA Astrophysics Data System (ADS)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
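
    The hybrid scheme can be sketched compactly: fit a data-driven model to the residuals of the first-principles model, then add the correction to its predictions. The sketch below uses a synthetic signal and an AR(1) residual model, standing in for the paper's physiological model and its autoregressive/neural-network components; data and parameters are entirely made up.

```python
import numpy as np

# Hybrid idea: a biased "first-principles" model is corrected by an
# AR(1) model fitted to its residuals. All data here are synthetic.
rng = np.random.default_rng(1)
t = np.arange(300, dtype=float)
truth = 37.0 + 0.5 * np.sin(t / 30.0) + 0.01 * rng.normal(size=300)
base = 37.0 + 0.4 * np.sin(t / 30.0)      # deliberately biased model

resid = truth - base

# Fit AR(1) on the first half of the residuals: r[k] ~ phi * r[k-1].
train = resid[:150]
phi = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])

# One-step-ahead hybrid prediction on the second half.
hybrid = base[150:] + phi * resid[149:-1]

rmse_base = np.sqrt(np.mean((truth[150:] - base[150:]) ** 2))
rmse_hyb = np.sqrt(np.mean((truth[150:] - hybrid) ** 2))
print(rmse_hyb < rmse_base)  # the AR correction reduces the error
```

    The catch the abstract's cross-subject result illustrates is that the residual model captures structure specific to the training data; when the bias pattern changes (another subject), the learned correction may no longer transfer.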

  2. Functional infrared imaging in medicine: a quantitative diagnostic approach.

    PubMed

    Merla, A; Romani, G L

    2006-01-01

    The role and the potential of high-resolution infrared thermography, combined with bio-heat modelling, have been widely described in recent years in a wide variety of biomedical applications. Quantitative assessment over time of the cutaneous temperature and/or of other biomedical parameters related to the temperature (e.g., cutaneous blood flow, thermal inertia, sympathetic skin response) allows for a better and more complete understanding and description of the functional processes involved and/or altered in the presence of disease and interfering with regular cutaneous thermoregulation. Such an approach to thermal medical imaging requires both new methodologies and new tools: diagnostic paradigms, appropriate software for data analysis, and even a completely new way of looking at data processing. In this paper, some of the studies recently conducted in our laboratory are presented and described, with the general intent of introducing the reader to these innovative methods for obtaining quantitative diagnostic tools based on thermal imaging.

  3. Changes in water budgets and sediment yields from a hypothetical agricultural field as a function of landscape and management characteristics--A unit field modeling approach

    USGS Publications Warehouse

    Roth, Jason L.; Capel, Paul D.

    2012-01-01

    Crop agriculture occupies 13 percent of the conterminous United States. Agricultural management practices, such as crop and tillage types, affect the hydrologic flow paths through the landscape. Some agricultural practices, such as drainage and irrigation, create entirely new hydrologic flow paths upon the landscapes where they are implemented. These hydrologic changes can affect the magnitude and partitioning of water budgets and sediment erosion. Given the wide degree of variability amongst agricultural settings, changes in the magnitudes of hydrologic flow paths and sediment erosion induced by agricultural management practices commonly are difficult to characterize, quantify, and compare using only field observations. The Water Erosion Prediction Project (WEPP) model was used to simulate two landscape characteristics (slope and soil texture) and three agricultural management practices (land cover/crop type, tillage type, and selected agricultural land management practices) to evaluate their effects on the water budgets of and sediment yield from agricultural lands. An array of sixty-eight 60-year simulations were run, each representing a distinct natural or agricultural scenario with various slopes, soil textures, crop or land cover types, tillage types, and select agricultural management practices on an isolated 16.2-hectare field. Simulations were made to represent two common agricultural climate regimes: arid with sprinkler irrigation and humid. These climate regimes were constructed with actual climate and irrigation data. The results of these simulations demonstrate the magnitudes of potential changes in water budgets and sediment yields from lands as a result of landscape characteristics and agricultural practices adopted on them. These simulations showed that variations in landscape characteristics, such as slope and soil type, had appreciable effects on water budgets and sediment yields. 
As slopes increased, sediment yields increased in both the arid and

  4. Mass functions in coupled dark energy models

    SciTech Connect

    Mainini, Roberto; Bonometto, Silvio

    2006-08-15

    We evaluate the mass function of virialized halos, by using the Press and Schechter (PS) and/or Sheth and Tormen (ST) expressions, for cosmologies where dark energy (DE) is due to a scalar self-interacting field coupled with dark matter (DM). We keep to coupled DE (cDE) models known to fit linear observables. To implement the PS-ST approach, we start by reviewing and extending the results of a previous work on the growth of a spherical top-hat fluctuation in cDE models, confirming their most intriguing astrophysical feature, i.e. a significant baryon-DM segregation occurring well before the onset of any hydrodynamical effect. Accordingly, the predicted mass function depends on how halo masses are measured. For any option, however, the coupling causes a distortion of the mass function, still at z=0. Furthermore, the z-dependence of cDE mass functions is mostly displaced, with respect to ΛCDM, in the opposite way of uncoupled dynamical DE. This is an aspect of the basic underlying result that even a little DM-DE coupling induces relevant modifications in the nonlinear evolution. Therefore, without causing great shifts in linear astrophysical observables, the DM-baryon segregation induced by the coupling can have an impact on a number of cosmological problems, e.g., galaxy satellite abundance, spiral disk formation, apparent baryon shortage, entropy input in clusters, etc.
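
    For reference, the standard Press-Schechter mass function used in such analyses (written here in the usual textbook notation, not in the paper's cDE-modified form) is

```latex
\frac{dn}{dM} \;=\; \sqrt{\frac{2}{\pi}}\,
\frac{\bar{\rho}_m}{M^{2}}\,
\frac{\delta_c}{\sigma(M)}\,
\left|\frac{d\ln\sigma}{d\ln M}\right|\,
\exp\!\left[-\frac{\delta_c^{2}}{2\,\sigma^{2}(M)}\right],
```

    where $\bar{\rho}_m$ is the mean matter density, $\sigma(M)$ the rms linear density fluctuation on mass scale $M$, and $\delta_c \simeq 1.686$ the critical collapse threshold; the Sheth-Tormen expression modifies the exponential multiplicity function with additional fitted parameters. In cDE models the coupling enters through $\delta_c$, $\sigma(M)$, and the definition of the halo mass itself, which is why the predicted mass function is distorted even at $z = 0$.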

  5. Searching for new mathematical growth model approaches for Listeria monocytogenes.

    PubMed

    Valero, A; Hervás, C; García-Gimeno, R M; Zurera, G

    2007-01-01

    Different secondary modeling approaches for the estimation of Listeria monocytogenes growth rate as a function of temperature (4 to 30 degrees C), citric acid (0% to 0.4% w/v), and ascorbic acid (0% to 0.4% w/v) are presented. Response surface (RS) and square-root (SR) models are proposed together with different artificial neural networks (ANN) based on product function units (PU), sigmoidal function units (SU), and a novel approach based on the use of hybrid function units (PSU), which result from a combination of PU and SU. In this study, a significantly better goodness-of-fit was obtained for the ANN models presented, reflected by the lower SEP values obtained (<24.23 for both training and generalization datasets). Among these models, the SU model provided the best generalization capacity, displaying lower RMSE and SEP values with fewer parameters compared to the PU and PSU models. The bias factor (B_f) and accuracy factor (A_f) of the mathematical validation dataset were above 1 in all cases, providing fail-safe predictions. The balance between generalization properties and ease of use is the main consideration when applying secondary modeling approaches to achieve accurate predictions about the behavior of microorganisms.
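
    The square-root (SR) family mentioned above is typically the Ratkowsky form, sqrt(mu) = b(T − Tmin), which is linear in temperature after the square-root transform and so can be fitted by ordinary least squares. The sketch below uses noise-free synthetic rates with made-up parameters, not the paper's L. monocytogenes data, and only the temperature term (no acid effects).

```python
import numpy as np

# Square-root (Ratkowsky-type) secondary model: sqrt(mu) = b * (T - Tmin),
# fitted by ordinary least squares. Parameter values are illustrative.
b_true, Tmin_true = 0.03, 1.0
T = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 25.0, 30.0])   # deg C
mu = (b_true * (T - Tmin_true)) ** 2       # noise-free synthetic growth rates

# sqrt(mu) is linear in T: slope = b, intercept = -b * Tmin.
slope, intercept = np.polyfit(T, np.sqrt(mu), 1)
b_hat, Tmin_hat = slope, -intercept / slope
print(round(b_hat, 3), round(Tmin_hat, 1))  # -> 0.03 1.0
```

    Note that Tmin is a notional (extrapolated) minimum growth temperature, not an observed one; extending the model with citric and ascorbic acid terms, or replacing it with the ANN variants, is exactly the comparison the paper carries out.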

  6. Defining and Applying a Functionality Approach to Intellectual Disability

    ERIC Educational Resources Information Center

    Luckasson, R.; Schalock, R. L.

    2013-01-01

    Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…

  8. Heterogeneous Factor Analysis Models: A Bayesian Approach.

    ERIC Educational Resources Information Center

    Ansari, Asim; Jedidi, Kamel; Dube, Laurette

    2002-01-01

    Developed Markov Chain Monte Carlo procedures to perform Bayesian inference, model checking, and model comparison in heterogeneous factor analysis. Tested the approach with synthetic data and data from a consumption emotion study involving 54 consumers. Results show that traditional psychometric methods cannot fully capture the heterogeneity in…

  9. Modeling Functions with the Calculator Based Ranger.

    ERIC Educational Resources Information Center

    Sherrill, Donna; Tibbs, Peggy

    This paper presents two mathematics activities that model functions studied using the Calculator Based Ranger (CBR) software for TI-82 and TI-83 graphing calculators. The activities concern a bouncing ball experiment and modeling a decaying exponential function. (ASK)

  10. Algebraic and Geometric Approach in Function Problem Solving

    ERIC Educational Resources Information Center

    Mousoulides, Nikos; Gagatsis, Athanasios

    2004-01-01

    This study explores students' algebraic and geometric approaches to solving tasks on functions, and the relation of these approaches to complex geometric problem solving. Data were obtained from 95 sophomore pre-service teachers enrolled in a basic algebra course. Implicative statistical analysis was performed to evaluate the relation between…

  11. Combining Formal and Functional Approaches to Topic Structure

    ERIC Educational Resources Information Center

    Zellers, Margaret; Post, Brechtje

    2012-01-01

    Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to…

  12. Density functional theory-based simulations of sum frequency generation spectra involving methyl stretching vibrations: effect of the molecular model on the deduced molecular orientation and comparison with an analytical approach

    NASA Astrophysics Data System (ADS)

    Cecchet, F.; Lis, D.; Caudano, Y.; Mani, A. A.; Peremans, A.; Champagne, B.; Guthmuller, J.

    2012-03-01

    The knowledge of the first hyperpolarizability tensor elements of molecular groups is crucial for a quantitative interpretation of the sum frequency generation (SFG) activity of thin organic films at interfaces. Here, the SFG response of the terminal methyl group of a dodecanethiol (DDT) monolayer has been interpreted on the basis of calculations performed at the density functional theory (DFT) level of approximation. In particular, DFT calculations have been carried out on three classes of models for the aliphatic chains. The first class of models consists of aliphatic chains, containing from 3 to 12 carbon atoms, in which only one methyl group can freely vibrate, while the rest of the chain is frozen by a strong overweight of its C and H atoms. This enables us to localize the probed vibrational modes on the methyl group. In the second class, only one methyl group is frozen, while the entire remaining chain is allowed to vibrate. This enables us to analyse the influence of the aliphatic chain on the methyl stretching vibrations. Finally, the dodecanethiol (DDT) molecule is considered, for which the effects of two dielectrics, i.e. n-hexane and n-dodecane, are investigated. Moreover, DDT calculations are also carried out by using different exchange-correlation (XC) functionals in order to assess the DFT approximations. Using the DFT IR vectors and Raman tensors, the SFG spectrum of DDT has been simulated and the orientation of the methyl group has then been deduced and compared with that obtained using an analytical approach based on a bond additivity model. This analysis shows that when using DFT molecular properties, the predicted orientation of the terminal methyl group tends to converge as a function of the alkyl chain length and that the effects of the chain as well as of the dielectric environment are small. Instead, a more significant difference is observed when comparing the DFT-based results with those obtained from the analytical approach, thus indicating…

  13. Density functional theory-based simulations of sum frequency generation spectra involving methyl stretching vibrations: effect of the molecular model on the deduced molecular orientation and comparison with an analytical approach.

    PubMed

    Cecchet, F; Lis, D; Caudano, Y; Mani, A A; Peremans, A; Champagne, B; Guthmuller, J

    2012-03-28

    The knowledge of the first hyperpolarizability tensor elements of molecular groups is crucial for a quantitative interpretation of the sum frequency generation (SFG) activity of thin organic films at interfaces. Here, the SFG response of the terminal methyl group of a dodecanethiol (DDT) monolayer has been interpreted on the basis of calculations performed at the density functional theory (DFT) level of approximation. In particular, DFT calculations have been carried out on three classes of models for the aliphatic chains. The first class of models consists of aliphatic chains, containing from 3 to 12 carbon atoms, in which only one methyl group can freely vibrate, while the rest of the chain is frozen by a strong overweight of its C and H atoms. This enables us to localize the probed vibrational modes on the methyl group. In the second class, only one methyl group is frozen, while the entire remaining chain is allowed to vibrate. This enables us to analyse the influence of the aliphatic chain on the methyl stretching vibrations. Finally, the dodecanethiol (DDT) molecule is considered, for which the effects of two dielectrics, i.e. n-hexane and n-dodecane, are investigated. Moreover, DDT calculations are also carried out by using different exchange-correlation (XC) functionals in order to assess the DFT approximations. Using the DFT IR vectors and Raman tensors, the SFG spectrum of DDT has been simulated and the orientation of the methyl group has then been deduced and compared with that obtained using an analytical approach based on a bond additivity model. This analysis shows that when using DFT molecular properties, the predicted orientation of the terminal methyl group tends to converge as a function of the alkyl chain length and that the effects of the chain as well as of the dielectric environment are small. Instead, a more significant difference is observed when comparing the DFT-based results with those obtained from the analytical approach, thus indicating…

  14. Functional integral approach: a third formulation of quantum statistical mechanics.

    PubMed

    Dai, Xian Xi; Evenson, William E

    2002-02-01

    Quantum statistical mechanics has developed primarily through two approaches, pioneered by Gibbs and Feynman, respectively. In Gibbs' method one calculates partition functions from phase-space integrations or sums over stationary states. Alternatively, in Feynman's approach, the focus is on the path-integral formulation. The Hubbard-Stratonovich transformation leads to a functional-integral formulation for calculating partition functions. We outline here the functional integral approach to quantum statistical mechanics, including generalizations and improvements to Hubbard's formulation. We show how the dimensionality of the integrals is reduced exactly, how the problem of assuming an unknown canonical transformation is avoided, how the reality of the partition function in the complex representation is guaranteed, and how the extremum conditions are simplified. This formulation can be applied to general systems, including superconductors.
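    At the heart of this formulation is the scalar Hubbard-Stratonovich identity, exp(a^2/2) = (2*pi)^(-1/2) * integral of exp(-x^2/2 + a*x) dx, which trades a quadratic exponent for a Gaussian integral over an auxiliary variable. A numerical check of the scalar case only (the operator-valued generalization used in the paper is not attempted here):

```python
import math

def hs_lhs(a):
    """Left side of the scalar Hubbard-Stratonovich identity: exp(a^2 / 2)."""
    return math.exp(0.5 * a * a)

def hs_rhs(a, x_max=10.0, n=200_000):
    """Right side: (1/sqrt(2*pi)) * integral of exp(-x^2/2 + a*x) over
    [-x_max, x_max], approximated with the trapezoidal rule."""
    h = 2.0 * x_max / n
    total = 0.0
    for i in range(n + 1):
        x = -x_max + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.exp(-0.5 * x * x + a * x)
    return total * h / math.sqrt(2.0 * math.pi)
```

    In the functional-integral formulation the scalar a becomes a field, and the identity is applied inside the trace defining the partition function.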

  15. Defining mental disorder. Exploring the 'natural function' approach

    PubMed Central

    2011-01-01

    Due to several socio-political factors, to many psychiatrists only a strictly objective definition of mental disorder, free of value components, seems really acceptable. In this paper, I will explore a variant of such an objectivist approach to defining mental disorder, natural function objectivism. Proponents of this approach make recourse to the notion of natural function in order to reach a value-free definition of mental disorder. The exploration of Christopher Boorse's 'biostatistical' account of natural function (1) will be followed by an investigation of the 'hybrid naturalism' approach to natural functions by Jerome Wakefield (2). In the third part, I will explore two proposals that call into question the whole attempt to define mental disorder (3). I will conclude that while 'natural function objectivism' accounts fail to provide the backdrop for a reliable definition of mental disorder, there is no compelling reason to conclude that a definition cannot be achieved. PMID:21255405

  16. Defining mental disorder. Exploring the 'natural function' approach.

    PubMed

    Varga, Somogy

    2011-01-21

    Due to several socio-political factors, to many psychiatrists only a strictly objective definition of mental disorder, free of value components, seems really acceptable. In this paper, I will explore a variant of such an objectivist approach to defining mental disorder, natural function objectivism. Proponents of this approach make recourse to the notion of natural function in order to reach a value-free definition of mental disorder. The exploration of Christopher Boorse's 'biostatistical' account of natural function (1) will be followed by an investigation of the 'hybrid naturalism' approach to natural functions by Jerome Wakefield (2). In the third part, I will explore two proposals that call into question the whole attempt to define mental disorder (3). I will conclude that while 'natural function objectivism' accounts fail to provide the backdrop for a reliable definition of mental disorder, there is no compelling reason to conclude that a definition cannot be achieved.

  17. Function Model for Community Health Service Information

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong

    In order to construct a function model of community health service (CHS) information for the development of a CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard extended from the Structured Analysis and Design Technique (SADT) and now a widely used function modeling method, was used to classify its information from top to bottom. The contents of every level of the model were described and coded. A function model for CHS information, which includes 4 super-classes, 15 classes and 28 sub-classes of business function, 43 business processes and 168 business activities, was then established. This model can facilitate information management system development and workflow refinement.

  18. Wave-function model for the CP violation in mesons

    NASA Astrophysics Data System (ADS)

    Saberi Fathi, S. M.; Courbage, M.; Durt, T.

    2017-10-01

    In this paper, we propose a simple quantum model of kaon decay providing an estimate of the CP symmetry violation parameter. We use the two-level Friedrichs Hamiltonian model to obtain good quantitative agreement with the experimental estimate of the violation parameter for neutral kaons. A temporal wave-function approach, based on an analogy with spatial wave-functions, plays a crucial role in our model.

  19. Belief Function Model for Information Retrieval.

    ERIC Educational Resources Information Center

    Silva, Wagner Teixeira da; Milidiu, Ruy Luiz

    1993-01-01

    Describes the Belief Function Model for automatic indexing and ranking of documents which is based on a controlled vocabulary and on term frequencies in each document. Belief Function Theory is explained, and the Belief Function Model is compared to the Standard Vector Space Model. (17 references) (LRW)

  20. A stochastic approach to model validation

    NASA Astrophysics Data System (ADS)

    Luis, Steven J.; McLaughlin, Dennis

    This paper describes a stochastic approach for assessing the validity of environmental models. In order to illustrate basic concepts we focus on the problem of modeling moisture movement through an unsaturated porous medium. We assume that the modeling objective is to predict the mean distribution of moisture content over time and space. The mean moisture content describes the large-scale flow behavior of most interest in many practical applications. The model validation process attempts to determine whether the model's predictions are acceptably close to the mean. This can be accomplished by comparing small-scale measurements of moisture content to the model's predictions. Differences between these two quantities can be attributed to three distinct 'error sources': (1) measurement error, (2) spatial heterogeneity, and (3) model error. If we adopt appropriate stochastic descriptions for the first two sources of error we can view model validation as a hypothesis testing problem where the null hypothesis states that model error is negligible. We illustrate this concept by comparing the predictions of a simple two-dimensional deterministic model to measurements collected during a field experiment carried out near Las Cruces, New Mexico. Preliminary results from this field test indicate that a stochastic approach to validation can identify model deficiencies and provide objective standards for model performance.
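    Treating validation as a hypothesis test, as described above, can be sketched as a z-test on the mean residual: under the null hypothesis that model error is negligible, each observation-minus-prediction residual is zero-mean with variance equal to measurement error plus spatial heterogeneity. The variance values below are illustrative, not taken from the Las Cruces experiment:

```python
import math

def validation_z(observed, predicted, var_meas, var_het):
    """Test statistic for H0: 'model error is negligible'. Under H0 each
    residual has mean 0 and variance var_meas + var_het, so the
    standardized mean residual is approximately standard normal."""
    n = len(observed)
    residuals = [o - p for o, p in zip(observed, predicted)]
    mean_res = sum(residuals) / n
    return mean_res / math.sqrt((var_meas + var_het) / n)
```

    One would reject the null at the 5% level when |z| > 1.96; a large |z| signals model deficiencies beyond what measurement noise and heterogeneity can explain.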

  1. Challenges in structural approaches to cell modeling.

    PubMed

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field.

  2. The Functional-Notional Approach: From Theory to Practice.

    ERIC Educational Resources Information Center

    Finocchiaro, Mary; Brumfit, Christopher

    Both the theoretical basis of functional-notionalism and its practical classroom applications are discussed in this text. The major characteristic of the functional-notional approach to language teaching is a sensitivity to the individual needs of students. Based on the idea that the ability to use real, appropriate language to communicate with…

  3. Improving Treatment Integrity through a Functional Approach to Intervention Support

    ERIC Educational Resources Information Center

    Liaupsin, Carl J.

    2015-01-01

    A functional approach to intervention planning has been shown to be effective in reducing problem behaviors and promoting appropriate behaviors in children and youth with behavior disorders. When function-based intervention plans are not successful, it is often due to issues of treatment integrity in which teachers omit or do not sufficiently…

  4. Chemical biology approaches to membrane homeostasis and function.

    PubMed

    Takahashi-Umebayashi, Miwa; Pineau, Ludovic; Hannich, Thomas; Zumbuehl, Andreas; Doval, David Alonso; Matile, Stefan; Heinis, Christian; Turcatti, Gerardo; Loewith, Robbie; Roux, Aurélien; Reymond, Luc; Johnsson, Kai; Riezman, Howard

    2011-01-01

    The study of membranes is at a turning point. New theories about membrane structure and function have recently been proposed; however, new technologies combining chemical, physical, and biochemical approaches are necessary to test these hypotheses. In particular, the NCCR in chemical biology aims to visualize and characterize membrane microdomains and determine their function during hormone signaling.

  5. Using a Functional Approach in Assessing Written Texts.

    ERIC Educational Resources Information Center

    Nunan, David

    It is argued that assessment of student writing can be enhanced by adoption of a functional approach to linguistic analysis; through their research, functional grammarians have provided language teachers with criteria for evaluating the extent to which learners have gained control of the grammatical and discourse features of a variety of…

  6. Towards new approaches in phenological modelling

    NASA Astrophysics Data System (ADS)

    Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas

    2014-05-01

    Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. There is now a growing body of opinion calling for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The limiting factor for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly sampling between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
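    The classical temperature-sum approach that the record traces back to Reaumur (1735) is easy to state concretely. A sketch with an illustrative base temperature and forcing requirement (real semi-mechanistic models fit these parameters to observations):

```python
def growing_degree_days(daily_t_min, daily_t_max, t_base=5.0):
    """Reaumur-style temperature sum: daily mean temperature above a base
    temperature, accumulated over the season (t_base = 5 C is a common
    but model-specific choice)."""
    gdd = 0.0
    for t_min, t_max in zip(daily_t_min, daily_t_max):
        gdd += max(0.0, (t_min + t_max) / 2.0 - t_base)
    return gdd

def day_of_stage(daily_t_min, daily_t_max, forcing_requirement, t_base=5.0):
    """1-based day index on which the accumulated forcing first meets the
    requirement (e.g. predicted onset of flowering); None if never reached."""
    gdd = 0.0
    for day, (t_min, t_max) in enumerate(zip(daily_t_min, daily_t_max), start=1):
        gdd += max(0.0, (t_min + t_max) / 2.0 - t_base)
        if gdd >= forcing_requirement:
            return day
    return None
```

    The mechanistic approaches advocated in the record would replace these fitted thresholds with quantities tied to measured metabolite dynamics during dormancy.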

  7. Dynamic geometry, brain function modeling, and consciousness.

    PubMed

    Roy, Sisir; Llinás, Rodolfo

    2008-01-01

    Pellionisz and Llinás proposed, years ago, a geometric interpretation towards understanding brain function. This interpretation assumes that the relation between the brain and the external world is determined by the ability of the central nervous system (CNS) to construct an internal model of the external world using an interactive geometrical relationship between sensory and motor expression. This approach opened new vistas not only in brain research but also in understanding the foundations of geometry itself. The approach named tensor network theory is sufficiently rich to allow specific computational modeling and addressed the issue of prediction, based on Taylor series expansion properties of the system, at the neuronal level, as a basic property of brain function. It was actually proposed that the evolutionary realm is the backbone for the development of an internal functional space that, while being purely representational, can interact successfully with the totally different world of the so-called "external reality". Now if the internal space or functional space is endowed with stochastic metric tensor properties, then there will be a dynamic correspondence between events in the external world and their specification in the internal space. We shall call this dynamic geometry since the minimal time resolution of the brain (10-15 ms), associated with 40 Hz oscillations of neurons and their network dynamics, is considered to be responsible for recognizing external events and generating the concept of simultaneity. The stochastic metric tensor in dynamic geometry can be written as five-dimensional space-time where the fifth dimension is a probability space as well as a metric space. This extra dimension is considered an imbedded degree of freedom. It is worth noticing that the above-mentioned 40 Hz oscillation is present both in awake and dream states where the central difference is the inability of phase resetting in the latter. This framework of dynamic

  8. Crossing Hazard Functions in Common Survival Models.

    PubMed

    Zhang, Jiajia; Peng, Yingwei

    2009-10-15

    Crossing hazard functions have extensive applications in modeling survival data. However, existing studies in the literature mainly focus on comparing crossed hazard functions and estimating the time at which the hazard functions cross, and there is little theoretical work on conditions under which hazard functions from a model will have a crossing. In this paper, we investigate crossing status of hazard functions from the proportional hazards (PH) model, the accelerated hazard (AH) model, and the accelerated failure time (AFT) model. We provide and prove conditions under which the hazard functions from the AH and the AFT models have no crossings or a single crossing. A few examples are also provided to demonstrate how the conditions can be used to determine crossing status of hazard functions from the three models.
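    While the paper derives analytic conditions for the PH, AH, and AFT families, the crossing time itself, once a single crossing is known to exist, can be located numerically. A generic bisection sketch on two illustrative hazards (a constant exponential hazard against an increasing Weibull hazard), not the paper's method:

```python
def crossing_time(h1, h2, lo, hi, tol=1e-9):
    """Find t in [lo, hi] where h1(t) == h2(t) by bisection, assuming the
    difference h1 - h2 changes sign exactly once on the interval."""
    diff = lambda t: h1(t) - h2(t)
    if diff(lo) * diff(hi) > 0:
        raise ValueError("no sign change on [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative hazards: exponential (constant) vs. Weibull with shape k = 2.
h_exp = lambda t: 0.5
h_weibull = lambda t: 0.2 * 2.0 * t  # lam * k * t**(k-1) with lam=0.2, k=2
```

    Here the Weibull hazard overtakes the constant one at t = 0.5 / 0.4 = 1.25, so early on the exponential group is at higher risk and later the ordering reverses, which is exactly the situation the analytic crossing conditions characterize.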

  9. Local-basis-function approach to computed tomography

    NASA Astrophysics Data System (ADS)

    Hanson, K. M.; Wecksung, G. W.

    1985-12-01

    In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
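    The cubic B-spline basis singled out in this record has a standard closed form on its support (-2, 2). A sketch of the centered uniform cubic B-spline, including the partition-of-unity property that makes expansions on a grid well behaved:

```python
def cubic_bspline(x):
    """Centered uniform cubic B-spline: positive, C^2 continuous,
    supported on (-2, 2), with integer shifts summing to 1 everywhere."""
    ax = abs(x)
    if ax < 1.0:
        return 2.0 / 3.0 - ax * ax + 0.5 * ax ** 3
    if ax < 2.0:
        return (2.0 - ax) ** 3 / 6.0
    return 0.0
```

    Because each basis function overlaps only its nearest neighbors, line or strip integrals over one shifted copy can be tabulated once and reused across the grid, which is the computational saving the abstract describes.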

  10. A Continuum Approach For Neural Network Modelling Of Anisotropic Materials

    NASA Astrophysics Data System (ADS)

    Man, Hou; Furukawa, Tomonari

    2010-05-01

    This paper presents an approach for constitutive modelling of anisotropic materials using neural networks on a continuum basis. The proposed approach develops the models by using an error function formulated from the minimum total potential energy principle. The variation of the strain energy of a deformed geometry is approximated by using the full field strain measurement with the neural network constitutive model (NNCM) and the coordinate frame transformation. It is subsequently compared with the variation of the applied external work, such that the discrepancy is fed back to update the model properties. The proposed approach is, therefore, able to develop the NNCM without the presence of stress data. This not only facilitates the use of multi-axial load tests and non-standard specimens to produce more realistic experimental results, but also reduces the number of different specimen configurations used for the model development. A numerical example is presented in this paper to validate the performance and applicability of the proposed approach by modelling a carbon fibre reinforced plastic (CFRP) lamina. Artificial experimental results of tensile tests with two different specimens are used to facilitate the validation. The results emphasise the flexibility and applicability of the proposed approach for constitutive modelling of anisotropic materials.

  11. Shell Model Approach to Nuclear Level Density

    NASA Astrophysics Data System (ADS)

    Horoi, Mihai

    2000-04-01

    Nuclear level densities (NLD) are traditionally estimated using variations of the Fermi Gas Formula (FGF) or combinatoric techniques. Recent investigations using Monte Carlo Shell Model (MCSM) techniques indicate that a shell model description of NLD may be an accurate and stable approach. Full shell model calculations of NLD are very difficult. We calculated the NLD for all nuclei in the sd shell and show that the results can be described by a single-particle combinatoric model, which depends on two parameters similar to the FGF. We further investigated other models and find that a sum of Gaussians with means and variances given by French and Ratcliff averages (Phys. Rev. C 3, 94 (1971)) is able to accurately describe shell model NLD, even when shell effects are present. The contribution of the spurious center-of-mass motion to the shell model NLD is also discussed.

  12. Selectionist and Evolutionary Approaches to Brain Function: A Critical Appraisal

    PubMed Central

    Fernando, Chrisantha; Szathmáry, Eörs; Husbands, Phil

    2012-01-01

    We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. All of them fit, however, a generalized selectionist framework conforming to the picture of Price’s covariance formulation, which deliberately was not specific even to selection in biology, and therefore does not imply an algorithmic picture of biological evolution. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Examples are given of cases where parallel competitive search with information transfer among the units is more efficient than search without information transfer between units. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. Although none of the proposed neuronal replicators include miraculous mechanisms, their identification remains a challenge but also a great promise. PMID:22557963

  13. Measuring Social Returns to Higher Education Investments in Hong Kong: Production Function Approach.

    ERIC Educational Resources Information Center

    Voon, Jan P.

    2001-01-01

    Uses a growth model involving an aggregate production function to measure social benefits from human capital improvements due to investments in Hong Kong higher education. Returns calculated using the production-function approach are significantly higher than those derived from the wage-increment method. Returns declined during the past 10 years.…

  14. A numerical approach for modelling fault-zone trapped waves

    NASA Astrophysics Data System (ADS)

    Gulley, A. K.; Kaipio, J. P.; Eccles, J. D.; Malin, P. E.

    2017-08-01

    We develop a computationally efficient approach to compute the waveforms and the dispersion curves for fault-zone trapped waves guided by arbitrary transversely isotropic across-fault velocity models. The approach is based on a Green's function type representation for FL and FR type fault-zone trapped waves. The model can be used for simulation of the waveforms generated by both infinite line sources (2-D) and point sources (3-D). The numerical scheme is based on a high order finite element approximation and, to increase computational efficiency, we make use of absorbing boundary conditions and mass lumping of finite element matrices.

  15. Towards a Multiscale Approach to Cybersecurity Modeling

    SciTech Connect

    Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.

    2013-11-12

    We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
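    The single-scale all-pairs shortest-path problem that the record generalizes is the standard Floyd-Warshall computation. A minimal baseline sketch (the multiscale analog from the paper is not reproduced here):

```python
INF = float("inf")

def all_pairs_shortest_paths(adj):
    """Single-scale Floyd-Warshall: adj[i][j] is the edge weight (INF if
    absent); returns the full matrix of shortest-path distances."""
    n = len(adj)
    dist = [row[:] for row in adj]
    for i in range(n):
        dist[i][i] = min(dist[i][i], 0.0)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

    In the defense scenario sketched in the abstract, an entry of this matrix would play the role of the distance from a compromised node to a sensitive machine; the multiscale version computes such distances on a hierarchy of coarsened graphs.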

  16. The totally constrained model: three quantization approaches

    NASA Astrophysics Data System (ADS)

    Gambini, Rodolfo; Olmedo, Javier

    2014-08-01

    We provide a detailed comparison of the different approaches available for the quantization of a totally constrained system with a constraint algebra generating the non-compact group. In particular, we consider three schemes: the Refined Algebraic Quantization, the Master Constraint Programme and the Uniform Discretizations approach. For the latter, we provide a quantum description where we identify semiclassical sectors of the kinematical Hilbert space. We study the quantum dynamics of the system in order to show that it is compatible with the classical continuum evolution. Among these quantization approaches, the Uniform Discretizations provides the simplest description in agreement with the classical theory of this particular model, and it is expected to give new insights into the quantum dynamics of more realistic totally constrained models such as canonical general relativity.

  17. System Behavior Models: A Survey of Approaches

    DTIC Science & Technology

    2016-06-01

    Surveys approaches to modeling system behavior, with coverage of domain-specific approaches such as automotive requirements modeling.

  18. Post-16 Biology--Some Model Approaches?

    ERIC Educational Resources Information Center

    Lock, Roger

    1997-01-01

    Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)

  20. A frequentist approach to computer model calibration

    SciTech Connect

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
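As a hedged toy illustration of the bootstrap step described above (a one-parameter "computer model" y = theta*x with percentile intervals, not the paper's semiparametric two-step procedure; all names are ours):

```python
import random

def fit_theta(xs, ys):
    """Least-squares slope through the origin for the toy model y = theta*x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def bootstrap_ci(xs, ys, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for theta:
    resample (x, y) pairs with replacement and refit each time."""
    rng = random.Random(seed)
    pairs = list(zip(xs, ys))
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        bx, by = zip(*sample)
        estimates.append(fit_theta(bx, by))
    estimates.sort()
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```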

  2. Building Water Models: A Different Approach

    PubMed Central

    2015-01-01

    Simplified classical water models are currently an indispensable component in practical atomistic simulations. Yet, despite several decades of intense research, these models are still far from perfect. Presented here is an alternative approach to constructing widely used point charge water models. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than the symmetry. Instead, we optimize the distribution of point charges to best describe the “electrostatics” of the water molecule. The resulting “optimal” 3-charge, 4-point rigid water model (OPC) reproduces a comprehensive set of bulk properties significantly more accurately than commonly used rigid models: average error relative to experiment is 0.76%. Close agreement with experiment holds over a wide range of temperatures. The improvements in the proposed model extend beyond bulk properties: compared to common rigid models, predicted hydration free energies of small molecules using OPC are uniformly closer to experiment, with root-mean-square error <1 kcal/mol. PMID:25400877

  3. From equation to inequality using a function-based approach

    NASA Astrophysics Data System (ADS)

    Verikios, Petros; Farmaki, Vassiliki

    2010-06-01

    This article presents features of a qualitative research study concerning the teaching and learning of school algebra using a function-based approach in a grade 8 class, of 23 students, in 26 lessons, in a state school of Athens, in the school year 2003-2004. In this article, we are interested in the inequality concept and our aim is to investigate if and how our approach could facilitate students to comprehend inequality and to solve problems related to this concept. Data analysis showed that, in order to comprehend the new concept, the students should make a transition from equation to inequality. The role of the situation context proved decisive in this transition and in making sense of involved symbols. Also, students used function representations as problem-solving strategies in problems that included inequalities. However, the extension of the function-based approach in solving an abstract equation or inequality proved problematic for the students.

  4. Neural network approaches for noisy language modeling.

    PubMed

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with a computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habit, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications: the typing stream. This paper experimentally uses a neural network approach to analyze the disabled users' typing streams both in general and specific ways to identify their typing behaviors and, subsequently, to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first three HR in symbol prediction are obtained based on the analysis of a user's typing history through the FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, with 70% of all test scores above the basic correction rates. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool to analyze the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis for neural network approaches for noisy language modeling.
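A frequency-count bigram baseline makes the "hitting rate" notion above concrete (this is a simple counting sketch under our own naming, not the paper's FTDNN or PNN models):

```python
from collections import defaultdict, Counter

def train_bigram(stream):
    """Count symbol-to-next-symbol transitions in a typing stream."""
    model = defaultdict(Counter)
    for a, b in zip(stream, stream[1:]):
        model[a][b] += 1
    return model

def predict(model, ch, k=1):
    """Top-k most frequent next symbols after ch."""
    return [c for c, _ in model[ch].most_common(k)]

def hit_rate(model, stream, k=1):
    """Fraction of positions where the true next symbol appears
    in the top-k prediction (the 'first k hitting rate')."""
    hits = total = 0
    for a, b in zip(stream, stream[1:]):
        total += 1
        hits += b in predict(model, a, k)
    return hits / total
```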

  5. Models of Protocellular Structure, Function and Evolution

    NASA Technical Reports Server (NTRS)

    New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction.

  7. An object-oriented approach to energy-economic modeling

    SciTech Connect

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

    In this paper, the authors discuss their experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, is detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on their experience in building energy-economic models with procedure-oriented approaches and languages.

  8. Structure Model of Salmonella typhimurium Ethanolamine Ammonia-Lyase Directs a Rational Approach to the Assembly of the Functional [(EutB-EutC)2]3 Oligomer from Isolated Subunits

    PubMed Central

    Bovell, Adonis; Warncke, Kurt

    2013-01-01

    Ethanolamine ammonia-lyase (EAL) is a 5’-deoxyadenosylcobalamin (AdoCbl; coenzyme B12) –dependent bacterial enzyme that catalyzes the deamination of the short-chain vicinal amino alcohols, aminoethanol and [S]- and [R]-2-aminopropanol. The coding sequence for EAL is located within the 17-gene eut operon, which codes for the broad spectrum of proteins that comprise the eut metabolosome sub-organelle structure. A high-resolution structure of the ~500 kDa EAL [(EutB-EutC)2]3 oligomer from Escherichia coli has been determined by X-ray crystallography, but high-resolution spectroscopic determinations of reactant intermediate state structures, and detailed kinetic and thermodynamic studies of EAL, have been conducted for the Salmonella typhimurium enzyme. Therefore, a statistically robust homology model for the S. typhimurium EAL is constructed from the E. coli structure. The model structure is used to describe the hierarchy of EutB and EutC subunit interactions that construct the native EAL oligomer, and specifically, to address the long-standing challenge of reconstitution of the functional oligomer from isolated, purified subunits. Model prediction that the (EutB2)3 oligomer assembly will occur from isolated EutB, and that this hexameric structure will template the formation of the complete, native [(EutB-EutC)2]3 oligomer, is verified by biochemical methods. Prediction that cysteine residues on the exposed subunit-subunit contact surfaces of isolated EutB and EutC will interfere with assembly by cystine formation is verified by activating effects of disulfide reducing agents. Ångstrom-scale congruence of the reconstituted and native EAL in the active site region is shown by electron paramagnetic resonance spectroscopy. Overall, the hierarchy of subunit interactions and microscopic features of the contact surfaces, that are revealed by the homology model, guide and provide a rationale for a refined genetic and biochemical approach to reconstitution of the

  9. Molecular modelling approaches for cystic fibrosis transmembrane conductance regulator studies.

    PubMed

    Odolczyk, Norbert; Zielenkiewicz, Piotr

    2014-07-01

    Cystic fibrosis (CF) is one of the most common genetic disorders, caused by loss of function mutations in the gene encoding the CF transmembrane conductance regulator (CFTR) protein. CFTR is a member of ATP-binding cassette (ABC) transporters superfamily and functions as an ATP-gated anion channel. This review summarises the vast majority of the efforts which utilised molecular modelling approaches to gain insight into the various aspects of CFTR protein, related to its structure, dynamic properties, function and interactions with other protein partners, or drug-like compounds, with emphasis to its relation to CF disease.

  10. A hybrid modeling approach for option pricing

    NASA Astrophysics Data System (ADS)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations. One of these limitations is the assumption that the underlying probability distribution is lognormal, which is controversial. We propose a couple of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options for the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options, the neural network model performs better, and for both in-the-money and out-of-the-money options, the neuro-fuzzy model provides better results.
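The Black-Scholes benchmark that the hybrid models are compared against has a standard closed form; a self-contained sketch for a European call (function names are ours):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call: spot s, strike k,
    maturity t in years, risk-free rate r, volatility sigma."""
    d1 = (log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
```

The lognormal assumption criticized above enters through `d1`/`d2`; the paper's GARCH estimates would supply `sigma`.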

  11. Filtered density function approach for reactive transport in groundwater

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Schüler, Lennart; Attinger, Sabine; Knabner, Peter

    2016-04-01

    Spatial filtering may be used in coarse-grained simulations (CGS) of reactive transport in groundwater, similar to the large eddy simulations (LES) in turbulence. The filtered density function (FDF), stochastically equivalent to a probability density function (PDF), provides a statistical description of the sub-grid, unresolved, variability of the concentration field. Besides closing the chemical source terms in the transport equation for the mean concentration, like in LES-FDF methods, the CGS-FDF approach aims at quantifying the uncertainty over the whole hierarchy of heterogeneity scales exhibited by natural porous media. Practically, that means estimating concentration PDFs on coarse grids, at affordable computational costs. To cope with the high dimensionality of the problem in case of multi-component reactive transport and to reduce the numerical diffusion, FDF equations are solved by particle methods. But, while trajectories of computational particles are modeled as stochastic processes indexed by time, the concentration's heterogeneity is modeled as a random field, with multi-dimensional, spatio-temporal sets of indices. To overcome this conceptual inconsistency, we consider FDFs/PDFs of random species concentrations weighted by conserved scalars and we show that their evolution equations can be formulated as Fokker-Planck equations describing stochastically equivalent processes in concentration-position spaces. Numerical solutions can then be approximated by the density in the concentration-position space of an ensemble of computational particles governed by the associated Itô equations. Instead of sequential particle methods we use a global random walk (GRW) algorithm, which is stable, free of numerical diffusion, and practically insensitive to the increase of the number of particles. We illustrate the general FDF approach and the GRW numerical solution for a reduced complexity problem consisting of the transport of a single scalar in groundwater
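The GRW algorithm itself is not reproduced here; as a greatly simplified illustration of the particle picture behind FDF/PDF methods, the sketch below runs a plain sequential Gaussian random walk for pure diffusion and checks the variance growth (names and parameters are ours):

```python
import random

def diffuse_particles(n, steps, dt, d_coef, seed=0):
    """Euler random walk approximating pure diffusion dc/dt = D d2c/dx2:
    each particle takes independent Gaussian steps of std sqrt(2*D*dt)."""
    rng = random.Random(seed)
    sd = (2.0 * d_coef * dt) ** 0.5
    xs = [0.0] * n
    for _ in range(steps):
        xs = [x + rng.gauss(0.0, sd) for x in xs]
    return xs

# After time T = steps*dt the particle variance should approach 2*D*T;
# a histogram of xs approximates the concentration PDF.
```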

  12. Interprofessional approach for teaching functional knee joint anatomy.

    PubMed

    Meyer, Jakob J; Obmann, Markus M; Gießler, Marianne; Schuldis, Dominik; Brückner, Ann-Kathrin; Strohm, Peter C; Sandeck, Florian; Spittau, Björn

    2017-03-01

    Profound knowledge in functional and clinical anatomy is a prerequisite for efficient diagnosis in medical practice. However, anatomy teaching does not always consider functional and clinical aspects. Here we introduce a new interprofessional approach to effectively teach the anatomy of the knee joint. The presented teaching approach involves anatomists, orthopaedists and physical therapists to teach anatomy of the knee joint in small groups under functional and clinical aspects. The knee joint courses were implemented during early stages of the medical curriculum and medical students were grouped with students of physical therapy to sensitize students to the importance of interprofessional work. Evaluation results clearly demonstrate that medical students and physical therapy students appreciated this teaching approach. First evaluations of subsequent curricular anatomy exams suggest a benefit for course participants on knee-related multiple-choice questions. Together, the interprofessional approach presented here proves to be a suitable approach to teach functional and clinical anatomy of the knee joint and further trains interprofessional work between prospective physicians and physical therapists as a basis for successful healthcare management. Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.

  13. Analyzing Measurement Models of Latent Variables through Multilevel Confirmatory Factor Analysis and Hierarchical Linear Modeling Approaches.

    ERIC Educational Resources Information Center

    Li, Fuzhong; Duncan, Terry E.; Harmer, Peter; Acock, Alan; Stoolmiller, Mike

    1998-01-01

    Discusses the utility of multilevel confirmatory factor analysis and hierarchical linear modeling methods in testing measurement models in which the underlying attribute may vary as a function of levels of observation. A real dataset is used to illustrate the two approaches and their comparability. (SLD)

  14. New approach to folding with the Coulomb wave function

    SciTech Connect

    Blokhintsev, L. D.; Savin, D. A.; Kadyrov, A. S.; Mukhamedzhanov, A. M.

    2015-05-15

    Due to the long-range character of the Coulomb interaction, theoretical description of low-energy nuclear reactions with charged particles still remains a formidable task. One way of dealing with the problem in an integral-equation approach is to employ a screened Coulomb potential. A general approach without screening requires folding of kernels of the integral equations with the Coulomb wave. A new method of folding a function with the Coulomb partial waves is presented. The partial-wave Coulomb function both in the configuration and momentum representations is written in the form of separable series. Each term of the series is represented as a product of a factor depending only on the Coulomb parameter and a function depending on the spatial variable (in the configuration representation) or on the momentum variable (in the momentum representation). Using a trial function, the method is demonstrated to be efficient and reliable.

  15. A novel pattern mining approach for identifying cognitive activity in EEG based functional brain networks.

    PubMed

    Thilaga, M; Vijayalakshmi, R; Nadarajan, R; Nandagopal, D

    2016-06-01

    The complex nature of neuronal interactions of the human brain has posed many challenges to the research community. To explore the underlying mechanisms of neuronal activity of cohesive brain regions during different cognitive activities, many innovative mathematical and computational models are required. This paper presents a novel Common Functional Pattern Mining approach to demonstrate the similar patterns of interactions due to common behavior of certain brain regions. The electrode sites of EEG-based functional brain network are modeled as a set of transactions and node-based complex network measures as itemsets. These itemsets are transformed into a graph data structure called Functional Pattern Graph. By mining this Functional Pattern Graph, the common functional patterns due to specific brain functioning can be identified. The empirical analyses show the efficiency of the proposed approach in identifying the extent to which the electrode sites (transactions) are similar during various cognitive load states.

  16. A Bayesian Shrinkage Approach for AMMI Models.

    PubMed

    da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio

    2015-01-01

    Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components, and resulted in selected models similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of the AMMI model based on direct posterior

  18. Introduction to the Subjective Transfer Function Approach to Analyzing Systems.

    DTIC Science & Technology

    1984-07-01

    Introduces the subjective transfer function method for analyzing systems. RAND report R-3021-AF, by Clairice T. Veit, Monti Callero, and Barbara J. Rose; a Project AIR FORCE report prepared for the United States Air Force, July 1984.

  19. Partitioned density functional approach for a Lennard-Jones fluid.

    PubMed

    Zhou, Shiqi

    2003-12-01

    The existing classical density functional approach for nonuniform Lennard-Jones fluid, which is based on dividing the Lennard-Jones interaction potential into a short-range, repulsive part, and a smoothly varying, long-range, attractive tail, was improved by dividing the bulk second-order direct correlation function into a strongly density-dependent short-range part and a weakly density-dependent long-range part. The latter is treated by functional perturbation expansion truncated at the lowest order, whose accuracy depends on how weakly the long-range part depends on the bulk density. The former is treated by the truncated functional perturbation expansion which is rewritten in the form of the simple weighted density approximation and incorporates the omitted higher-order terms by applying the Lagrangian theorem of differential calculus to the reformulated form. The two approximations are put into the density profile equation of the density functional theory formalism to predict the density distribution for Lennard-Jones fluid in contact with a hard wall or between two hard walls within the whole density range for reduced temperature T(*)=1.35 and at a single density point for reduced temperature T(*)=1. The present partitioned density functional theory performs much better than several previous density functional perturbation theory approaches and a recently proposed bridge density functional approximation.
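The short-range/long-range division of the Lennard-Jones potential mentioned above is conventionally done WCA-style, splitting at the potential minimum; a small sketch under that assumption (the split used in the paper may differ in detail):

```python
def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_split(r, eps=1.0, sigma=1.0):
    """WCA-style division into a short-range repulsive part and a
    smoothly varying long-range attractive tail; the parts sum to lj(r)."""
    r_min = 2.0 ** (1.0 / 6.0) * sigma   # location of the potential minimum
    if r < r_min:
        # repulsive core, shifted so it vanishes at r_min
        return lj(r, eps, sigma) + eps, -eps
    return 0.0, lj(r, eps, sigma)
```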

  20. Bayesian non-parametrics and the probabilistic approach to modelling

    PubMed Central

    Ghahramani, Zoubin

    2013-01-01

    Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609
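As an accessible illustration of one tool surveyed above, here is a minimal one-dimensional Gaussian-process posterior-mean sketch with an RBF kernel (a self-contained toy of ours, not from the article; the tiny elimination solver is only suitable for small systems):

```python
from math import exp

def rbf(x1, x2, ell=1.0):
    """Squared-exponential (RBF) covariance between two inputs."""
    return exp(-0.5 * ((x1 - x2) / ell) ** 2)

def solve(a, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def gp_predict(xs, ys, x_star, ell=1.0, noise=1e-8):
    """GP posterior mean at x_star: k_*^T (K + noise*I)^{-1} y."""
    kmat = [[rbf(xi, xj, ell) + (noise if i == j else 0.0)
             for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(kmat, ys)
    return sum(rbf(x_star, xi, ell) * ai for xi, ai in zip(xs, alpha))
```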

  1. Multiscale adaptive basis function modeling of spatiotemporal vectorcardiogram signals.

    PubMed

    Gang Liu; Hui Yang

    2013-03-01

    Mathematical modeling of cardiac electrical signals facilitates the simulation of realistic cardiac electrical behaviors, the evaluation of algorithms, and the characterization of underlying space-time patterns. However, there are practical issues pertinent to model efficacy, robustness, and generality. This paper presents a multiscale adaptive basis function modeling approach to characterize not only temporal but also spatial behaviors of vectorcardiogram (VCG) signals. Model parameters are adaptively estimated by the "best matching" projections of VCG characteristic waves onto a dictionary of nonlinear basis functions. The model performance is experimentally evaluated with respect to the number of basis functions, different types of basis function (i.e., Gaussian, Mexican hat, customized wavelet, and Hermitian wavelets), and various cardiac conditions, including 80 healthy controls and different myocardial infarctions (i.e., 89 inferior, 77 anterior-septal, 56 inferior-lateral, 47 anterior, and 43 anterior-lateral). Multiway analysis of variance shows that the basis function and the model complexity have significant effects on model performances while cardiac conditions are not significant. The customized wavelet is found to be an optimal basis function for the modeling of space-time VCG signals. The comparison of QT intervals shows small relative errors (<5%) between model representations and real-world VCG signals when the model complexity is greater than 10. The proposed model shows great potentials to model space-time cardiac pathological behaviors and can lead to potential benefits in feature extraction, data compression, algorithm evaluation, and disease prognostics.
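A toy matching-pursuit-style selection makes the "best matching projection onto a dictionary" idea concrete (a sketch under our own naming, restricted to Gaussian atoms; not the authors' full multiscale model):

```python
from math import exp, sqrt

def gaussian_atom(ts, center, width):
    """Unit-norm Gaussian basis function sampled on the time grid ts."""
    g = [exp(-0.5 * ((t - center) / width) ** 2) for t in ts]
    norm = sqrt(sum(v * v for v in g))
    return [v / norm for v in g]

def best_match(signal, ts, dictionary):
    """Pick the dictionary atom (center, width) whose inner product with
    the signal has the largest magnitude; returns (coeff, center, width, atom)."""
    best = None
    for c, w in dictionary:
        atom = gaussian_atom(ts, c, w)
        coeff = sum(s * a for s, a in zip(signal, atom))
        if best is None or abs(coeff) > abs(best[0]):
            best = (coeff, c, w, atom)
    return best
```

Repeating this on the residual (signal minus coeff*atom) would give a greedy multi-atom decomposition of a characteristic wave.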

  2. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1988-08-01

    Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
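The bootstrap idea itself (re-estimating a statistic on resampled data to gauge the error introduced by estimation) can be sketched generically; the data and estimator below are illustrative stand-ins, not the generalized-covariance setting of the paper:

```python
import random
import statistics

def bootstrap_se(data, estimator, n_boot=2000, seed=0):
    """Bootstrap standard error: resample with replacement, re-estimate, take spread."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(estimator(sample))
    return statistics.stdev(reps)

# Hypothetical observations; any estimator of interest can be plugged in
data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 2.0, 2.2]
se_mean = bootstrap_se(data, statistics.mean)
```

For the sample mean the bootstrap standard error should track the classical s/√n; in the paper's setting the plug-in estimator is the fitted generalized covariance, and the spread of bootstrap replicates quantifies the extra prediction error it induces.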

  3. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1987-01-01

The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the "bootstrap" in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their "kriging variance," provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.

  4. An approach to validation of thermomechanical models

    SciTech Connect

    Costin, L.S.; Hardy, M.P.; Brechtel, C.E.

    1993-08-01

    Thermomechanical models are being developed to support the design of an Exploratory Studies Facility (ESF) and a potential high-level nuclear waste repository at Yucca Mountain, Nevada. These models are used for preclosure design of underground openings, such as access drifts, emplacement drifts, and waste emplacement boreholes; and in support of postclosure issue resolution relating to waste canister performance, disturbance of the hydrological properties of the host rock, and overall system performance assessment. For both design and performance assessment, the purpose of using models in analyses is to better understand and quantify some phenomenon or process. Therefore, validation is an important process that must be pursued in conjunction with the development and application of models. The Site Characterization Plan (SCP) addressed some general aspects of model validation, but no specific approach has, as yet, been developed for either design or performance assessment models. This paper will discuss a proposed process for thermomechanical model validation and will focus on the use of laboratory and in situ experiments as part of the validation process. The process may be generic enough in nature that it could be applied to the validation of other types of models, for example, models of unsaturated hydrologic flow.

  5. Beyond Resistance: A Functional Approach to Building a Shared Agenda.

    ERIC Educational Resources Information Center

    Janas, Monica; Boudreaux, Martine

    1997-01-01

    Notes full inclusion has become a reality in many schools. Discusses four basic types of resistance to such educational reforms. Suggests that a functional approach can help educators deal with resistance. Suggests learning to recognize resistance and to deal with it effectively facilitates the creation of alternatives for students with special…

  6. Questionnaire of Executive Function for Dancers: An Ecological Approach

    ERIC Educational Resources Information Center

    Wong, Alina; Rodriguez, Mabel; Quevedo, Liliana; de Cossio, Lourdes Fernandez; Borges, Ariel; Reyes, Alicia; Corral, Roberto; Blanco, Florentino; Alvarez, Miguel

    2012-01-01

There is a current debate about the ecological validity of executive function (EF) tests. Consistent with the verisimilitude approach, this research proposes the Ballet Executive Scale (BES), a self-rating questionnaire that assimilates idiosyncratic executive behaviors of the classical dance community. The BES was administered to 149 adolescents,…

  7. From Equation to Inequality Using a Function-Based Approach

    ERIC Educational Resources Information Center

    Verikios, Petros; Farmaki, Vassiliki

    2010-01-01

    This article presents features of a qualitative research study concerning the teaching and learning of school algebra using a function-based approach in a grade 8 class, of 23 students, in 26 lessons, in a state school of Athens, in the school year 2003-2004. In this article, we are interested in the inequality concept and our aim is to…

  9. The Effect of External Approach Septoplasty on Olfactory Function.

    PubMed

    Türk, Bilge; Akpinar, Meltem; Altundağ, Aytuğ; Kirik, Mehtap Özkahraman; Ünsal, Özlem; Coşkun, Berna Uslu

    2017-10-01

Septal deviation-induced nasal obstruction is frequently accompanied by hyposmia. The aim of this study was to evaluate the effect of external approach septoplasty on olfactory function. Thirty patients (23 males, 7 females) who had external approach septoplasty were included in the study. Ages ranged from 18 to 60 years (mean 33±12 years). All subjects had olfactory function and acoustic rhinometry tests in both the pre- and postoperative periods (mean interval 6 weeks ± 3 weeks). Olfactory function was determined by the "Sniffin' Sticks" test. The minimum cross-sectional area from the nostril to 2.20 cm backward was referred to as MCA1, and the minimum cross-sectional area from 2.20 to 5.40 cm was referred to as MCA2, determined by acoustic rhinometry. Olfactory threshold, discrimination, and identification function improved significantly after external approach septoplasty. A statistically significant difference was also detected between pre- and postoperative left MCA1 and left MCA2 of the nasal cavities. Improvement among preoperatively hyposmic and anosmic patients was statistically significant. External approach septoplasty has a beneficial effect on olfaction, and this effect may be partly due to increased perception of nasal airflow as well as surgery-associated improvement in the internal nasal valve area.

  11. The atomic approach for the Coqblin-Schrieffer model

    NASA Astrophysics Data System (ADS)

    Figueira, M. S.; Saguia, A.; Foglio, M. E.; Silva-Valencia, J.; Franco, R.

    2014-12-01

In this work we consider the Coqblin-Schrieffer model for spin S = 1/2. The atomic solution has eight states: four conduction and two localized states, and we can then calculate the eigenenergies and eigenstates analytically. From this solution, employing the cumulant Green's function results of the Anderson model, we build a "seed" that works as the input of the atomic approach, developed earlier by some of us. We obtain the T-matrix as well as the conduction Green's function of the model, both for the impurity and the lattice cases. The generalization to other moments within N states follows the same steps. We present results for both the impurity and the lattice case, and we indicate possible applications of the method to the study of ultracold atoms confined in optical superlattices and of Kondo insulators. In this last case, our results support an insulator-metal transition as a function of temperature.

  12. A functional approach to emotion in autonomous systems.

    PubMed

    Sanz, Ricardo; Hernández, Carlos; Gómez, Jaime; Hernando, Adolfo

    2010-01-01

    The construction of fully effective systems seems to pass through the proper exploitation of goal-centric self-evaluative capabilities that let the system teleologically self-manage. Emotions seem to provide this kind of functionality to biological systems and hence the interest in emotion for function sustainment in artificial systems performing in changing and uncertain environments; far beyond the media hullabaloo of displaying human-like emotion-laden faces in robots. This chapter provides a brief analysis of the scientific theories of emotion and presents an engineering approach for developing technology for robust autonomy by implementing functionality inspired in that of biological emotions.

  13. A simple approach to covalent functionalization of boron nitride nanotubes.

    PubMed

    Ciofani, Gianni; Genchi, Giada Graziana; Liakos, Ioannis; Athanassiou, Athanassia; Dinucci, Dinuccio; Chiellini, Federica; Mattoli, Virgilio

    2012-05-15

A novel and simple method for the preparation of chemically functionalized boron nitride nanotubes (BNNTs) is presented. Through a strong oxidation followed by silanization of the surface with 3-aminopropyl-triethoxysilane (APTES), BNNTs exposing amino groups on their surface were successfully obtained. The efficacy of the procedure was assessed with EDS and XPS analyses, which demonstrated successful functionalization of ~15% of boron sites. This approach opens interesting perspectives for further modification of BNNTs with several kinds of molecules. Since biomedical applications in particular are envisaged, we also demonstrated in vitro biocompatibility and cellular uptake of the functionalized BNNTs.

  14. Functional toxicology: a new approach to detect biologically active xenobiotics.

    PubMed Central

    McLachlan, J A

    1993-01-01

The pervasiveness in the environment of chemicals with estrogenic activity and other biological functions recommends the development of new approaches to monitor and study them. Chemicals can be screened for activity in vitro using a panel of human or animal cells that have been transfected with a specific receptor and reporter gene; for example, the estrogen receptor. By using a variety of different receptors, the screening of xenobiotics for biological functions can be broad. Chemicals could then be classified by their function in vitro which, in some cases, may be a useful guide for toxicological studies. PMID:8119246

  15. The Feynman-Vernon Influence Functional Approach in QED

    NASA Astrophysics Data System (ADS)

    Biryukov, Alexander; Shleenkov, Mark

    2016-10-01

In the path integral approach we describe the evolution of interacting electromagnetic and fermionic fields using the density matrix formalism. The equation for the density matrix and the transition probability of the fermionic field are obtained by averaging over the electromagnetic field influence functional. We obtain a formula for calculating the electromagnetic field influence functional for various initial and final states, and we derive it explicitly when the initial and final states are the vacuum. We present the Lagrangian for a relativistic fermionic field under the influence of the electromagnetic field vacuum.

  16. Neurocomputing approaches to modelling of drying process dynamics

    SciTech Connect

    Kaminski, W.; Strumillo, P.; Tomczak, E.

    1998-07-01

The application of artificial neural networks to mathematical modeling of drying kinetics, degradation kinetics and smoothing of experimental data is discussed in the paper. A theoretical foundation of drying process description by means of artificial neural networks is presented. Two network types are proposed for drying process modelling, namely the multilayer perceptron network and the radial basis function network. These were validated experimentally for fresh green peas and diced potatoes, which represent diverse food products. Network training procedures based on experimental data are explained. Additionally, the proposed neural network modelling approach is tested on drying experiments of silica gel saturated with ascorbic acid solution.
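As a toy illustration of the second network type mentioned above, the sketch below fits a hypothetical drying curve with a radial basis function expansion whose output weights are obtained by linear least squares; the curve, centers, and widths are invented for illustration and are not the paper's experimental data:

```python
import numpy as np

def rbf_design(t, centers, width):
    """Design matrix of Gaussian radial basis functions evaluated on grid t."""
    return np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)

# Hypothetical drying curve: moisture ratio decaying over time (hours)
t = np.linspace(0, 10, 50)
moisture = np.exp(-0.35 * t)

# RBF network with fixed centers; output layer solved by least squares
centers = np.linspace(0, 10, 8)
Phi = rbf_design(t, centers, width=2.0)
weights, *_ = np.linalg.lstsq(Phi, moisture, rcond=None)
pred = Phi @ weights
rmse = np.sqrt(np.mean((pred - moisture) ** 2))
```

Fixing the basis centers makes training a linear problem, which is one reason RBF networks are attractive for smoothing sparse kinetic data; a multilayer perceptron would instead train all weights by gradient descent.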

  17. Models of Protocellular Structure, Function and Evolution

    NASA Technical Reports Server (NTRS)

    New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.

    2001-01-01

    In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction. A mutagenic approach, in which the sequences of selected molecules are randomly altered, can yield further improvements in performance or alterations of specificities. Unfortunately, the catalytic potential of nucleic acids is rather limited. Proteins are more catalytically capable but cannot be directly amplified. In the new technique, this problem is circumvented by covalently linking each protein of the initial, diverse, pool to the RNA sequence that codes for it. Then, selection is performed on the proteins, but the nucleic acids are replicated. Additional information is contained in the original extended abstract.

  19. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

Currently: Ops Rev developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation de-couples interfaces and instances of interaction. Future: A Mission MOSE implements the approach and uses the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.

  1. Algebraic operator approach to gas kinetic models

    NASA Astrophysics Data System (ADS)

    Il'ichov, L. V.

    1997-02-01

Some general properties of the linear Boltzmann kinetic equation are used to present it in the form ∂tϕ = −Â†Âϕ, with the operators Â and Â† possessing some nontrivial algebraic properties. When applied to the Keilson-Storer kinetic model, this method gives an example of a quantum (q-deformed) Lie algebra. This approach also provides a natural generalization of the "kangaroo model".
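A minimal numerical sketch of an evolution of the form ∂tϕ = −Â†Âϕ: since Â†Â is symmetric positive semi-definite, every mode decays and the norm of ϕ is non-increasing. The matrix below is a random stand-in for the collision operator, chosen only to exhibit this decay property, not a model of any particular kinetic equation:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))   # stand-in for the operator Â
M = A.T @ A                   # Â†Â: symmetric positive semi-definite

# Solve dφ/dt = -M φ exactly via the eigendecomposition of M
evals, evecs = np.linalg.eigh(M)

def evolve(phi0, t):
    """Exact solution φ(t) = exp(-M t) φ(0), mode by mode."""
    coeffs = evecs.T @ phi0
    return evecs @ (np.exp(-evals * t) * coeffs)

phi0 = rng.normal(size=5)
norms = [np.linalg.norm(evolve(phi0, t)) for t in (0.0, 0.5, 2.0)]
```

The monotone decay of the norm reflects the relaxational character of the linear Boltzmann equation; the algebraic structure the paper studies lives in the commutation relations of Â and Â†, which this numerical sketch does not attempt to capture.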

  2. Autonomic response to approachability characteristics, approach behavior, and social functioning in Williams syndrome

    PubMed Central

    Järvinen, Anna; Ng, Rowena; Bellugi, Ursula

    2015-01-01

    Williams syndrome (WS) is a neurogenetic disorder that is saliently characterized by a unique social phenotype, most notably associated with a dramatically increased affinity and approachability toward unfamiliar people. Despite a recent proliferation of studies into the social profile of WS, the underpinnings of the pro-social predisposition are poorly understood. To this end, the present study was aimed at elucidating approach behavior of individuals with WS contrasted with typical development (TD) by employing a multidimensional design combining measures of autonomic arousal, social functioning, and two levels of approach evaluations. Given previous evidence suggesting that approach behaviors of individuals with WS are driven by a desire for social closeness, approachability tendencies were probed across two levels of social interaction: talking versus befriending. The main results indicated that while overall level of approachability did not differ between groups, an important qualitative between-group difference emerged across the two social interaction contexts: whereas individuals with WS demonstrated a similar willingness to approach strangers across both experimental conditions, TD individuals were significantly more willing to talk to than to befriend strangers. In WS, high approachability to positive faces across both social interaction levels was further associated with more normal social functioning. A novel finding linked autonomic responses with willingness to befriend negative faces in the WS group: elevated autonomic responsivity was associated with increased affiliation to negative face stimuli, which may represent an autonomic correlate of approach behavior in WS. Implications for underlying organization of the social brain are discussed. PMID:26459097

  3. Investigations of turbulent scalar fields using probability density function approach

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1991-01-01

Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small scale structures of turbulent scalar fields to the modeling and simulations of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially for those which involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient which represents the roles of both scalar and scalar diffusion is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has been recently proposed to deal with the PDFs in turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if the turbulent stretching is included, the amplitude mapping has to be supplemented by either adjusting the parameters representing turbulent stretching at each time step or by introducing the coordinate mapping. This technique is still under development and seems to be quite promising. The final objective of this project is to understand some fundamental properties of the turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.

  4. A Machine Learning Approach to Student Modeling.

    DTIC Science & Technology

    1984-05-01

This report describes an approach to student modeling based on machine learning and presents ACM, a student modeling system that incorporates it. The system begins with a set of overly general rules, which it uses to search a problem space until it arrives at the same answer as the student. The ACM program then uses the solution path it has discovered to determine positive and negative instances of its initial rules, and employs a discrimination learning mechanism to place additional conditions on these rules. The revised rules reproduce the solution path without search, and constitute a cognitive model of the student.

  5. Modeling tauopathy: a range of complementary approaches.

    PubMed

    Hall, Garth F; Yao, Jun

    2005-01-03

    The large group of neurodegenerative diseases which feature abnormal metabolism and accumulation of tau protein (tauopathies) characteristically produce a multiplicity of cellular and systemic abnormalities in human patients. Understanding the complex pathogenetic mechanisms by which abnormalities in tau lead to systemic neurofibrillary degenerative disease requires the construction and use of model experimental systems in which the behavior of human tau can be analyzed under controlled conditions. In this paper, we survey the ways in which in vitro, cellular and whole-animal models of human tauopathy are being used to add to our knowledge of the pathogenetic mechanisms underlying these conditions. In particular, we focus on the complementary advantages and limitations of various approaches to constructing tauopathy models presently in use with respect to those of murine transgenic tauopathy models.

  6. Computational Modeling of Mitochondrial Function

    PubMed Central

    Cortassa, Sonia; Aon, Miguel A.

    2012-01-01

The advent of techniques able to scan massive changes in cellular makeup (genomics, proteomics, etc.) has revealed a compelling need for analytical methods to interpret and make sense of those changes. Computational models built on a sound physico-chemical, mechanistic basis are indispensable for integrating, interpreting, and simulating high-throughput experimental data. Another powerful role of computational models is predicting new behavior, provided they are adequately validated. Mitochondrial energy transduction has traditionally been studied with thermodynamic models. More recently, kinetic or thermo-kinetic models have been proposed, leading the path toward an understanding of the control and regulation of mitochondrial energy metabolism and its interaction with the cytoplasm and other compartments. In this work, we outline, step by step, the methods that should be followed to build a computational model of mitochondrial energetics, either in isolation or integrated into a network of cellular processes. Depending on the question addressed by the modeler, the methodology explained herein can be applied with different levels of detail, from the mitochondrial energy-producing machinery in a network of cellular processes to the dynamics of a single enzyme during its catalytic cycle. PMID:22057575

  7. Emerging approaches to probing ion channel structure and function.

    PubMed

    Li, Wei-Guang; Xu, Tian-Le

    2012-08-01

    Ion channels, as membrane proteins, are the sensors of the cell. They act as the first line of communication with the world beyond the plasma membrane and transduce changes in the external and internal environments into unique electrical signals to shape the responses of excitable cells. Because of their importance in cellular communication, ion channels have been intensively studied at the structural and functional levels. Here, we summarize the diverse approaches, including molecular and cellular, chemical, optical, biophysical, and computational, used to probe the structural and functional rearrangements that occur during channel activation (or sensitization), inactivation (or desensitization), and various forms of modulation. The emerging insights into the structure and function of ion channels by multidisciplinary approaches allow the development of new pharmacotherapies as well as new tools useful in controlling cellular activity.

  8. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

We extend four tests common in classical regression (Wald, score, likelihood ratio, and F tests) to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155
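The testing recipe described above (re-express curves by their leading principal component scores, then apply a classical test in the reduced linear model) can be sketched on simulated data. All dimensions, modes, and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 50                      # 100 curves observed on a 50-point grid
t = np.linspace(0, 1, p)

# Hypothetical functional covariates: random mixtures of smooth modes + noise
scores_true = rng.normal(size=(n, 3))
modes = np.vstack([np.sin(np.pi * t), np.sin(2*np.pi * t), np.sin(3*np.pi * t)])
X = scores_true @ modes + 0.1 * rng.normal(size=(n, p))

beta = np.sin(np.pi * t)            # true coefficient function (nonzero: H0 false)
y = X @ beta / p + 0.1 * rng.normal(size=n)

# FPCA via SVD of centered curves; keep k leading component scores
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
S = Xc @ Vt[:k].T                   # PC scores: design matrix of reduced model

# Classical F-test of H0: no association, in the reduced linear model
Z = np.column_stack([np.ones(n), S])
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
rss1 = np.sum((y - Z @ beta_hat) ** 2)
rss0 = np.sum((y - y.mean()) ** 2)
F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
```

Because the coefficient function is nonzero here, the F statistic is large; under the null it would follow the usual F(k, n-k-1) reference distribution for fixed k, with the paper's theory covering the regime where k diverges.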

  9. Approaches to modelling hydrology and ecosystem interactions

    NASA Astrophysics Data System (ADS)

    Silberstein, Richard P.

    2014-05-01

As the pressures of industry, agriculture and mining on groundwater resources increase, there is a burgeoning unmet need to capture these multiple, direct and indirect stresses in a formal framework that enables better assessment of impact scenarios. While there are many catchment hydrological models, and some models that represent ecological states and change (e.g. FLAMES; Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of the impacts of water use intensification on water-dependent ecosystems under a changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water-related stresses that may develop around mining and well operations; climate stresses, such as rainfall and temperature; biological stresses, such as diseases and invasive species; and competition, such as encroachment from other competing land uses. An indirect water impact could be, for example, a change in groundwater conditions that affects the stream flow regime, and hence aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology, with a view to developing a conceptual framework for a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.

  10. Design for diagnostics and prognostics: A physical-functional approach

    NASA Astrophysics Data System (ADS)

    Niculita, O.; Jennions, I. K.; Irving, P.

This paper describes an end-to-end Integrated Vehicle Health Management (IVHM) development process with a strong emphasis on the COTS software tools employed for its implementation. A mix of physical simulation and functional failure analysis was chosen as a route for early assessment of degradation in complex systems, as capturing system failure modes and their symptoms facilitates the assessment of health management solutions for a complex asset. The method chosen for the IVHM development is closely correlated to the generic engineering cycle. The concepts employed by this method are demonstrated on a laboratory fuel system test rig, but they can also be applied to both new and legacy high-tech, high-value systems. Another objective of the study is to identify the relations between the different types of knowledge supporting the health management development process when physical and functional models are used together. The conclusion of this work is that functional modeling and physical simulation should not be done in isolation. The functional model requires permanent feedback from a physical system simulator in order to accurately represent the real system. This paper will therefore also describe the steps required to correctly develop a functional model that reflects the physical knowledge inherent in a given system.

  11. Bioactive Functions of Milk Proteins: a Comparative Genomics Approach.

    PubMed

    Sharp, Julie A; Modepalli, Vengama; Enjapoori, Ashwanth Kumar; Bisana, Swathi; Abud, Helen E; Lefevre, Christophe; Nicholas, Kevin R

    2014-12-01

    The composition of milk includes factors required to provide appropriate nutrition for the growth of the neonate. However, it is now clear that milk has many functions and comprises bioactive molecules that play a central role in regulating developmental processes in the young while providing a protective function for both the suckled young and the mammary gland during the lactation cycle. Identifying these bioactives and their physiological function in eutherians can be difficult and requires extensive screening of milk components that may function to improve well-being and options for prevention and treatment of disease. New animal models with unique reproductive strategies are now becoming increasingly relevant to search for these factors.

  12. Regularization of turbulence - a comprehensive modeling approach

    NASA Astrophysics Data System (ADS)

    Geurts, B. J.

    2011-12-01

    Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations in which one aims to capture only the primary features of a flow, at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulent flow will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent manner. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling of turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.

  13. Multiscale model approach for magnetization dynamics simulations

    NASA Astrophysics Data System (ADS)

    De Lucia, Andrea; Krüger, Benjamin; Tretiakov, Oleg A.; Kläui, Mathias

    2016-11-01

    Simulations of magnetization dynamics in a multiscale environment enable the rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within, in order to employ a suitable discretization and either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate spin wave transmission across regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with frequency lower than a certain threshold set by the coarse-scale micromagnetic model, with no noticeable attenuation due to the interface between the models. As a comparison to exact analytical theory, we show that in a system with a Dzyaloshinskii-Moriya interaction leading to spin spirals, the simulated multiscale result is in good quantitative agreement with the analytical calculation.
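
    The core of any such solver, micromagnetic or atomistic, is the integration of the Landau-Lifshitz-Gilbert equation. The sketch below is a generic single-macrospin illustration with arbitrary parameter values, not the authors' multiscale code: an explicit Euler step of the LLG equation in its explicit form, renormalizing |m| = 1 after each step.

```python
import numpy as np

def llg_step(m, H, alpha, gamma, dt):
    # LLG in explicit form:
    # dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H))
    mxH = np.cross(m, H)
    dmdt = -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
    m_new = m + dt * dmdt
    return m_new / np.linalg.norm(m_new)   # keep |m| = 1

# Macrospin precessing around, and relaxing toward, a field along +z
m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
H = np.array([0.0, 0.0, 1.0])
for _ in range(20000):
    m = llg_step(m, H, alpha=0.1, gamma=1.0, dt=0.01)
```

    With damping alpha > 0 the magnetization spirals toward the field direction; with alpha = 0 it would precess indefinitely.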

  14. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge-regression shrinkage-type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using a mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, by assigning a different penalty λ to each coefficient. We demonstrate the proposed approach by both simulation examples and a real data application.
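
    The ridge-type closed form behind a penalized spline fit, beta = (B'B + λD'D)^{-1} B'y, can be sketched in a few lines. This is a generic illustration, not the authors' implementation: it uses a truncated-power basis in place of B-splines to keep the sketch dependency-light, with a second-order difference penalty and a fixed smoothing parameter λ.

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    # Design matrix: truncated cubic power basis (a B-spline basis is the
    # usual P-spline choice; this simpler basis keeps the sketch short).
    B = np.column_stack([x**p for p in range(4)] +
                        [np.maximum(x - k, 0.0)**3 for k in knots])
    # Second-order difference penalty on the spline coefficients
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
    # Ridge-type closed-form solution: (B'B + lam * D'D)^{-1} B'y
    beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ beta

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
fit = pspline_fit(x, y, knots=np.linspace(0.1, 0.9, 9), lam=1e-4)
```

    Varying λ trades smoothness against fidelity; the MCV, GCV, EBBS, and REML criteria discussed in the abstract are competing ways of choosing it from the data.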

  15. Accuracy of functional surfaces on comparatively modeled protein structures

    PubMed Central

    Zhao, Jieling; Dundas, Joe; Kachalo, Sema; Ouyang, Zheng; Liang, Jie

    2012-01-01

    Identification and characterization of protein functional surfaces are important for predicting protein function, understanding enzyme mechanism, and docking small compounds to proteins. As the rapid speed of accumulation of protein sequence information far exceeds that of structures, constructing accurate models of protein functional surfaces and identifying their key elements becomes increasingly important. A promising approach is to build comparative models from sequences using known structural templates such as those obtained from structural genome projects. Here we assess how well this approach works in modeling binding surfaces. By systematically building three-dimensional comparative models of proteins using Modeller, we determine how accurately functional surfaces can be reproduced. We use an alpha shape based pocket algorithm to compute all pockets on the modeled structures, and conduct a large-scale computation of similarity measurements (pocket RMSD and fraction of functional atoms captured) for 26,590 modeled enzyme protein structures. Overall, we find that when the sequence fragment of the binding surfaces has more than 45% identity to that of the template protein, the modeled surfaces have on average an RMSD of 0.5 Å, and contain 48% or more of the binding surface atoms, with nearly all of the important atoms in the signatures of binding pockets captured. PMID:21541664
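
    Pocket RMSD of the kind reported here is computed between matched atom sets after optimal rigid superposition. A generic sketch (a hypothetical helper, not the paper's pipeline) using the Kabsch algorithm:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between matched point sets after optimal rigid superposition."""
    P = P - P.mean(axis=0)                  # centre both point clouds
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)       # Kabsch: rotation from the SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

# A rotated, translated copy of a point set should give RMSD ~ 0
rng = np.random.default_rng(3)
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
P = rng.standard_normal((30, 3))            # stand-in "pocket atom" coordinates
Q = P @ Rot.T + np.array([1.0, -2.0, 0.5])
r = kabsch_rmsd(P, Q)
```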

  16. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  17. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  18. Quasielastic scattering with the relativistic Green’s function approach

    SciTech Connect

    Meucci, Andrea; Giusti, Carlotta

    2015-05-15

    A relativistic model for quasielastic (QE) lepton-nucleus scattering is presented. The effects of final-state interactions (FSI) between the ejected nucleon and the residual nucleus are described in the relativistic Green’s function (RGF) model where FSI are consistently described with exclusive scattering using a complex optical potential. The results of the model are compared with experimental results of electron and neutrino scattering.

  19. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs.
Group-level analyses using these plasticity estimates showed an increase in the strength

  20. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs.
Group-level analyses using these plasticity estimates showed an increase in the strength

  1. Modeling of human artery tissue with probabilistic approach.

    PubMed

    Xiong, Linfei; Chui, Chee-Kong; Fu, Yabo; Teo, Chee-Leong; Li, Yao

    2015-04-01

    Accurate modeling of biological soft tissue properties is vital for realistic medical simulation. The mechanical response of biological soft tissue exhibits strong variability due to its complex microstructure and differing loading conditions. The inhomogeneity in human artery tissue is modeled with a computational probabilistic approach by assuming that the instantaneous stress at a specific strain varies according to a normal distribution. Material parameters of the artery tissue, which are modeled with a combined logarithmic and polynomial energy equation, are represented by a statistical function with normal distribution. The mean and standard deviation of the material parameters are determined using a genetic algorithm (GA) and the inverse mean-value first-order second-moment (IMVFOSM) method, respectively. This nondeterministic approach was verified using computer simulation based on the Monte Carlo (MC) method. The cumulative distribution function (CDF) of the MC simulation corresponds well with that of the experimental stress-strain data, and the probabilistic approach is further validated using data from other studies. By taking into account the inhomogeneous mechanical properties of human biological tissue, the proposed method is suitable for realistic virtual simulation as well as serving as an accurate computational approach for medical device validation.
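
    The Monte Carlo verification step can be sketched generically: draw material parameters from normal distributions and build the empirical distribution of stress at a fixed strain. The constitutive law below is a simple hypothetical stand-in for the paper's combined logarithmic/polynomial energy form, with made-up parameter means and standard deviations.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_stress(strain, n, a_mean=10.0, a_sd=1.0, b_mean=50.0, b_sd=5.0):
    """Hypothetical law stress = a*strain + b*strain**3 with normally
    distributed material parameters a and b (one draw per MC sample)."""
    a = rng.normal(a_mean, a_sd, n)
    b = rng.normal(b_mean, b_sd, n)
    return a * strain + b * strain**3

stress = sample_stress(strain=0.2, n=100_000)
# Empirical CDF evaluated at the sample mean stress
cdf = np.sort(stress)
p_below = np.searchsorted(cdf, stress.mean()) / cdf.size
```

    The sorted-sample CDF here plays the role of the MC-derived CDF that the abstract compares against the experimental stress-strain distribution.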

  2. Robust classification of functional and quantitative image data using functional mixed models.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2012-12-01

    This article introduces new methods for performing classification of complex, high-dimensional functional data using the functional mixed model (FMM) framework. The FMM relates a functional response to a set of predictors through functional fixed and random effects, which allows it to account for various factors and between-function correlations. The methods include training and prediction steps. In the training steps we train the FMM model by treating class designation as one of the fixed effects, and in the prediction steps we classify the new objects using posterior predictive probabilities of class. Through a Bayesian scheme, we are able to adjust for factors affecting both the functions and the class designations. While the methods can be used in any FMM framework, we provide details for two specific Bayesian approaches: the Gaussian, wavelet-based FMM (G-WFMM) and the robust, wavelet-based FMM (R-WFMM). Both methods perform modeling in the wavelet space, which yields parsimonious representations for the functions, and can naturally adapt to local features and complex nonstationarities in the functions. The R-WFMM allows potentially heavier tails for features of the functions indexed by particular wavelet coefficients, leading to a down-weighting of outliers that makes the method robust to outlying functions or regions of functions. The models are applied to a pancreatic cancer mass spectroscopy data set and compared with other recently developed functional classification methods. © 2012, The International Biometric Society.

  3. Olfactory functions after transsphenoidal pituitary surgery: endoscopic versus microscopic approach.

    PubMed

    Kahilogullari, Gokmen; Beton, Suha; Al-Beyati, Eyyub S M; Kantarcioglu, Ozlem; Bozkurt, Melih; Kantarcioglu, Emrah; Comert, Ayhan; Unlu, M Agahan; Meco, Cem

    2013-09-01

    Olfactory disturbances can be observed following transsphenoidal pituitary surgeries. To our knowledge, no previous comparative studies on olfactory functions after transsphenoidal endoscopic and microscopic approaches have been performed. Prospective study comparing olfactory functions between endoscopic and microscopic transsphenoidal pituitary surgery. Twenty-five patients operated on with the endoscopic approach and 25 patients operated on with the microscopic transsphenoidal approach were evaluated. The Smell Diskettes Olfaction Test was used during the preoperative period, 1 month after the operation, and 6 months after the operation. In addition, the relationship of intraoperative cerebrospinal fluid leakage from the pituitary and postoperative synechiae formation with the olfactory system was evaluated. The results were analyzed using the Friedman test, Mann-Whitney test, and Chi-Square test. In the endoscopic group, there were two hyposmic patients and no anosmic patients. In the microscopic group, there were 13 hyposmic patients and five anosmic patients. The difference between the two groups was statistically significant (P <0.05). Cerebrospinal fluid leakage was observed in nine patients in the endoscopic group and in 10 patients in the microscopic group. There was no statistically significant association between cerebrospinal fluid leakage and olfactory disturbances in either group (P >0.05). Synechiae were observed in nine patients in the microscopic group and in only one patient in the endoscopic group. There was a statistically significant association between the presence of synechiae and olfactory disturbances (P <0.05). This is the first study to examine the difference between the endoscopic and microscopic transsphenoidal approaches on the olfactory system during pituitary surgery. The obtained results indicate that an endoscopic approach seems to be more advantageous than a microscopic approach for protecting the olfactory system and olfactory function.
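
    The main group comparison reported above can be reproduced with a standard chi-square test on a 2×2 contingency table, assuming SciPy is available. The counts (2 of 25 endoscopic patients vs 13 + 5 = 18 of 25 microscopic patients with impaired olfaction) are taken from the abstract; the exact test used in the paper may differ.

```python
from scipy.stats import chi2_contingency

# Rows: endoscopic, microscopic; columns: impaired (hypo- or anosmic), normal.
table = [[2, 23],
         [18, 7]]
chi2, p, dof, expected = chi2_contingency(table)
```

    With these counts the difference is highly significant (p well below 0.05), consistent with the reported result.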

  4. Merging Digital Surface Models Implementing Bayesian Approaches

    NASA Astrophysics Data System (ADS)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and additional measurements are difficult or costly to obtain, since the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and to improve characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
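
    The per-cell fusion underlying such a model can be sketched as conjugate Gaussian updating: the prior (e.g. derived from the smooth-roof assumption) and each DSM observation contribute precision-weighted terms to the posterior height. This is a minimal sketch with hypothetical heights and variances, not the paper's model.

```python
def fuse_dsms(heights, variances, prior_mean, prior_var):
    """Per-cell Gaussian (conjugate) fusion of several DSM height estimates.

    Posterior precision is the sum of the prior and observation precisions;
    the posterior mean is the precision-weighted average of all sources.
    """
    precision = 1.0 / prior_var + sum(1.0 / v for v in variances)
    mean = (prior_mean / prior_var +
            sum(h / v for h, v in zip(heights, variances))) / precision
    return mean, 1.0 / precision

# Two DSM cells for the same ground location plus a prior from smoothness
mean, var = fuse_dsms(heights=[101.0, 100.0], variances=[1.0, 0.25],
                      prior_mean=100.5, prior_var=4.0)
```

    The more precise source (variance 0.25) dominates the posterior, while the weak prior nudges the estimate only slightly; the posterior variance is smaller than that of any single source.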

  5. Mechanistic approaches to the study of evolution: the functional synthesis.

    PubMed

    Dean, Antony M; Thornton, Joseph W

    2007-09-01

    An emerging synthesis of evolutionary biology and experimental molecular biology is providing much stronger and deeper inferences about the dynamics and mechanisms of evolution than were possible in the past. The new approach combines statistical analyses of gene sequences with manipulative molecular experiments to reveal how ancient mutations altered biochemical processes and produced novel phenotypes. This functional synthesis has set the stage for major advances in our understanding of fundamental questions in evolutionary biology. Here we describe this emerging approach, highlight important new insights that it has made possible, and suggest future directions for the field.

  6. Elements of a function analytic approach to probability.

    SciTech Connect

    Ghanem, Roger Georges; Red-Horse, John Robert

    2008-02-01

    We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.

  7. Evaluation of the storage function model parameter characteristics

    NASA Astrophysics Data System (ADS)

    Sugiyama, Hironobu; Kadoya, Mutsumi; Nagai, Akihiro; Lansey, Kevin

    1997-04-01

    The storage function hydrograph model is one of the most commonly used models for flood runoff analysis in Japan. This paper studies the generality of the approach and its application to Japanese basins. Through a comparison of the basic equations for the models, the storage function model parameters, K, P, and T1, are shown to be related to the terms, k and p, in the kinematic wave model. This analysis showed that P and p are identical and K and T1 can be related to k, the basin area and its land use. To apply the storage function model throughout Japan, regional parameter relationships for K and T1 were developed for different land-use conditions using data from 22 watersheds and 91 flood events. These relationships combine the kinematic wave parameters with general topographic information using Hack's Law. The sensitivity of the parameters and their physical significance are also described.
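
    In its common form the storage function model routes effective rainfall through the nonlinear storage S = K·q^P with lag time T1, i.e. dS/dt = r(t − T1) − q. The following is a minimal sketch with illustrative parameter values and simple Euler integration, not the calibrated regional relationships developed in the paper.

```python
import numpy as np

def storage_function_runoff(rain, K=20.0, P=0.6, T1=2, dt=1.0):
    """Simplified storage function routing: S = K * q**P, dS/dt = r(t-T1) - q.

    `rain` is effective rainfall per step; T1 is the lag in whole steps.
    S is integrated with Euler steps and q recovered as (S/K)**(1/P).
    """
    S, out = 0.0, []
    for t in range(len(rain)):
        r = rain[t - T1] if t >= T1 else 0.0   # lagged effective rainfall
        q = (max(S, 0.0) / K) ** (1.0 / P)     # discharge from storage
        S = max(S + dt * (r - q), 0.0)
        out.append(q)
    return np.array(out)

rain = np.zeros(60)
rain[0:5] = 10.0                               # a 5-step rainfall burst
q = storage_function_runoff(rain)
```

    The hydrograph rises after the lag, peaks once the burst has passed through the storage, and then recedes; total outflow never exceeds total effective rainfall.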

  8. Functioning of the planktonic ecosystem of the Rhone River plume (NW Mediterranean) during spring and its impact on the carbon export: a field data and 3-D modelling combined approach

    NASA Astrophysics Data System (ADS)

    Auger, P. A.; Diaz, F.; Ulses, C.; Estournel, C.; Neveux, J.; Joux, F.; Pujo-Pay, M.; Naudin, J. J.

    2010-12-01

    Low-salinity water (LSW, Salinity < 37.5) lenses detached from the Rhone River plume under specific wind conditions tend to favour biological productivity and potentially a transfer of energy to higher trophic levels in the Gulf of Lions (GoL). A field cruise conducted in May 2006 (BIOPRHOFI) followed some LSW lenses using a lagrangian strategy. A thorough analysis of the available data set enabled us to further improve our understanding of the LSW lenses' functioning and their potential influence on marine ecosystems. Through an innovative 3-D coupled hydrodynamic-biogeochemical modelling approach, a specific calibration dedicated to river plume ecosystems was then proposed and validated on field data. Exploring the role of ecosystems in the particulate organic carbon (POC) export and deposition on the shelf, a sensitivity analysis to the particulate organic matter inputs from the Rhone River was carried out from 1 April to 15 July 2006. Over such a typical end-of-spring period marked by moderate floods, the main deposition area of POC was identified alongshore between 0 and 50 m depth on the GoL, extending the Rhone prodelta to the west towards the exit of the shelf. Moreover, the main deposition area of terrestrial POC was found in the prodelta region, which confirms recent results from sediment data. The average daily deposition of particulate organic carbon over the whole GoL is estimated by the model at between 40 and 80 mgC/m2, which is in the range of previous estimates. The role of ecosystems in the POC export toward sediments or offshore areas was thus highlighted, and feedbacks between ecosystems and particulate organic matter are proposed to explain paradoxical model responses to the sensitivity test.
In fact, the conversion of organic matter in living organisms would increase the retention of organic matter in the food web and this matter transfer along the food web could explain the minor quantity of POC of marine origin observed in the

  9. Modelling approaches: the case of schizophrenia.

    PubMed

    Heeg, Bart M S; Damen, Joep; Buskens, Erik; Caleo, Sue; de Charro, Frank; van Hout, Ben A

    2008-01-01

    Schizophrenia is a chronic disease characterized by periods of relative stability interrupted by acute episodes (or relapses). The course of the disease may vary considerably between patients. Patient histories show considerable inter- and even intra-individual variability. We provide a critical assessment of the advantages and disadvantages of three modelling techniques that have been used in schizophrenia: decision trees, (cohort and micro-simulation) Markov models and discrete event simulation models. These modelling techniques are compared in terms of building time, data requirements, medico-scientific experience, simulation time, clinical representation, and their ability to deal with patient heterogeneity, the timing of events, prior events, patient interaction, interaction between co-variates and variability (first-order uncertainty). We note that, depending on the research question, the optimal modelling approach should be selected based on the expected differences between the comparators, the number of co-variates, the number of patient subgroups, the interactions between co-variates, and simulation time. Finally, it is argued that when micro-simulation is required for the cost-effectiveness analysis of schizophrenia treatments, a discrete event simulation model is best suited to accurately capture all of the relevant interdependencies in this chronic, highly heterogeneous disease with limited long-term follow-up data.
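
    The case for discrete event simulation can be illustrated with a toy model: each simulated patient carries an individual relapse rate (heterogeneity), and relapse events are drawn from a time-ordered queue rather than fixed Markov cycles. This is a generic sketch with invented rates and horizon, not the authors' model.

```python
import heapq
import random

def simulate_relapses(n_patients=1000, horizon=5.0, seed=1):
    """Toy DES: each patient relapses at exponentially distributed
    intervals with a patient-specific rate (relapses per year)."""
    rng = random.Random(seed)
    rates = [rng.uniform(0.2, 1.0) for _ in range(n_patients)]
    # Event queue of (time, patient) pairs, processed in time order
    events = [(rng.expovariate(r), i) for i, r in enumerate(rates)]
    heapq.heapify(events)
    relapses = 0
    while events:
        t, i = heapq.heappop(events)
        if t > horizon:                 # earliest event past horizon: stop
            break
        relapses += 1                   # handle the relapse at time t
        heapq.heappush(events, (t + rng.expovariate(rates[i]), i))
    return relapses

total = simulate_relapses()
```

    Unlike a cohort Markov model, this structure keeps each patient's history and rate distinct, which is exactly the interdependency the abstract argues DES captures well.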

  10. Comparison and Contrast of Two General Functional Regression Modeling Frameworks

    PubMed Central

    Morris, Jeffrey S.

    2017-01-01

    In this article, Greven and Scheipl describe an impressively general framework for performing functional regression that builds upon the generalized additive modeling framework. Over the past number of years, my collaborators and I have also been developing a general framework for functional regression, functional mixed models, which shares many similarities with this framework, but has many differences as well. In this discussion, I compare and contrast these two frameworks, to hopefully illuminate characteristics of each, highlighting their respective strengths and weaknesses, and providing recommendations regarding the settings in which each approach might be preferable. PMID:28736502

  11. A motivic approach to phase transitions in Potts models

    NASA Astrophysics Data System (ADS)

    Aluffi, Paolo; Marcolli, Matilde

    2013-01-01

    We describe an approach to the study of phase transitions in Potts models based on an estimate of the complexity of the locus of real zeros of the partition function, computed in terms of the classes in the Grothendieck ring of the affine algebraic varieties defined by the vanishing of the multivariate Tutte polynomial. We give completely explicit calculations for the examples of the chains of linked polygons and of the graphs obtained by replacing the polygons with their dual graphs. These are based on a deletion-contraction formula for the Grothendieck classes and on generating functions for splitting and doubling edges.
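
    The object whose real zero locus is studied here is the multivariate Tutte polynomial Z_G(q, v) = Σ_{A⊆E} q^{k(A)} Π_{e∈A} v_e, which for v_e = e^{βJ_e} − 1 is the q-state Potts partition function. With all v_e equal, a brute-force sketch over edge subsets (for intuition only; the paper works with deletion-contraction on Grothendieck classes, not numerics):

```python
from itertools import combinations

def components(n, edges):
    """Number of connected components of (V, edges) with V = range(n)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(x) for x in range(n)})

def multivariate_tutte(n, edges, q, v):
    """Z_G(q, v) = sum over edge subsets A of q**k(A) * v**|A| (all v_e = v)."""
    total = 0.0
    for r in range(len(edges) + 1):
        for A in combinations(edges, r):
            total += q ** components(n, A) * v ** r
    return total

# Triangle graph: Z = q**3 + 3*q**2*v + 3*q*v**2 + q*v**3
Z = multivariate_tutte(3, [(0, 1), (1, 2), (0, 2)], q=2, v=1)
```

    At q = 2 and v = 1 (i.e. e^{βJ} = 2) the triangle gives Z = 8 + 12 + 6 + 2 = 28, matching a direct sum over the 2^3 Ising/Potts colorings.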

  12. Computational approaches for rational design of proteins with novel functionalities

    PubMed Central

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643

  13. Proteomic approaches to dissect platelet function: half the story

    PubMed Central

    Gnatenko, Dmitri V.; Perrotta, Peter L.; Bahou, Wadie F.

    2006-01-01

    Platelets play critical roles in diverse hemostatic and pathologic disorders and are broadly implicated in various biological processes that include inflammation, wound healing, and thrombosis. Recent progress in high-throughput mRNA and protein profiling techniques has advanced our understanding of the biological functions of platelets. Platelet proteomics has been adopted to decode the complex processes that underlie platelet function by identifying novel platelet-expressed proteins, dissecting mechanisms of signal or metabolic pathways, and analyzing functional changes of the platelet proteome in normal and pathologic states. The integration of transcriptomics and proteomics, coupled with progress in bioinformatics, provides novel tools for dissecting platelet biology. In this review, we focus on current advances in platelet proteomic studies, with emphasis on the importance of parallel transcriptomic studies to optimally dissect platelet function. Applications of these global profiling approaches to investigate platelet genetic diseases and platelet-related disorders are also addressed. PMID:16926286

  14. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2016-02-08

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
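As a concrete illustration of the FPCA special case, here is a minimal grid-discretized sketch: curves observed on a common grid are stacked into a matrix, the mean function is removed, and an SVD yields principal component functions and scores. The paper's actual method uses basis-function approximations and an alternating regularized least squares algorithm, and also covers the FMCCA and compromise cases, which this toy example does not; all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                     # common evaluation grid
scores_true = rng.normal(size=(50, 2))             # latent component scores
modes = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
X = scores_true @ modes + 0.05 * rng.normal(size=(50, t.size))  # 50 noisy curves

Xc = X - X.mean(axis=0)                            # subtract the mean function
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # SVD = discretized FPCA
explained = s**2 / np.sum(s**2)                    # variance share per component
pc_functions = Vt[:2]                              # leading eigenfunctions on the grid
pc_scores = Xc @ pc_functions.T                    # low-dimensional scores per curve
```

On this synthetic example the two leading components recover (rotations of) the sine and cosine modes and account for nearly all of the variance.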

  15. Modeling Negotiation by a Participatory Approach

    NASA Astrophysics Data System (ADS)

    Torii, Daisuke; Ishida, Toru; Bousquet, François

    In the participatory approach used by social scientists, role-playing games (RPGs) are effective for understanding the real thinking and behavior of stakeholders, but an RPG alone is not sufficient to handle a dynamic process like negotiation. In this study, a participatory simulation in which user-controlled avatars and autonomous agents coexist is introduced into the participatory approach for modeling negotiation. To establish a modeling methodology for negotiation, we have tackled the following two issues. First, to enable domain experts to concentrate on interaction design for participatory simulation, we have adopted an architecture in which an interaction layer controls agents and have defined three types of interaction descriptions (interaction protocol, interaction scenario, and avatar control scenario). Second, to enable domain experts and stakeholders to capitalize on participatory simulation, we have established a four-step process for acquiring a negotiation model: 1) surveys and interviews with stakeholders, 2) RPG sessions, 3) interaction design, and 4) participatory simulation. Finally, we discuss our methodology through a case study in agricultural economics in northeast Thailand.

  16. Hierarchical approaches for systems modeling in cardiac development.

    PubMed

    Gould, Russell A; Aboulmouna, Lina M; Varner, Jeffrey D; Butcher, Jonathan T

    2013-01-01

    Ordered cardiac morphogenesis and function are essential for all vertebrate life. The heart begins as a simple contractile tube, but quickly grows and morphs into a multichambered pumping organ complete with valves, while maintaining regulation of blood flow and nutrient distribution. Though not identical, cardiac morphogenesis shares many molecular and morphological processes across vertebrate species. Quantitative data across multiple time and length scales have been gathered through decades of reductionist single-variable analyses. These range from detailed molecular signaling pathways at the cellular level to cardiac function at the tissue/organ level. However, none of these components act in true isolation from the others, and each, in turn, exhibits short- and long-range effects in both time and space. In the absence of a single gene, entire signaling cascades and genetic profiles may be shifted, resulting in complex feedback mechanisms. Taking local microenvironmental changes throughout development into account as well, it is apparent that a systems-level approach is an essential resource for accelerating the generation of information about functional relationships across multiple length scales (molecular data vs physiological function) and structural development. In this review, we discuss relevant in vivo and in vitro experimental approaches, compare different computational frameworks for systems modeling, and summarize the latest work on systems modeling of cardiac development. Finally, we conclude with some important future directions for cardiac systems modeling.

  17. Data Mining Approaches for Modeling Complex Electronic Circuit Design Activities

    SciTech Connect

    Kwon, Yongjin; Omitaomu, Olufemi A; Wang, Gi-Nam

    2008-01-01

    A printed circuit board (PCB) is an essential part of modern electronic circuits. It is made of a flat panel of insulating materials with patterned copper foils that act as electric pathways for various components such as ICs, diodes, capacitors, resistors, and coils. The size of PCBs has been shrinking over the years, while the number of components mounted on these boards has increased considerably. This trend makes the design and fabrication of PCBs ever more difficult. At the beginning of a design cycle, it is important to estimate accurately the time needed to complete the required steps, based on factors such as the required parts, the approximate board size and shape, and a rough sketch of the schematics. The current approach uses the multiple linear regression (MLR) technique for time and cost estimation. However, the need for accurate predictive models continues to grow as the technology becomes more advanced. In this paper, we analyze a large volume of historical PCB design data, extract some important variables, and develop predictive models based on the extracted variables using a data mining approach. The data mining approach uses an adaptive support vector regression (ASVR) technique; the benchmark is the MLR technique currently used in the industry. The strengths of SVR for these data include its ability to represent data in high-dimensional space through kernel functions. The computational results show that the data mining approach is the better prediction technique for these data. Our approach reduces computation time and enhances the practical applications of the SVR technique.

  18. Connectotyping: model based fingerprinting of the functional connectome.

    PubMed

    Miranda-Dominguez, Oscar; Mills, Brian D; Carpenter, Samuel D; Grant, Kathleen A; Kroenke, Christopher D; Nigg, Joel T; Fair, Damien A

    2014-01-01

    A better characterization of how an individual's brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called "connectotype", or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model's ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach.
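The abstract's linear model, in which each region's activity is the weighted sum of the other regions, can be sketched with ordinary least squares; the row-wise coefficient matrix then plays the role of the model-based connectivity matrix. This is a simplification of the published approach (which fits rs-fcMRI timeseries and tests prediction across sessions); the function name and the synthetic data are ours.

```python
import numpy as np

def connectotype(ts):
    """Fit each region's timeseries as a weighted sum of all other regions
    (ordinary least squares). ts: (n_frames, n_regions). Returns the
    model-based connectivity matrix B, with B[i, j] the weight of region j
    in the prediction of region i (diagonal fixed at zero)."""
    n = ts.shape[1]
    B = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        beta, *_ = np.linalg.lstsq(ts[:, others], ts[:, i], rcond=None)
        B[i, others] = beta
    return B

rng = np.random.default_rng(1)
ts = rng.normal(size=(300, 10))                          # 10 regions, 300 frames
ts[:, 0] = 0.7 * ts[:, 1] - 0.3 * ts[:, 2] + 0.01 * rng.normal(size=300)
B = connectotype(ts)
predicted = ts @ B.T                                     # per-region predicted timeseries
```

On this toy data the planted weights are recovered (B[0, 1] ≈ 0.7, B[0, 2] ≈ −0.3); in the published setting the fitted matrix serves as the individual "connectotype".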

  19. Comparative flood damage model assessment: Towards a European approach

    NASA Astrophysics Data System (ADS)

    Jongman, B.; Kreibich, H.; Bates, P. D.; de Roo, A. P. J.; Barredo, J. I.; Gericke, A.; Apel, H.; Neal, J.; Aerts, J. C. J. H.; Ward, P. J.

    2012-04-01

    There is a wide variety of flood damage assessment models in use across countries and institutions, with large variations in their approaches and assumptions. In this study we compare seven established methodologies qualitatively and quantitatively, in order to identify key factors that should be taken into consideration in the development of a pan-European flood damage model. In the comparison, we included seven different flood damage models: FLEMO (Germany), Damage Scanner (The Netherlands), Rhine Atlas (Rhine basin), the Flemish method (Belgium), Multi-Coloured Manual (United Kingdom), HAZUS-MH (United States) and the aggregated EC-JRC approach (European Commission). The study is based on two case-studies of historical flood events, for which both hydrological and land-use data are available, as well as data on observed economic damages. One case-study is based on a 2002 flood event in Eilenburg, Germany. The second case-study covers the 2005 flooding in Carlisle, United Kingdom. We found that the models designed for the specific regions come very close to estimating the observed economic damage. A sensitivity analysis shows that the model results are most sensitive to variation in assumed maximum damage values, and almost as much to variation in the applied depth-damage functions. On the basis of these results, we propose the development of a Europe-wide flood damage model that is based on disaggregated land-use data, local asset values and a variable set of depth-damage functions.
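The core calculation shared by the compared models (damage = depth-damage fraction × maximum damage value, aggregated over land-use units) can be sketched as follows. The curves and per-square-metre maximum damage values below are illustrative assumptions, not values from FLEMO, the Damage Scanner, or any other model in the study.

```python
import numpy as np

# Illustrative depth-damage curves: fraction of the maximum damage incurred
# as a function of inundation depth (m), one curve per land-use class.
DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
DAMAGE_FRACTION = {
    "residential": np.array([0.0, 0.25, 0.40, 0.60, 0.85]),
    "commercial":  np.array([0.0, 0.15, 0.30, 0.55, 0.90]),
}
MAX_DAMAGE_EUR_PER_M2 = {"residential": 600.0, "commercial": 900.0}  # assumed asset values

def cell_damage(land_use, depth_m, area_m2):
    """Damage for one land-use cell: interpolated curve value x max damage x area."""
    fraction = np.interp(depth_m, DEPTHS, DAMAGE_FRACTION[land_use])
    return fraction * MAX_DAMAGE_EUR_PER_M2[land_use] * area_m2

total = cell_damage("residential", 1.5, 2500.0) + cell_damage("commercial", 0.8, 1000.0)
```

Because the estimate is a product, it is directly sensitive to both the assumed maximum damage values and the shape of the depth-damage curves, which is consistent with the sensitivity result the abstract reports.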

  20. An approach to multiscale modelling with graph grammars.

    PubMed

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  1. Recreating an esthetically and functionally acceptable dentition: a multidisciplinary approach.

    PubMed

    Goyal, Mukesh Kumar; Goyal, Shelly; Hegde, Veena; Balkrishana, Dhanasekar; Narayana, Aparna I

    2013-01-01

    Patients today demand a youthful, attractive smile with comfortable functional acceptance. The complete oral rehabilitation of patients with a functionally compromised dentition frequently involves a multidisciplinary approach and presents a considerable clinical challenge. To a great extent, proper patient selection and careful interdisciplinary treatment planning, including acknowledgment of the patient's perceived needs, reasons for seeking services, financial ability, and socioeconomic profile, can govern the predictability of successful restorations. This clinical report describes a successful interdisciplinary approach for the management of a severely worn dentition with reduced vertical dimension of occlusion. Treatment modalities included periodontal crown lengthening procedures, endodontic treatment followed by post and core restorations, and prosthetic rehabilitation for severe tooth surface loss and reduced vertical dimension of occlusion comprising metal-ceramic restorations in esthetic zones and full-metal restorations in posterior regions.

  2. Similarity transformation approach to identifiability analysis of nonlinear compartmental models.

    PubMed

    Vajda, S; Godfrey, K R; Rabitz, H

    1989-04-01

    Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.

  3. A Multi-Level Model of Moral Functioning Revisited

    ERIC Educational Resources Information Center

    Reed, Don Collins

    2009-01-01

    The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…

  4. Fuzzy set approach to quality function deployment: An investigation

    NASA Technical Reports Server (NTRS)

    Masud, Abu S. M.

    1992-01-01

    The final report of the 1992 NASA/ASEE Summer Faculty Fellowship at the Space Exploration Initiative Office (SEIO) in Langley Research Center is presented. Quality Function Deployment (QFD) is a process focused on facilitating the integration of the customer's voice in the design and development of a product or service. Various inputs, in the form of judgements and evaluations, are required during the QFD analyses. All the input variables in these analyses are treated as numeric variables. The purpose of the research was to investigate how QFD analyses can be performed when some or all of the input variables are treated as linguistic variables with values expressed as fuzzy numbers. The reason for this consideration is that human judgement, perception, and cognition are often ambiguous and are better represented as fuzzy numbers. Two approaches for using fuzzy sets in QFD have been proposed. In both cases, all the input variables are considered as linguistic variables with values indicated as linguistic expressions. These expressions are then converted to fuzzy numbers. The difference between the two approaches lies in how the QFD computations are performed with these fuzzy numbers. In Approach 1, the fuzzy numbers are first converted to their equivalent crisp scores and the QFD computations are then performed using these crisp scores. As a result, the outputs of this approach are crisp numbers, similar to those in traditional QFD. In Approach 2, all the QFD computations are performed with the fuzzy numbers, and the outputs are fuzzy numbers as well. Both approaches have been explained with the help of illustrative examples of QFD application. Approach 2 has also been applied in a QFD application exercise in SEIO, involving a 'mini moon rover' design. The mini moon rover is a proposed tele-operated vehicle that will traverse and perform various tasks, including autonomous operations, on the moon surface. The output of the moon rover application exercise is a

  6. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, AI-TR 1457, 545 Technology Square, Cambridge, Massachusetts 02139. A Radial Basis Function Approach to Financial Time Series Analysis, by James M. Hutchinson, Master of Science in EECS, Massachusetts Institute of Technology (1986).

  7. Diagnosis of Analog Electronic Circuits: A Functional Approach.

    DTIC Science & Technology

    1986-03-01

    Describes a diagnostic system for analog electronic circuits implemented in an artificial intelligence environment using an expert system. Includes an introduction to the topic, research into current technology, and an experimental system implemented in an attempt to correct problems with existing approaches. The report opens with an overview of Air Force attempts to intelligently diagnose electronic circuits.

  8. Modeling for fairness: A Rawlsian approach.

    PubMed

    Diekmann, Sven; Zwart, Sjoerd D

    2014-06-01

    In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement on decisions, as in most other stakeholder approaches; it is also an agreement on their justification, and this justification must be consistent with each stakeholder's beliefs. In support of fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that reaching an overlapping design consensus requires communication about the properties of, and values related to, a model.

  9. Merging RANS & LES approaches in submesoscale modeling

    NASA Astrophysics Data System (ADS)

    Fock, B. H.; Schluenzen, K. H.

    2010-09-01

    Merging LES and RANS simulation is important for extending the application range of mesoscale models to the sub-mesoscale. Hence, many traditional mesoscale modeling groups are currently working on adding LES capabilities to their models. To investigate the differences that occur when switching from RANS to LES approaches, simulations with METRAS and METRAS-LES (Fock, 2007) are presented. These differences are investigated in terms of effects caused by the choice of the computational grid and the sub-grid scale closures. Simulations of convective boundary layers on two different grids are compared to investigate the influence of vertical grid spacing and extent. One simulation is carried out on a high-resolution, vertically homogeneous grid and the other on a vertically stretched grid with coarser resolution at higher altitudes. The stretched grid is defined vertically as it would be in the standard setup of the mesoscale model. Hence, this investigation shows to what extent the eddy-resolving capability of an LES model is affected by the transition to a grid that is vertically the same as those typically used in mesoscale modeling. The differences that arise from using different approaches for subgrid-scale turbulence are quantified and compared with the effects caused by the computational grid. In addition, some details of the LES SGS closure used (Deardorff, 1980) are investigated, concerning the importance of the reduced characteristic filter length scale for stable stratification. The main focus, however, is on comparing RANS and LES and discussing their combination in a mixed turbulence scheme that applies the LES closure in the atmospheric boundary layer and a RANS-based turbulence model in the stable atmosphere above. References: Deardorff J. W. (1980): Stratocumulus-capped mixed layers derived from a three-dimensional model. Boundary-Layer Meteorology, 18(4), 495-527. DOI:10.1007/BF00119502. Fock B. H. (2007): METRAS

  10. Statistical modeling approach for detecting generalized synchronization

    NASA Astrophysics Data System (ADS)

    Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon

    2012-05-01

    Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex.
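A stripped-down version of the regression at the heart of the method might look like the following: build truncated second-order Volterra features from lagged samples of a driving signal and fit them to the response with an l2 (ridge) penalty. The paper models the Volterra kernels with B-spline bases and uses l1 as well as l2 regularized maximum likelihood; this sketch uses raw lag products and plain ridge via the normal equations, on synthetic coupled data.

```python
import numpy as np

def volterra_features(x, memory):
    """First- and second-order truncated Volterra features of a 1-D signal.
    Row i holds x[t], ..., x[t-memory+1] and their pairwise products, t = memory + i."""
    n = len(x) - memory
    lagged = np.column_stack([x[memory - k : memory - k + n] for k in range(memory)])
    quad = np.column_stack([lagged[:, i] * lagged[:, j]
                            for i in range(memory) for j in range(i, memory)])
    return np.column_stack([np.ones(n), lagged, quad])

rng = np.random.default_rng(2)
x = rng.normal(size=1000)                      # driving signal
# response depends nonlinearly on current and lagged x, plus weak noise
y = 0.8 * x[1:] * x[:-1] + 0.3 * x[1:] + 0.02 * rng.normal(size=999)

memory = 3
Phi = volterra_features(x, memory)             # (997, 10) design matrix
target = y[memory - 1:]                        # align responses with feature rows
# ridge (l2) fit via the regularized normal equations
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ target)
r2 = 1 - np.var(target - Phi @ w) / np.var(target)
```

Because the feature set contains the interaction x[t]·x[t−1], the fit explains nearly all of the response variance on this synthetic system; a failure of such a fit on real data would argue against generalized synchrony of this order and memory.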

  11. Casimir force in brane worlds: Coinciding results from Green's and zeta function approaches

    SciTech Connect

    Linares, Roman; Morales-Tecotl, Hugo A.; Pedraza, Omar

    2010-06-15

    Casimir force encodes the structure of the field modes as vacuum fluctuations and so it is sensitive to the extra dimensions of brane worlds. Now, in flat spacetimes of arbitrary dimension the two standard approaches to the Casimir force, Green's function, and zeta function yield the same result, but for brane world models this was only assumed. In this work we show that both approaches yield the same Casimir force in the case of universal extra dimensions and Randall-Sundrum scenarios with one and two branes added by p compact dimensions. Essentially, the details of the mode eigenfunctions that enter the Casimir force in the Green's function approach get removed due to their orthogonality relations with a measure involving the right hypervolume of the plates, and this leaves just the contribution coming from the zeta function approach. The present analysis corrects previous results showing a difference between the two approaches for the single brane Randall-Sundrum; this was due to an erroneous hypervolume of the plates introduced by the authors when using the Green's function. For all the models we discuss here, the resulting Casimir force can be neatly expressed in terms of two four-dimensional Casimir force contributions: one for the massless mode and the other for a tower of massive modes associated with the extra dimensions.

  12. A hidden Markov model approach to neuron firing patterns.

    PubMed Central

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-01-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581
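The likelihood computation underlying such a fit can be sketched with the scaled forward algorithm. The paper estimates all parameters by maximum likelihood and does not commit to a particular interval density; the exponential interval densities, the parameter values, and the two-state "bursting vs tonic" simulation below are illustrative assumptions.

```python
import numpy as np

def hmm_loglik(intervals, trans, init, rates):
    """Log-likelihood of interspike intervals under a hidden Markov model with
    exponential interval densities (one firing rate per state), computed with
    the scaled forward algorithm."""
    dens = rates * np.exp(-np.outer(intervals, rates))  # (T, K) emission densities
    alpha = init * dens[0]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for d in dens[1:]:
        alpha = (alpha @ trans) * d        # propagate states, weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

rng = np.random.default_rng(3)
trans = np.array([[0.9, 0.1], [0.2, 0.8]])   # sticky two-state chain
rates = np.array([20.0, 2.0])                # fast (bursting) vs slow (tonic) firing
states = [0]
for _ in range(499):
    states.append(rng.choice(2, p=trans[states[-1]]))
intervals = rng.exponential(1.0 / rates[np.array(states)])

ll_true = hmm_loglik(intervals, trans, np.array([0.5, 0.5]), rates)
ll_one_rate = hmm_loglik(intervals, trans, np.array([0.5, 0.5]), np.array([5.0, 5.0]))
```

The true two-rate model assigns the simulated train a higher likelihood than a degenerate model whose states share one rate, the kind of comparison used when estimating the number of underlying states.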

  13. Modelling approaches for bio-manufacturing operations.

    PubMed

    Chhatre, Sunil

    2013-01-01

    Fast and cost-effective methods are needed to reduce the time and money needed for drug commercialisation and to determine the risks involved in adopting specific manufacturing strategies. Simulations offer one such approach for exploring design spaces before significant process development is carried out and can be used from the very earliest development stages through to scale-up and optimisation of operating conditions and resource deployment patterns both before and after plant start-up. The advantages this brings in terms of financial savings can be considerable, but to achieve these requires a full appreciation of the complexities of processes and how best to represent them mathematically within the context of in silico software. This chapter provides a summary of some of the work that has been carried out in the areas of mathematical modelling and discrete event simulations for production, recovery and purification operations when designing bio-pharmaceutical processes, looking at both financial and technical modelling.

  14. Autonomic response to approachability characteristics, approach behavior, and social functioning in Williams syndrome.

    PubMed

    Järvinen, Anna; Ng, Rowena; Bellugi, Ursula

    2015-11-01

    Williams syndrome (WS) is a neurogenetic disorder that is saliently characterized by a unique social phenotype, most notably associated with a dramatically increased affinity and approachability toward unfamiliar people. Despite a recent proliferation of studies into the social profile of WS, the underpinnings of the pro-social predisposition are poorly understood. To this end, the present study was aimed at elucidating approach behavior of individuals with WS contrasted with typical development (TD) by employing a multidimensional design combining measures of autonomic arousal, social functioning, and two levels of approach evaluations. Given previous evidence suggesting that approach behaviors of individuals with WS are driven by a desire for social closeness, approachability tendencies were probed across two levels of social interaction: talking versus befriending. The main results indicated that while overall level of approachability did not differ between groups, an important qualitative between-group difference emerged across the two social interaction contexts: whereas individuals with WS demonstrated a similar willingness to approach strangers across both experimental conditions, TD individuals were significantly more willing to talk to than to befriend strangers. In WS, high approachability to positive faces across both social interaction levels was further associated with more normal social functioning. A novel finding linked autonomic responses with willingness to befriend negative faces in the WS group: elevated autonomic responsivity was associated with increased affiliation to negative face stimuli, which may represent an autonomic correlate of approach behavior in WS. Implications for underlying organization of the social brain are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. On an approach for computing the generating functions of the characters of simple Lie algebras

    NASA Astrophysics Data System (ADS)

    Fernández Núñez, José; García Fuertes, Wifredo; Perelomov, Askold M.

    2014-04-01

    We describe a general approach to obtain the generating functions of the characters of simple Lie algebras which is based on the theory of the quantum trigonometric Calogero-Sutherland model. We show how the method works in practice by means of a few examples involving some low rank classical algebras.

  16. [Functional Analytic Psychotherapy: Approaches and scope of behavior therapy based on changes in the therapeutic context].

    PubMed

    Muñoz-Martínez, Amanda M; Coletti, Juan P

    2015-01-01

    Functional Analytic Psychotherapy (FAP) is a therapeutic approach developed in the context of the 'third wave' of behavior therapies. FAP is characterized by the use of the therapeutic relationship, and the behaviors emitted within it, to improve clients' daily life functioning. This therapeutic model is grounded in the principles of behavior analysis and the philosophy of functional contextualism. FAP proposes that clients' in-session behaviors are functionally equivalent to those outside the session; therefore, when therapists respond contingently to clients' in-session behaviors, they promote and increase improvements in the natural setting. This article presents the main features of FAP, its philosophical roots, its achievements, and the research challenges involved in establishing FAP as an independent evidence-based treatment.

  18. Selecting Bayesian priors for stochastic rates using extended functional models

    NASA Astrophysics Data System (ADS)

    Gibson, Gavin J.

    2003-04-01

    We propose an extension to the functional modelling methods described by Dawid and Stone (1982 Ann. Stat. 10 1119-38) that leads naturally to a method for selecting vague parameter priors for Bayesian analyses involving stochastic population models. Motivated by applications from quantum optics and epidemiology, we focus on analysing observed sequences of event times obeying a non-homogeneous Poisson process, although the techniques are more widely applicable. The extended functional modelling approach is illustrated for the particular case of Bayesian estimation of the death rate in the immigration-death model from observation of the death times only. It is shown that the prior selected naturally leads to a well defined posterior density for parameters and avoids some undesirable pathologies reported by Gibson and Renshaw (2001a Inverse Problems 17 455-66, 2001b Stat. Comput. 11 347-58) for the case of exponential priors. Some limitations of the approach are also discussed.
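
    As a minimal, hedged illustration of the Bayesian machinery the abstract builds on (the homogeneous special case only, with an assumed Gamma prior, not the paper's extended functional construction), the conjugate update for a constant event rate observed over a window [0, T] is:

```python
def gamma_posterior(alpha, beta, event_times, T):
    """Conjugate Gamma update for a homogeneous Poisson rate:
    prior Gamma(alpha, beta); n events observed on [0, T] give
    posterior Gamma(alpha + n, beta + T)."""
    n = len(event_times)
    return alpha + n, beta + T

# Vague prior (alpha=1, beta=0.01) and 5 hypothetical death times over T=10:
a_post, b_post = gamma_posterior(1.0, 0.01, [0.8, 2.1, 4.4, 7.0, 9.3], 10.0)
posterior_mean = a_post / b_post
```

    For the non-homogeneous processes treated in the paper, the likelihood instead involves the integrated intensity, and the prior choice is exactly the issue the extended functional approach addresses.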

  19. Novel approaches in function-driven single-cell genomics

    DOE PAGES

    Doud, Devin F. R.; Woyke, Tanja

    2017-06-07

    Deeper sequencing and improved bioinformatics in conjunction with single-cell and metagenomic approaches continue to illuminate undercharacterized environmental microbial communities. This has propelled the 'who is there, and what might they be doing' paradigm to the uncultivated and has already radically changed the topology of the tree of life and provided key insights into the microbial contribution to biogeochemistry. While characterization of 'who' based on marker genes can describe a large fraction of the community, answering 'what are they doing' remains the elusive pinnacle for microbiology. Function-driven single-cell genomics provides a solution by using a function-based screen to subsample complex microbial communities in a targeted manner for the isolation and genome sequencing of single cells. This enables single-cell sequencing to be focused on cells with specific phenotypic or metabolic characteristics of interest. Recovered genomes are conclusively implicated for both encoding and exhibiting the feature of interest, improving downstream annotation and revealing activity levels within that environment. This emerging approach has already improved our understanding of microbial community functioning and facilitated the experimental analysis of uncharacterized gene product space. Here we provide a comprehensive review of strategies that have been applied for function-driven single-cell genomics and the future directions we envision.

  20. Combining formal and functional approaches to topic structure.

    PubMed

    Zellers, Margaret; Post, Brechtje

    2012-03-01

    Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to adopt the insights of both PTI's qualitative analysis and EP's quantitative analysis and combine them into a multiple-methods approach. One realm in which it is possible to combine these frameworks is in the analysis of discourse topic structure and the prosodic cues relevant to it. By combining a quantitative and a qualitative approach to discourse topic structure, it is possible to give a better account of the observed variation in prosody, for example in the case of fundamental frequency (F0) peak timing, which can be explained in terms of pitch accent distribution over different topic structure categories. Similarly, local and global patterns in speech rate variation can be better explained and motivated by adopting insights from both PTI and EP in the study of topic structure. Combining PTI and EP can provide better accounts of speech data as well as opening up new avenues of investigation which would not have been possible in either approach alone.

  1. Development of a structured approach for decomposition of complex systems on a functional basis

    NASA Astrophysics Data System (ADS)

    Yildirim, Unal; Felician Campean, I.

    2014-07-01

    The purpose of this paper is to present the System State Flow Diagram (SSFD) as a structured and coherent methodology for decomposing a complex system on a solution-independent functional basis. The paper starts by reviewing common function modelling frameworks in the literature and discusses the practical requirements of the SSFD in the context of current research and industry practice. The proposed methodology is illustrated through the analysis of a case study: design analysis of a generic Bread Toasting System (BTS).

  2. NARX prediction of some rare chaotic flows: Recurrent fuzzy functions approach

    NASA Astrophysics Data System (ADS)

    Goudarzi, Sobhan; Jafari, Sajad; Moradi, Mohammad Hassan; Sprott, J. C.

    2016-02-01

    The ability of time-domain models to accommodate nonlinear dynamics makes them a useful representation of chaotic time series for analysis, modeling and prediction. This paper is devoted to the modeling and prediction of chaotic time series with hidden attractors using a nonlinear autoregressive model with exogenous inputs (NARX) based on a novel recurrent fuzzy functions (RFFs) approach. Case studies of recently introduced chaotic systems with hidden attractors, plus classical chaotic systems, demonstrate that the proposed modeling methodology exhibits better prediction performance, over both short and long horizons, than some other existing methods.
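
    A minimal sketch of the NARX idea, regressing the next value on functions of past values, is shown below on the logistic map; it uses plain polynomial regressors rather than the paper's recurrent fuzzy functions, and the map and parameters are illustrative assumptions:

```python
import numpy as np

# Chaotic series from the logistic map x_{n+1} = 4 x_n (1 - x_n).
x = np.empty(500)
x[0] = 0.3
for n in range(499):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# One-step NARX-style model: least-squares fit of x_{n+1}
# on the regressors [1, x_n, x_n^2].
X = np.column_stack([np.ones(499), x[:-1], x[:-1] ** 2])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)

x_pred = X @ coef  # essentially exact here, since the map is quadratic
```

    A real chaotic benchmark would use noisy data and out-of-sample prediction; this toy case only illustrates the autoregressive structure.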

  3. A modular approach to language production: models and facts.

    PubMed

    Valle-Lisboa, Juan C; Pomi, Andrés; Cabana, Álvaro; Elvevåg, Brita; Mizraji, Eduardo

    2014-06-01

    Numerous cortical disorders affect language. We explore the connection between the observed language behavior and the underlying substrates by adopting a neurocomputational approach. To represent the observed trajectories of the discourse in patients with disorganized speech and in healthy participants, we design a graphical representation for the discourse as a trajectory that allows us to visualize and measure the degree of order in the discourse as a function of the disorder of the trajectories. Our work assumes that many of the properties of language production and comprehension can be understood in terms of the dynamics of modular networks of neural associative memories. Based upon this assumption, we connect three theoretical and empirical domains: (1) neural models of language processing and production, (2) statistical methods used in the construction of functional brain images, and (3) corpus linguistic tools, such as Latent Semantic Analysis (henceforth LSA), that are used to discover the topic organization of language. We show how the neurocomputational models intertwine with LSA and the mathematical basis of functional neuroimaging. Within this framework we describe the properties of a context-dependent neural model, based on matrix associative memories, that performs goal-oriented linguistic behavior. We link these matrix associative memory models with the mathematics that underlie functional neuroimaging techniques and present the "functional brain images" emerging from the model. This provides us with a completely "transparent box" with which to analyze the implication of some statistical images. Finally, we use these models to explore the possibility that functional synaptic disconnection can lead to an increase in connectivity between the representations of concepts that could explain some of the alterations in discourse displayed by patients with schizophrenia.
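
    The LSA step mentioned above is, at its core, a truncated SVD of a term-document matrix. A minimal sketch with a hypothetical toy corpus (not the authors' data):

```python
import numpy as np

# Toy term-document count matrix (rows: terms, columns: documents).
# Documents 0 and 1 share vocabulary; document 2 uses disjoint terms.
A = np.array([[2, 1, 0],
              [1, 2, 0],
              [0, 1, 0],
              [0, 0, 2],
              [0, 0, 1]], dtype=float)

# Rank-2 LSA: keep the two largest singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
docs = (np.diag(s[:2]) @ Vt[:2]).T  # document coordinates in "topic" space

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Documents sharing vocabulary end up close in topic space; the unrelated
# document stays orthogonal.
```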

  4. Multiscale approach to equilibrating model polymer melts

    NASA Astrophysics Data System (ADS)

    Svaneborg, Carsten; Karimi-Varzaneh, Hossein Ali; Hojdis, Nils; Fleck, Frank; Everaers, Ralf

    2016-09-01

    We present an effective and simple multiscale method for equilibrating Kremer-Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed to be computationally effective at each scale. Density fluctuations in the melt structure above the tube scale are minimized through a Monte Carlo simulated annealing of a lattice polymer model. Subsequently the melt structure below the tube scale is equilibrated via the Rouse dynamics of a force-capped Kremer-Grest model that allows chains to partially interpenetrate. Finally the Kremer-Grest force field is introduced to freeze the topological state and enforce correct monomer packing. We generate 15 melts of 500 chains of 10,000 beads for varying chain stiffness, as well as a number of melts with 1,000 chains of 15,000 monomers. To validate the equilibration process we study the time evolution of bulk, collective, and single-chain observables at the monomeric, mesoscopic, and macroscopic length scales. Extension of the present method to longer, branched, or polydisperse chains, and/or larger system sizes is straightforward.

  5. A validated approach for modeling collapse of steel structures

    NASA Astrophysics Data System (ADS)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during their lifetime, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state of the art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality, and that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are
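
    The triaxiality measure that the element-deletion strategy hinges on is the ratio of mean (hydrostatic) stress to von Mises equivalent stress. A minimal sketch of this generic definition (not the author's calibrated fracture models):

```python
import numpy as np

def triaxiality(sigma):
    """Stress triaxiality eta = sigma_mean / sigma_vm for a 3x3 Cauchy
    stress tensor: hydrostatic part over von Mises equivalent stress."""
    sigma = np.asarray(sigma, dtype=float)
    mean = np.trace(sigma) / 3.0
    dev = sigma - mean * np.eye(3)          # deviatoric part
    vm = np.sqrt(1.5 * np.sum(dev * dev))   # von Mises equivalent stress
    return mean / vm

uniaxial = np.diag([100.0, 0.0, 0.0])       # uniaxial tension: eta = 1/3
shear = np.array([[0.0, 50.0, 0.0],
                  [50.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])         # pure shear: eta = 0
```

    Fracture loci used in such models are typically tabulated as equivalent plastic strain at failure versus this eta.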

  6. Modeling autism: a systems biology approach.

    PubMed

    Randolph-Gips, Mary; Srinivasan, Pramila

    2012-10-08

    Autism is the fastest growing developmental disorder in the world today. The prevalence of autism in the US has risen from 1 in 2500 in 1970 to 1 in 88 children today. People with autism present with repetitive movements and with social and communication impairments. These impairments can range from mild to profound. The estimated total lifetime societal cost of caring for one individual with autism is $3.2 million US dollars. With the rapid growth in this disorder and the great expense of caring for those with autism, it is imperative for both individuals and society that techniques be developed to model and understand autism. There is increasing evidence that individuals diagnosed with autism present with a highly diverse set of abnormalities affecting multiple systems of the body. To date, little to no work has been done using a whole-body systems biology approach to model the characteristics of this disorder. Identification and modeling of these systems might lead to new and improved treatment protocols, better diagnosis and treatment of the affected systems, which might lead to improved quality of life on their own, and, in addition, might also help the core symptoms of autism due to the potential interconnections between the brain and nervous system and all the other systems being modeled. This paper first reviews research which shows that autism impacts many systems in the body, including the metabolic, mitochondrial, immunological, gastrointestinal and the neurological. These systems interact in complex and highly interdependent ways. Many of these disturbances have effects in most of the systems of the body. In particular, clinical evidence exists for increased oxidative stress, inflammation, and immune and mitochondrial dysfunction which can affect almost every cell in the body. Three promising research areas are discussed: hierarchical modeling, subgroup analysis, and modeling over time.

  7. Functionalized Congener Approach to the Design of Ligands for G Protein–Coupled Receptors (GPCRs)

    PubMed Central

    Jacobson, Kenneth A.

    2009-01-01

    Functionalized congeners, in which a chemically functionalized chain is incorporated at an insensitive site on a pharmacophore, have been designed from the agonist and antagonist ligands of various G protein–coupled receptors (GPCRs). These chain extensions enable a conjugation strategy for detecting and characterizing GPCR structure and function and pharmacological modulation. The focus in many studies of functionalized congeners has been on two families of GPCRs: those responding to extracellular purines and pyrimidines—i.e., adenosine receptors (ARs) and P2Y nucleotide receptors. Functionalized congeners of small-molecule ligands for other GPCRs and for non-GPCR targets have also been designed. For example, among biogenic amine neurotransmitter receptors, muscarinic acetylcholine receptor antagonists and adrenergic receptor ligands have been studied with a functionalized congener approach. Adenosine A1, A2A, and A3 receptor functionalized congeners have yielded macromolecular conjugates, irreversibly binding AR ligands for receptor inactivation and crosslinking, radioactive probes that use prosthetic groups, immobilized ligands for affinity chromatography, and dual-acting ligands that function as binary drugs. Poly(amidoamine) dendrimers have served as nanocarriers for covalently conjugated AR functionalized congeners. Rational methods of ligand design derived from molecular modeling and templates have been included in these studies. Thus, this approach to the design of novel ligands, both small molecules and macromolecular conjugates, for studying the chemical and biological properties of GPCRs has provided researchers with a strategy more versatile than classical medicinal chemistry approaches. PMID:19405524

  8. A biopsychosocial approach to women's sexual function and dysfunction at midlife: A narrative review.

    PubMed

    Thomas, Holly N; Thurston, Rebecca C

    2016-05-01

    A satisfying sex life is an important component of overall well-being, but sexual dysfunction is common, especially in midlife women. The aim of this review is (a) to define sexual function and dysfunction, (b) to present theoretical models of female sexual response, (c) to examine longitudinal studies of how sexual function changes during midlife, and (d) to review treatment options. Four types of female sexual dysfunction are currently recognized: Female Orgasmic Disorder, Female Sexual Interest/Arousal Disorder, Genito-Pelvic Pain/Penetration Disorder, and Substance/Medication-Induced Sexual Dysfunction. However, optimal sexual function transcends the simple absence of dysfunction. A biopsychosocial approach that simultaneously considers physical, psychological, sociocultural, and interpersonal factors is necessary to guide research and clinical care regarding women's sexual function. Most longitudinal studies reveal an association between advancing menopause status and worsening sexual function. Psychosocial variables, such as availability of a partner, relationship quality, and psychological functioning, also play an integral role. Future directions for research should include deepening our understanding of how sexual function changes with aging and developing safe and effective approaches to optimizing women's sexual function with aging. Overall, holistic, biopsychosocial approaches to women's sexual function are necessary to fully understand and treat this key component of midlife women's well-being. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. A biopsychosocial approach to women’s sexual function and dysfunction at midlife: A narrative review

    PubMed Central

    Thomas, Holly N.; Thurston, Rebecca C.

    2016-01-01

    A satisfying sex life is an important component of overall well-being, but sexual dysfunction is common, especially in midlife women. The aim of this review is (a) to define sexual function and dysfunction, (b) to present theoretical models of female sexual response, (c) to examine longitudinal studies of how sexual function changes during midlife, and (d) to review treatment options. Four types of female sexual dysfunction are currently recognized: Female Orgasmic Disorder, Female Sexual Interest/Arousal Disorder, Genito-Pelvic Pain/Penetration Disorder, and Substance/Medication-Induced Sexual Dysfunction. However, optimal sexual function transcends the simple absence of dysfunction. A biopsychosocial approach that simultaneously considers physical, psychological, sociocultural, and interpersonal factors is necessary to guide research and clinical care regarding women’s sexual function. Most longitudinal studies reveal an association between advancing menopause status and worsening sexual function. Psychosocial variables, such as availability of a partner, relationship quality, and psychological functioning, also play an integral role. Future directions for research should include deepening our understanding of how sexual function changes with aging and developing safe and effective approaches to optimizing women’s sexual function with aging. Overall, holistic, biopsychosocial approaches to women’s sexual function are necessary to fully understand and treat this key component of midlife women’s well-being. PMID:27013288

  10. Combinatorial approach to exactly solve the 1D Ising model

    NASA Astrophysics Data System (ADS)

    Seth, Swarnadeep

    2017-01-01

    The Ising model is a well known statistical model which can be solved exactly by various methods, the most familiar being the transfer matrix method. In higher dimensions, the open boundary case can be more difficult to treat than the periodic boundary case, yet physically it is more intuitive to study open boundaries, as they give a closer view of the real system. We have introduced a new method, called the pairing method, to determine the exact partition function for the simplest case, a 1D Ising lattice. This method simplifies the problem's complexities and reduces it to a pure combinatorial problem. The study also reveals that it is possible to apply this pairing method to the case of a 2D square lattice. The obtained results agree perfectly with the values in the literature, and this new approach provides an algorithmic insight for dealing with such problems.
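
    For the open-boundary chain in zero field, the exact result any such method must reproduce is Z = 2^N (cosh K)^(N-1), with K = J/k_B T. A brute-force enumeration check of this standard identity (verification only, not the pairing method itself):

```python
import math
from itertools import product

def partition_brute(N, K):
    """Open-boundary 1D Ising chain, zero field: sum of
    exp(K * sum_i s_i s_{i+1}) over all 2^N spin configurations."""
    Z = 0.0
    for spins in product((-1, 1), repeat=N):
        E = sum(spins[i] * spins[i + 1] for i in range(N - 1))
        Z += math.exp(K * E)
    return Z

def partition_exact(N, K):
    """Closed form for the same chain: Z = 2^N cosh(K)^(N-1)."""
    return 2.0 ** N * math.cosh(K) ** (N - 1)
```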

  11. A relaxation-based approach to damage modeling

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus

    2017-01-01

    Material models that include softening effects due to, for example, damage and localization share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple the local behavior, described for example by internal variables, at a spatial level; this can be done by taking account of the gradient of the internal variable, which yields mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: appropriate modifications of the relaxed (condensed) energy hold the same advantages as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that demonstrate empirically how the new approach works.

  12. Agreement in functional assessment: graphic approaches to displaying respondent effects.

    PubMed

    Haley, Stephen M; Ni, Pengsheng; Coster, Wendy J; Black-Schaffer, Randie; Siebens, Hilary; Tao, Wei

    2006-09-01

    The objective of this study was to examine agreement between respondents on summary scores from items representing three functional content areas (physical and mobility, personal care and instrumental, applied cognition) within the Activity Measure for Postacute Care (AM-PAC). We compare proxy vs. patient report in both hospital and community settings, as represented by intraclass correlation coefficients and two graphic approaches. The authors conducted a prospective cohort study of a convenience sample of adults (n = 47) receiving rehabilitation services either in hospital (n = 31) or community (n = 16) settings. In addition to using intraclass correlation coefficients (ICCs) as indices of agreement, we applied two graphic approaches as complements to help interpret the direction and magnitude of respondent disagreements. We created a "mountain plot" based on a cumulative distribution curve and a "survival-agreement plot" with step functions used in the analysis of survival data. ICCs on summary scores between patient and proxy report were: physical and mobility, ICC = 0.92; personal care and instrumental, ICC = 0.93; and applied cognition, ICC = 0.77. Although combined respondent agreement was acceptable, the graphic approaches helped interpret differences in separate analyses of clinician and family agreement. Graphic analyses allow for a simple interpretation of agreement data and may be useful in determining the meaningfulness of the amount and direction of interrespondent variation.
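
    The ICC figures above come from a standard variance-components calculation. A one-way random-effects ICC(1,1) for two raters can be sketched as follows (the scores are hypothetical, not the study's data):

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for k = 2 raters per subject:
    (MSB - MSW) / (MSB + (k - 1) * MSW), from between- and
    within-subject mean squares."""
    k = 2
    n = len(pairs)
    grand = sum(a + b for a, b in pairs) / (n * k)
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - (a + b) / k) ** 2 + (b - (a + b) / k) ** 2
              for a, b in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical patient/proxy summary-score pairs:
icc = icc_oneway([(52, 50), (61, 63), (45, 44), (70, 72), (58, 57)])
```

    Small within-pair differences relative to the between-subject spread push the ICC toward 1, mirroring the high agreement reported for the physical-mobility and personal-care scores.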

  13. Stochastic model updating utilizing Bayesian approach and Gaussian process model

    NASA Astrophysics Data System (ADS)

    Wan, Hua-Ping; Ren, Wei-Xin

    2016-03-01

    Stochastic model updating (SMU) has been increasingly applied to quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problems solved with optimization usually bring about the issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce the computational cost in two aspects. On the one hand, a fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, an advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from the calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
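
    The MCMC step can be illustrated with a plain random-walk Metropolis sampler on a toy one-parameter problem (flat prior, Gaussian likelihood with known noise); this is the textbook algorithm, not the DRAM variant or the surrogate-assisted setup of the paper, and the data are made up:

```python
import math
import random

data = [2.1, 1.9, 2.3, 2.0, 1.7, 2.2, 2.4, 1.8]  # toy observations

def log_post(theta, sigma=1.0):
    # Flat prior; independent Gaussian likelihood with known sigma,
    # so the log-posterior is the log-likelihood up to a constant.
    return -0.5 * sum((y - theta) ** 2 for y in data) / sigma ** 2

random.seed(1)
theta, samples = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)   # symmetric random walk
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal                         # Metropolis accept
    samples.append(theta)

burned = samples[2000:]                          # discard burn-in
post_mean = sum(burned) / len(burned)
```

    A GPM surrogate would replace the direct `log_post` evaluation with a cheap emulator of the expensive FEM response.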

  14. A functional approach to movement analysis and error identification in sports and physical education

    PubMed Central

    Hossner, Ernst-Joachim; Schiebl, Frank; Göhner, Ulrich

    2015-01-01

    In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is no longer the "ideal" movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that, in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognizing constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioral shaping, to cognitive models of modular effect-related motor control, as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognizing functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors. PMID:26441717

  15. Approaching nanoscale oxides: models and theoretical methods.

    PubMed

    Bromley, Stefan T; Moreira, Ibério de P R; Neyman, Konstantin M; Illas, Francesc

    2009-09-01

    This tutorial review deals with the rapidly developing area of modelling oxide materials at the nanoscale. Top-down and bottom-up modelling approaches and currently used theoretical methods are discussed with the help of a selection of case studies. We show that the critical oxide nanoparticle size required to move beyond the scale where every atom counts to where structural and chemical properties are essentially bulk-like (the scalable regime) strongly depends on the structural and chemical parameters of the material under consideration. This oxide-dependent behaviour with respect to size has fundamental implications for modelling. Strongly ionic materials such as MgO and CeO(2), for example, start to exhibit scalable-to-bulk crystallite-like characteristics for nanoparticles consisting of about 100 ions. For such systems there exists an overlap in nanoparticle size where both top-down and bottom-up theoretical techniques can be applied, and the main problem is the choice of the most suitable computational method. However, for more covalent systems such as TiO(2) or SiO(2) the onset of the scalable regime is still unclear, and for intermediate-sized nanoparticles there exists a gap where neither bottom-up nor top-down modelling is fully adequate. In such difficult cases new efforts to design adequate models are required. Further exacerbating these fundamental methodological concerns are oxide nanosystems exhibiting complex electronic and magnetic behaviour. Due to the need for a simultaneous accurate treatment of the atomistic, electronic and spin degrees of freedom for such systems, the top-down vs. bottom-up separation is still large, and only a few studies currently exist.

  16. A Facile Approach to Functionalize Cell Membrane-Coated Nanoparticles

    PubMed Central

    Zhou, Hao; Fan, Zhiyuan; Lemons, Pelin K.; Cheng, Hao

    2016-01-01

    Convenient strategies to provide cell membrane-coated nanoparticles (CM-NPs) with multi-functionalities beyond the natural function of cell membranes would dramatically expand the application of this emerging class of nanomaterials. We have developed a facile approach to functionalize CM-NPs by chemically modifying live cell membranes prior to CM-NP fabrication using a bifunctional linker, succinimidyl-[(N-maleimidopropionamido)-polyethyleneglycol] ester (NHS-PEG-Maleimide). This method is particularly suitable for conjugating large bioactive molecules such as proteins on cell membranes, as it establishes a strong anchorage and enables control of the linker length, a critical parameter for maximizing the function of anchored proteins. As a proof of concept, we show the conjugation of human recombinant hyaluronidase, PH20 (rHuPH20) on red blood cell (RBC) membranes and demonstrate that a long linker (MW: 3400) is superior to a short linker (MW: 425) for maintaining enzyme activity while minimizing changes to the cell membranes. When the modified membranes were fabricated into RBC membrane-coated nanoparticles (RBCM-NPs), the conjugated rHuPH20 could assist NP diffusion more efficiently than free rHuPH20 in matrix-mimicking gels and the pericellular hyaluronic acid matrix of PC3 prostate cancer cells. After quenching the unreacted chemical groups with polyethylene glycol, we demonstrated that the rHuPH20 modification does not reduce the ultra-long blood circulation time of RBCM-NPs. Therefore, this surface engineering approach provides a platform to functionalize CM-NPs without sacrificing the natural function of cell membranes. PMID:27217834

  17. The universal function in color dipole model

    NASA Astrophysics Data System (ADS)

    Jalilian, Z.; Boroun, G. R.

    2017-10-01

    In this work we review the color dipole model and recall the properties of saturation and geometrical scaling in this model. Our primary aim is to determine the exact universal function in terms of the introduced scaling variable at distances different from the saturation radius. By including the quark mass in the calculation, we numerically compute the contribution of heavy-quark production at small x to the total structure function via the fraction of universal functions, and we show that geometrical scaling is established with respect to the scaling variable introduced in this study.

  18. A comprehensive approach to age-dependent dosimetric modeling

    SciTech Connect

    Leggett, R.W.; Cristy, M.; Eckerman, K.F.

    1986-01-01

    In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.

  19. Coherent states/density functional theory approach to molecular dynamics

    NASA Astrophysics Data System (ADS)

    Tsereteli, Kakha; Yan, Yun-an; Morales, Jorge A.

    2006-03-01

    We present a combined coherent states (CS)/density functional theory approach to molecular dynamics within the electron nuclear dynamics framework. Nuclei are described by a product of narrow, frozen Gaussian wave packets that is approximately separable into translational, rotational, and vibrational CS parts. Electrons are described by a single-determinantal Thouless CS in a time-dependent Kohn-Sham fashion. This novel approach improves several features of the Car-Parrinello method by providing an ab initio CS Lagrangian, a quasi-classical CS description for the nuclei, and a non-redundant representation of a general electronic single-determinantal state. Preliminary simulations of the H(+) + H(2) reaction at E(Lab) = 30 eV are also presented.

  20. Sensorimotor integration for functional recovery and the Bobath approach.

    PubMed

    Levin, Mindy F; Panturin, Elia

    2011-04-01

    Bobath therapy is used to treat patients with neurological disorders. Bobath practitioners use hands-on approaches to elicit and reestablish typical movement patterns through therapist-controlled sensorimotor experiences within the context of task accomplishment. One aspect of Bobath practice, the recovery of sensorimotor function, is reviewed within the framework of current motor control theories. We focus on the role of sensory information in movement production, the relationship between posture and movement and concepts related to motor recovery and compensation with respect to this therapeutic approach. We suggest that a major barrier to the evaluation of the therapeutic effectiveness of the Bobath concept is the lack of a unified framework for both experimental identification and treatment of neurological motor deficits. More conclusive analysis of therapeutic effectiveness requires the development of specific outcomes that measure movement quality.

  1. An ecological approach to language development: an alternative functionalism.

    PubMed

    Dent, C H

    1990-11-01

    I argue for a new functionalist approach to language development, an ecological approach. A realist orientation is used that locates the causes of language development neither in the child nor in the language environment but in the functioning of perceptual systems that detect language-world relationships and use them to guide attention and action. The theory requires no concept of innateness, thus avoiding problems inherent in either the innate ideas or the genes-as-causal-programs explanations of the source of structure in language. An ecological explanation of language is discussed in relation to concepts and language, language as representation, problems in early word learning, metaphor, and syntactic development. Finally, problems incurred in using the idea of innateness are summarized: History prior to the chosen beginning point is ignored, data on organism-environment mutuality are not collected, and the explanation claims no effect of learning, which cannot be tested empirically.

  2. A multi-frequency receiver function inversion approach for crustal velocity structure

    NASA Astrophysics Data System (ADS)

    Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian

    2017-05-01

    In order to constrain the crustal velocity structures better, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimizing algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms show advantages in recovering small-scale velocity structures. Based on the synthetic tests with multi-frequency receiver function waveforms, the proposed approach can constrain both long- and short-wavelength characteristics of the crustal velocity structures simultaneously. Inversions with real data are also conducted for the seismic stations of KMNB in southeast China and HYB in the Indian continent, where crustal structures have been well studied by former researchers. Comparisons of the velocity models inverted in previous studies and in ours suggest good consistency, but our proposed approach achieves better waveform fits with fewer model parameters. Comprehensive tests with synthetic and real data suggest that the proposed inversion approach with multi-frequency receiver functions is effective and robust in inverting the crustal velocity structures.
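
    The inversion strategy described here, a global Differential Evolution search driven by a joint multi-frequency waveform misfit, can be sketched in a few lines. The forward model below is a toy stand-in (Gaussian pulses whose positions depend on two hypothetical layer parameters), not a real receiver-function code:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a receiver-function forward model: two "layer" parameters
# v map to Gaussian pulse positions; the pulse width mimics the frequency band
# (low frequency -> broad pulses, high frequency -> narrow pulses).
t = np.linspace(0.0, 10.0, 400)

def forward(v, freq):
    width = 1.0 / freq
    return sum(np.exp(-(t - 2.0 * vi) ** 2 / (2.0 * width ** 2)) for vi in v)

true_v = np.array([1.2, 2.7])
freqs = [0.5, 2.0]                              # low- and high-frequency bands
data = {f: forward(true_v, f) for f in freqs}

def misfit(v):
    # joint multi-frequency waveform misfit
    return sum(np.sum((forward(v, f) - data[f]) ** 2) for f in freqs)

# global search with Differential Evolution, as in the proposed approach
result = differential_evolution(misfit, bounds=[(0.5, 4.0)] * 2, seed=0, tol=1e-10)
recovered = np.sort(result.x)
```

    The low-frequency term produces broad pulses that constrain the search coarsely, while the high-frequency term sharpens the solution, mirroring the division of labor between the two bands described in the abstract.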

  3. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    NASA Astrophysics Data System (ADS)

    Nocera, A.; Alvarez, G.

    2016-11-01

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. This paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper then studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
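
    The Krylov-space idea, approximating the correction vector c(w) = (w + E0 + i*eta - H)^(-1) A|gs> in a small Lanczos basis seeded by A|gs>, can be illustrated on a dense toy Hamiltonian. This is a generic sketch, not the authors' DMRG implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 60, 40
A = rng.standard_normal((n, n))
H = (A + A.T) / (2 * np.sqrt(n))          # toy symmetric "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]             # ground-state energy
seed = rng.standard_normal(n)             # stands in for A|gs>

# Lanczos with full reorthogonalization: builds an orthonormal Krylov basis Q
# and the tridiagonal projection T = Q^T H Q.
Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
Q[:, 0] = seed / np.linalg.norm(seed)
for j in range(m):
    w = H @ Q[:, j]
    alpha[j] = Q[:, j] @ w
    w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalization
    if j < m - 1:
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Correction vector c(w) = (w + E0 + i*eta - H)^{-1} |seed>, solved in the
# m-dimensional Krylov space instead of the full Hilbert space.
wfreq, eta = 0.5, 0.2
e1 = np.zeros(m); e1[0] = np.linalg.norm(seed)
c_kry = Q @ np.linalg.solve((wfreq + E0 + 1j * eta) * np.eye(m) - T, e1)

c_exact = np.linalg.solve((wfreq + E0 + 1j * eta) * np.eye(n) - H, seed)
rel_err = np.linalg.norm(c_kry - c_exact) / np.linalg.norm(c_exact)
```

    Solving the m x m tridiagonal system is far cheaper than the full linear solve, which is the point of the Krylov-space alternative to conjugate-gradient or functional-minimization correction vectors.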

  4. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    None, None

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  5. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    SciTech Connect

    None, None

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  6. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors.

    PubMed

    Nocera, A; Alvarez, G

    2016-11-01

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. This paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper then studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  7. Hybrid input function estimation using a single-input-multiple-output (SIMO) approach

    NASA Astrophysics Data System (ADS)

    Su, Yi; Shoghi, Kooresh I.

    2009-02-01

    A hybrid blood input function (BIF) model that incorporates regions of interest (ROIs) based peak estimation and a two-exponential tail model was proposed to describe the blood input function. The hybrid BIF model was applied to the single-input-multiple-output (SIMO) optimization based approach for BIF estimation using time-activity curves (TACs) obtained from ROIs defined at left ventricle (LV) blood pool and myocardium regions of dynamic PET images. The proposed BIF estimation method was applied with 0, 1 and 2 blood samples as constraints for BIF estimation using simulated small animal PET data. The relative percentage difference of the area-under-curve (AUC) measurement between the estimated BIF and the true BIF was calculated to evaluate the BIF estimation accuracy. SIMO based BIF estimation using Feng's input function model was also applied for comparison. The hybrid method provided improved BIF estimation in terms of both mean accuracy and variability compared to Feng's model based BIF estimation in our simulation study. When two blood samples were used as constraints, the percentage BIF estimation error was 0.82 +/- 4.32% for the hybrid approach and 4.63 +/- 10.67% for the Feng's model based approach. Using the hybrid BIF, improved kinetic parameter estimation was also obtained.
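
    The tail part of the hybrid model is a plain two-exponential fit; the peak would come directly from the LV ROI samples. A minimal sketch with invented numbers (not the study's data), evaluated with the AUC-based accuracy measure the abstract describes:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-exponential tail model for the blood input function (BIF);
# parameters and noise level below are hypothetical.
def tail(t, a1, l1, a2, l2):
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

def auc(y, t):
    # trapezoidal area under the curve
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

t_tail = np.linspace(1.0, 60.0, 80)             # minutes after the peak
true_p = (30.0, 2.0, 5.0, 0.05)
rng = np.random.default_rng(0)
noisy = tail(t_tail, *true_p) * (1.0 + 0.02 * rng.standard_normal(t_tail.size))

popt, _ = curve_fit(tail, t_tail, noisy, p0=(20.0, 1.0, 2.0, 0.1), maxfev=20000)

# relative AUC difference between fitted and true BIF tails, in percent
auc_true = auc(tail(t_tail, *true_p), t_tail)
pct_diff = 100.0 * abs(auc(tail(t_tail, *popt), t_tail) - auc_true) / auc_true
```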

  8. Approach to kinetic energy density functionals: Nonlocal terms with the structure of the von Weizsaecker functional

    SciTech Connect

    Garcia-Aldea, David; Alvarellos, J. E.

    2008-02-15

    We propose a kinetic energy density functional scheme with nonlocal terms based on the von Weizsaecker functional, instead of the more traditional approach where the nonlocal terms have the structure of the Thomas-Fermi functional. The proposed functionals recover the exact kinetic energy and reproduce the linear response function of homogeneous electron systems. In order to assess their quality, we have tested the total kinetic energies as well as the kinetic energy density for atoms. The results show that these nonlocal functionals give results as good as the most sophisticated functionals in the literature. The proposed scheme for constructing the functionals represents a step forward in the field of fully nonlocal kinetic energy functionals, because they give better local behavior than semilocal functionals while yielding accurate results for total kinetic energies. Moreover, the functionals can be evaluated as a single integral in momentum space if an adequate reference density is defined, so quasilinear scaling of the computational cost can be achieved.

  9. Prediction of Chemical Function: Model Development and ...

    EPA Pesticide Factsheets

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure were developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
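
    The modeling step described above, a cross-validated random-forest classifier mapping structure-derived descriptors to functional roles, looks roughly as follows. The descriptors and labels below are synthetic stand-ins, not the curated ExpoCast data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: each functional role gets its own cluster in a
# 4-dimensional descriptor space (imagine logP, MW, TPSA, ...).
rng = np.random.default_rng(0)
roles = ["solvent", "plasticizer", "fragrance"]
X, y = [], []
for k, role in enumerate(roles):
    X.append(rng.normal(loc=3.0 * k, scale=1.0, size=(60, 4)))
    y += [role] * 60
X = np.vstack(X)

# random-forest classifier, evaluated in a cross-validated manner
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)       # 5-fold cross-validation
mean_acc = scores.mean()
```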

  11. The fruits of a functional approach for psychological science.

    PubMed

    Stewart, Ian

    2016-02-01

    The current paper introduces relational frame theory (RFT) as a functional contextual approach to complex human behaviour and examines how this theory has contributed to our understanding of several key phenomena in psychological science. I will first briefly outline the philosophical foundation of RFT and then examine its conceptual basis and core concepts. Thereafter, I provide an overview of the empirical findings and applications that RFT has stimulated in a number of key domains such as language development, linguistic generativity, rule-following, analogical reasoning, intelligence, theory of mind, psychopathology and implicit cognition.

  12. Functional model of biological neural networks

    PubMed Central

    2010-01-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks. PMID:22132040

  13. A mixed-model moving-average approach to geostatistical modeling in stream networks.

    PubMed

    Peterson, Erin E; Ver Hoef, Jay M

    2010-03-01

    Spatial autocorrelation is an intrinsic characteristic in freshwater stream environments where nested watersheds and flow connectivity may produce patterns that are not captured by Euclidean distance. Yet, many common autocovariance functions used in geostatistical models are statistically invalid when Euclidean distance is replaced with hydrologic distance. We use simple worked examples to illustrate a recently developed moving-average approach used to construct two types of valid autocovariance models that are based on hydrologic distances. These models were designed to represent the spatial configuration, longitudinal connectivity, discharge, and flow direction in a stream network. They also exhibit a different covariance structure than Euclidean models and represent a true difference in the way that spatial relationships are represented. Nevertheless, the multi-scale complexities of stream environments may not be fully captured using a model based on one covariance structure. We advocate using a variance component approach, which allows a mixture of autocovariance models (Euclidean and stream models) to be incorporated into a single geostatistical model. As an example, we fit and compare "mixed models," based on multiple covariance structures, for a biological indicator. The mixed model proves to be a flexible approach because many sources of information can be incorporated into a single model.
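
    The variance-component idea can be made concrete in a few lines: any covariance built by the moving-average construction has the form B B^T and is therefore automatically valid, and a sum of valid components (plus a nugget) is itself a valid model. The "stream" design matrix B below is a random stand-in for a real network kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
coords = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

C_euclid = np.exp(-d / 3.0)                 # exponential model, valid in R^2
B = 0.3 * rng.standard_normal((n, 20))      # stand-in moving-average weights
C_stream = B @ B.T                          # valid (PSD) by construction

# variance-component mixture: Euclidean + stream + nugget
sigma_e, sigma_s, nugget = 1.0, 0.8, 0.1
C_mixed = sigma_e**2 * C_euclid + sigma_s**2 * C_stream + nugget * np.eye(n)

min_eig = np.linalg.eigvalsh(C_mixed).min() # must be positive
```

    This is why the mixture sidesteps the validity problem the abstract raises: replacing Euclidean distance by hydrologic distance inside a standard autocovariance can break positive-definiteness, whereas the moving-average components are valid by construction.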

  14. Novel metal resistance genes from microorganisms: a functional metagenomic approach.

    PubMed

    González-Pastor, José E; Mirete, Salvador

    2010-01-01

    Most of the known metal resistance mechanisms are based on studies of cultured microorganisms, and the abundant uncultured fraction could be an important source of genes responsible for uncharacterized resistance mechanisms. A functional metagenomic approach was selected to recover metal resistance genes from the rhizosphere microbial community of an acid-mine drainage (AMD)-adapted plant, Erica andevalensis, from Rio Tinto, Spain. A total of 13 nickel-resistant clones were isolated and analyzed, encoding hypothetical or conserved hypothetical proteins of uncertain functions, or well-characterized proteins, but not previously reported to be related to nickel resistance. The resistance clones were classified into two groups according to their nickel accumulation properties: those preventing or those favoring metal accumulation. Two clones encoding putative ABC transporter components and a serine O-acetyltransferase were found as representatives of each group, respectively.

  15. Forward and reverse transfer function model synthesis

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.

    1985-01-01

    A process for synthesizing a mathematical model for a linear mechanical system using the forward and reverse Fourier transform functions is described. The differential equation for a system model is given. The Bode conversion of the differential equation, and the frequency and time-domain optimization matching of the model to the forward and reverse transform functions using the geometric simplex method of Nelder and Mead (1965) are examined. The effect of the window function on the linear mechanical system is analyzed. The model is applied to two examples; in one the signal damps down before the end of the time window and in the second the signal has significant energy at the end of the time window.
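
    The optimization-matching step can be sketched for a hypothetical single-degree-of-freedom system: fit the model's frequency response H(w) = 1/(k - m*w^2 + i*c*w) to "measured" transform data with the Nelder-Mead simplex, the same geometric simplex method cited above:

```python
import numpy as np
from scipy.optimize import minimize

# frequency grid and one-mode model frequency response (illustrative values)
w = np.linspace(0.1, 20.0, 200)

def H(p):
    m, c, k = p
    return 1.0 / (k - m * w ** 2 + 1j * c * w)

true_p = np.array([1.0, 0.4, 25.0])             # mass, damping, stiffness
data = H(true_p)                                # stands in for measured FRF

def cost(p):
    # least-squares mismatch between model and measured frequency response
    return np.sum(np.abs(H(p) - data) ** 2)

res = minimize(cost, x0=[0.8, 0.6, 20.0], method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-14, "maxiter": 10000})
```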

  17. A tantalum strength model using a multiscale approach: version 2

    SciTech Connect

    Becker, R; Arsenlis, A; Hommes, G; Marian, J; Rhee, M; Yang, L H

    2009-09-21

    A continuum strength model for tantalum was developed in 2007 using a multiscale approach. This was our first attempt at connecting simulation results from atomistic to continuum length scales, and much was learned that we were not able to incorporate into the model at that time. The tantalum model described in this report represents a second cut at pulling together multiscale simulation results into a continuum model. Insight gained in creating previous multiscale models for tantalum and vanadium was used to guide the model construction and functional relations for the present model. While the basic approach follows that of the vanadium model, there are significant departures. Some of the recommendations from the vanadium report were followed, but not all. Results from several new analysis techniques have not yet been incorporated due to technical difficulties. Molecular dynamics simulations of single dislocation motion at several temperatures suggested that the thermal activation barrier was temperature dependent. This dependency required additional temperature functions be included within the assumed Arrhenius relation. The combination of temperature-dependent functions created a complex model with a non-unique parameterization and extra model constants. The added complexity had no tangible benefits. The recommendation was to abandon the strict Arrhenius form and create a simpler curve fit to the molecular dynamics data for shear stress versus dislocation velocity. Functions relating dislocation velocity and applied shear stress were constructed for vanadium for both edge and screw dislocations. However, an attempt to formulate a robust continuum constitutive model for vanadium using both dislocation populations was unsuccessful; the level of coupling achieved was inadequate to constrain the dislocation evolution properly. Since the behavior of BCC materials is typically assumed to be dominated by screw dislocations, the constitutive relations were ultimately
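
    The recommended "simple curve fit" step can be sketched as follows. The saturating functional form and the synthetic MD-like data are our own illustrative choices, not the form used in the report:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical saturating relation between applied shear stress tau and
# dislocation velocity v: v(tau) = v_max * tau / (tau0 + tau).
def v_model(tau, v_max, tau0):
    return v_max * tau / (tau0 + tau)

tau = np.linspace(10.0, 400.0, 25)              # applied shear stress, MPa
rng = np.random.default_rng(0)
# synthetic "MD data" with 3% multiplicative scatter
v_md = v_model(tau, 2000.0, 150.0) * (1 + 0.03 * rng.standard_normal(tau.size))

# direct curve fit replaces the over-parameterized Arrhenius construction
popt, _ = curve_fit(v_model, tau, v_md, p0=(1500.0, 100.0))
v_max_fit, tau0_fit = popt
```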

  18. Direction-dependent learning approach for radial basis function networks.

    PubMed

    Singla, Puneet; Subbarao, Kamesh; Junkins, John L

    2007-01-01

    Direction-dependent scaling, shaping, and rotation of Gaussian basis functions are introduced for maximal trend sensing with minimal parameter representations for input output approximation. It is shown that shaping and rotation of the radial basis functions helps in reducing the total number of function units required to approximate any given input-output data, while improving accuracy. Several alternate formulations that enforce minimal parameterization of the most general radial basis functions are presented. A novel "directed graph" based algorithm is introduced to facilitate intelligent direction based learning and adaptation of the parameters appearing in the radial basis function network. Further, a parameter estimation algorithm is incorporated to establish starting estimates for the model parameters using multiple windows of the input-output data. The efficacy of direction-dependent shaping and rotation in function approximation is evaluated by modifying the minimal resource allocating network and considering different test examples. The examples are drawn from recent literature to benchmark the new algorithm versus existing methods.
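
    The core object in this approach, a Gaussian basis function with direction-dependent shaping and rotation, can be written compactly as phi(x) = exp(-0.5 (x-c)^T S^{-1} (x-c)) with S = R diag(s1^2, s2^2) R^T. A minimal 2-D sketch (our own parameter values):

```python
import numpy as np

def rotated_rbf(x, center, s1, s2, theta):
    # rotation matrix R and shaped covariance S = R diag(s1^2, s2^2) R^T
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = R @ np.diag([s1 ** 2, s2 ** 2]) @ R.T
    d = x - center
    return np.exp(-0.5 * d @ np.linalg.solve(S, d))

c = np.zeros(2)
# with s1 >> s2 and theta = 45 deg, the unit is broad along the rotated
# principal axis and sharp across it -- the directional trend sensing that
# lets fewer units approximate a given input-output map
along  = rotated_rbf(np.array([1.0, 1.0]) / np.sqrt(2), c, 2.0, 0.2, np.pi / 4)
across = rotated_rbf(np.array([1.0, -1.0]) / np.sqrt(2), c, 2.0, 0.2, np.pi / 4)
```

    An isotropic Gaussian would need many overlapping units to cover the same elongated ridge that a single rotated unit captures.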

  19. Elucidation of SESANS correlation functions through model

    SciTech Connect

    Shew, Chwen-Yang; Chen, Wei-Ren

    2012-01-01

    Several single-modal Debye correlation functions are closely examined to elucidate the behavior of their corresponding SESANS (Spin Echo Small Angle Neutron Scattering) correlation functions. We find that the upper bound of a Debye correlation function and of its SESANS correlation function is identical. For discrete Debye correlation functions, the peak of the SESANS correlation function emerges at their first discrete point, whereas for continuous Debye correlation functions with greater width, the peak position shifts to a greater value. In both cases, the intensity and shape of the peak of the SESANS correlation function are determined by the width of the normalized Debye correlation functions. In the application, we mimic the intramolecular and intermolecular Debye correlation functions of liquids composed of interacting particles by using simple models to elucidate their competition in the SESANS correlation function. Our calculations show that the position of the first minimum of the SESANS correlation function shifts to a smaller value as intermolecular attraction or correlation is enhanced. The minimum value can be positive or negative, and the positive values are observed for the cases equivalent to stronger intermolecular attraction, consistent with literature results based on more sophisticated liquid state theory and simulations.
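
    The link between the two functions is a line projection: in the usual convention (our notation, not necessarily the paper's), G(z) = ∫ gamma(sqrt(z² + u²)) du / ∫ gamma(u) du, which is why the normalized SESANS correlation shares the Debye function's upper bound at z = 0. For the exponential Debye function gamma(r) = exp(-r/xi) this projection has the closed form (z/xi) K1(z/xi), which the sketch below checks numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

xi = 2.0
gamma = lambda r: np.exp(-r / xi)       # exponential Debye correlation function

def G(z):
    # normalized line projection of gamma along the beam direction
    num, _ = quad(lambda u: gamma(np.sqrt(z * z + u * u)), 0, np.inf)
    den, _ = quad(gamma, 0, np.inf)
    return num / den

zs = np.array([0.5, 1.0, 2.0, 4.0])
numeric = np.array([G(z) for z in zs])
analytic = (zs / xi) * k1(zs / xi)      # known closed form for this gamma
```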

  20. Vertebrate Membrane Proteins: Structure, Function, and Insights from Biophysical Approaches

    PubMed Central

    MÜLLER, DANIEL J.; WU, NAN; PALCZEWSKI, KRZYSZTOF

    2008-01-01

    Membrane proteins are key targets for pharmacological intervention because they are vital for cellular function. Here, we analyze recent progress made in the understanding of the structure and function of membrane proteins with a focus on rhodopsin and development of atomic force microscopy techniques to study biological membranes. Membrane proteins are compartmentalized to carry out extra- and intracellular processes. Biological membranes are densely populated with membrane proteins that occupy approximately 50% of their volume. In most cases membranes contain lipid rafts, protein patches, or paracrystalline formations that lack the higher-order symmetry that would allow them to be characterized by diffraction methods. Despite many technical difficulties, several crystal structures of membrane proteins that illustrate their internal structural organization have been determined. Moreover, high-resolution atomic force microscopy, near-field scanning optical microscopy, and other lower resolution techniques have been used to investigate these structures. Single-molecule force spectroscopy tracks interactions that stabilize membrane proteins and those that switch their functional state; this spectroscopy can be applied to locate a ligand-binding site. Recent development of this technique also reveals the energy landscape of a membrane protein, defining its folding, reaction pathways, and kinetics. Future development and application of novel approaches during the coming years should provide even greater insights to the understanding of biological membrane organization and function. PMID:18321962

  1. Green's function approach for quantum graphs: An overview

    NASA Astrophysics Data System (ADS)

    Andrade, Fabiano M.; Schmidt, A. G. M.; Vicentini, E.; Cheng, B. K.; da Luz, M. G. E.

    2016-08-01

    Here we review the many aspects and distinct phenomena associated with quantum dynamics on general graph structures. To do so, we discuss this class of systems within the energy-domain Green's function (G) framework. This approach is particularly interesting because G can be written as a sum over classical-like paths, where local quantum effects are taken into account through the scattering matrix elements (basically, transmission and reflection amplitudes) defined on each one of the graph vertices. Hence, the exact G has the functional form of a generalized semiclassical formula, which through different calculation techniques (addressed in detail here) can always be cast into a closed analytic expression. This allows arbitrarily large (although finite) graphs to be solved exactly in a fast, recursive way. Using the Green's function method, we survey many properties of open and closed quantum graphs: scattering solutions for the former and eigenspectrum and eigenstates for the latter, also considering quasi-bound states. Concrete examples, like cube, binary tree and Sierpiński-like topologies, are presented. Throughout the work, possible distinct applications of the Green's function methods for quantum graphs are outlined.

  2. Making metals transparent: a circuit model approach.

    PubMed

    Molero, Carlos; Medina, Francisco; Rodríguez-Berral, Raúl; Mesa, Francisco

    2016-05-16

    Solid metal films are well known to be opaque to electromagnetic waves over a wide frequency range, from low frequencies to optics. High values of the conductivity at relatively low frequencies or negative values of the permittivity in the optical regime provide the macroscopic explanation for such opacity. In the microwave range, even extremely thin metal layers (much smaller than the skin depth at the operation frequency) reflect most of the impinging electromagnetic energy, thus precluding significant transmission. However, a drastic resonant narrow-band enhancement of the transparency has recently been reported. The quasi-transparent window is opened by placing the metal film between two symmetrically arranged and closely spaced copper strip gratings. This letter proposes an analytical circuit model that yields a simple explanation of this unexpected phenomenon. The proposed approach avoids the use of lengthy numerical calculations and suggests how the transmissivity can be controlled and enhanced by manipulating the values of the electrical parameters of the associated circuit model.
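
The circuit-model viewpoint can be illustrated with a standard textbook building block: a shunt impedance loading a transmission line. This is only a toy sketch with hypothetical element values, not the authors' actual equivalent circuit; it shows how a very small shunt impedance (a thin film) kills transmission, while a shunt branch whose impedance diverges near a resonance opens a transparency window.

```python
def s21_shunt(z, z0=377.0):
    """Transmission coefficient of a shunt impedance z loading a
    transmission line of characteristic impedance z0 (free space here)."""
    return 1.0 / (1.0 + z0 / (2.0 * z))

def resonant_branch(omega, omega0, L, r_film):
    """Toy shunt branch: film resistance in series with a parallel-LC tank
    (impedance ~ j*omega*L / (1 - (omega/omega0)^2)) whose impedance
    diverges at omega0, so the branch stops shorting the line there."""
    return r_film + 1j * omega * L / (1.0 - (omega / omega0) ** 2)
```

With a bare film impedance of 0.1 ohm, |S21| is below 0.001 (opaque), whereas just off the tank resonance the same line is almost fully transparent.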

  3. Understanding human functioning using graphical models

    PubMed Central

    2010-01-01

    Background Functioning and disability are universal human experiences. However, our current understanding of functioning from a comprehensive perspective is limited. The development of the International Classification of Functioning, Disability and Health (ICF) on the one hand and recent developments in graphical modeling on the other hand might be combined and open the door to a more comprehensive understanding of human functioning. The objective of our paper therefore is to explore how graphical models can be used in the study of ICF data for a range of applications. Methods We show the applicability of graphical models on ICF data for different tasks: visualization of the dependence structure of the data set, dimension reduction, and comparison of subpopulations. Moreover, we further developed and applied recent findings in causal inference using graphical models to estimate bounds on intervention effects in an observational study with many variables and without knowing the underlying causal structure. Results In each field, graphical models could be applied giving results of high face-validity. In particular, graphical models could be used for visualization of functioning in patients with spinal cord injury. The resulting graph consisted of several connected components which can be used for dimension reduction. Moreover, we found that the differences in the dependence structures between subpopulations were relevant and could be systematically analyzed using graphical models. Finally, when estimating bounds on causal effects of ICF categories on general health perceptions among patients with chronic health conditions, we found that the five ICF categories that showed the strongest effect were plausible. Conclusions Graphical models are a flexible tool and lend themselves to a wide range of applications. In particular, studies involving ICF data seem to be suited for analysis using graphical models. PMID:20149230
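
A common way to estimate the dependence structure behind a Gaussian graphical model, as used for the visualization task above, is to read partial correlations off the inverse sample covariance (precision) matrix: near-zero partial correlation suggests no edge between two variables given all the others. A minimal sketch (generic, not the paper's exact estimator):

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation between each pair of columns of X given all
    other columns, obtained from the precision matrix: pcor_ij =
    -prec_ij / sqrt(prec_ii * prec_jj)."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor
```

Thresholding the absolute partial correlations then yields the edges of the dependence graph.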

  4. An approach to the residence time distribution for stochastic multi-compartment models.

    PubMed

    Yu, Jihnhee; Wehrly, Thomas E

    2004-10-01

    Stochastic compartmental models are widely used in modeling processes such as drug kinetics in biological systems. This paper considers the distribution of the residence times for stochastic multi-compartment models, especially systems with non-exponential lifetime distributions. The paper first derives the moment generating function of the bivariate residence time distribution for the two-compartment model with general lifetimes and approximates the density of the residence time using the saddlepoint approximation. Then, it extends the distributional approach to the residence time for multi-compartment semi-Markov models combining the cofactor rule for a single destination and the analytic approach to the two-compartment model. This approach provides a complete specification of the residence time distribution based on the moment generating function and thus facilitates an easier calculation of high-order moments than the approach using the coefficient matrix. Applications to drug kinetics demonstrate the simplicity and usefulness of this approach.
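
The saddlepoint step mentioned above can be sketched generically: given the cumulant generating function K of a residence time, the density at x is approximated from the saddlepoint s* solving K'(s*) = x. The gamma lifetime below is a hypothetical choice for illustration, not the paper's two-compartment moment generating function.

```python
import math

def saddlepoint_density(x, K, dK, d2K, s_lo=-50.0, s_hi=0.999):
    """Saddlepoint approximation of a density from its cumulant generating
    function K: f(x) ~ exp(K(s*) - s*x) / sqrt(2*pi*K''(s*)), where the
    saddlepoint s* solves K'(s*) = x (found by bisection; K' is increasing)."""
    lo, hi = s_lo, s_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dK(mid) < x:
            lo = mid
        else:
            hi = mid
    s = 0.5 * (lo + hi)
    return math.exp(K(s) - s * x) / math.sqrt(2.0 * math.pi * d2K(s))

# Gamma(alpha) residence time (hypothetical example): K(s) = -alpha*ln(1 - s)
alpha = 3.0
K = lambda s: -alpha * math.log(1.0 - s)
dK = lambda s: alpha / (1.0 - s)
d2K = lambda s: alpha / (1.0 - s) ** 2
```

For the gamma case the saddlepoint density is known to be exact up to a normalizing constant, which makes it a convenient sanity check.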

  5. a New Approach of Digital Bridge Surface Model Generation

    NASA Astrophysics Data System (ADS)

    Ju, H.

    2012-07-01

    Bridge areas present difficulties for orthophoto generation, and to avoid "collapsed" bridges in the orthoimage, operator assistance is required to create the precise DBM (Digital Bridge Model), which is subsequently used for the orthoimage generation. In this paper, a new approach to DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for the DBM generation. First, a coarse DBM is produced from LiDAR data. Then, a robust co-registration between LiDAR intensity and the aerial image using the orientation constraint is performed. The coarse-to-fine hybrid co-registration approach includes LPFFT (Log-Polar Fast Fourier Transform), Harris corners, PDF (Probability Density Function) feature-descriptor mean-shift matching, and RANSAC (RANdom SAmple Consensus) as main components. After that, the bridge ROI (Region Of Interest) from the LiDAR data domain is projected to the aerial image domain as the ROI in the aerial image. Hough-transform linear features are extracted in the aerial image ROI. For a straight bridge, a 1st-order polynomial function is used, whereas for a curved bridge a 2nd-order polynomial function is used to fit the endpoints of the Hough linear features. The last step is to transform the smooth bridge boundaries from the aerial image back to the LiDAR data domain and merge them with the coarse DBM. Based on our experiments, this new approach is capable of providing a precise DBM which can be further merged with the DTM (Digital Terrain Model) derived from LiDAR data to obtain a precise DSM (Digital Surface Model). Such a precise DSM can be used to improve the orthophoto product quality.
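
The boundary-fitting step (1st-order polynomial for straight bridges, 2nd-order for curved ones) can be sketched directly with a least-squares polynomial fit over the Hough segment endpoints; this is a generic illustration, not the authors' implementation.

```python
import numpy as np

def fit_bridge_boundary(endpoints, curved=False):
    """Fit the endpoints of Hough-extracted line segments with a 1st-order
    (straight bridge) or 2nd-order (curved bridge) polynomial y = p(x)."""
    x, y = np.asarray(endpoints, dtype=float).T
    return np.polynomial.Polynomial.fit(x, y, deg=2 if curved else 1)
```

The returned polynomial can then be sampled to produce the smooth boundary that is transformed back into the LiDAR data domain.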

  6. A Functional Approach to Deconvolve Dynamic Neuroimaging Data

    PubMed Central

    Jiang, Ci-Ren; Aston, John A. D.; Wang, Jane-Ling

    2016-01-01

    Positron emission tomography (PET) is an imaging technique which can be used to investigate chemical changes in human biological processes such as cancer development or neurochemical reactions. Most dynamic PET scans are currently analyzed based on the assumption that linear first-order kinetics can be used to adequately describe the system under observation. However, there has recently been strong evidence that this is not the case. To provide an analysis of PET data which is free from this compartmental assumption, we propose a nonparametric deconvolution and analysis model for dynamic PET data based on functional principal component analysis. This yields flexibility in the possible deconvolved functions while still performing well when a linear compartmental model setup is the true data-generating mechanism. As the deconvolution needs to be performed on only a relatively small number of basis functions rather than voxel by voxel in the entire three-dimensional volume, the methodology is robust to typical brain-imaging noise levels while also being computationally efficient. The new methodology is investigated through simulations in both one-dimensional functions and 2D images and also applied to a neuroimaging study whose goal is the quantification of opioid receptor concentration in the brain. PMID:27226673
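
The basis-reduction idea, working with a few principal component functions instead of every voxel curve, can be sketched with a plain SVD-based functional PCA on densely sampled curves. This is only a generic FPCA sketch (the paper's deconvolution machinery is more involved):

```python
import numpy as np

def fpca(curves, n_components=2):
    """Functional PCA of densely sampled curves (rows = scans, columns =
    time grid): returns the mean function, the leading eigenfunctions
    (right singular vectors), per-curve scores, and explained-variance ratios."""
    mean = curves.mean(axis=0)
    U, S, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    explained = S ** 2 / (S ** 2).sum()
    return mean, Vt[:n_components], U[:, :n_components] * S[:n_components], explained
```

Downstream analysis (here, deconvolution) then only has to operate on the few retained eigenfunctions and scores.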

  7. Covariant chiral kinetic equation in the Wigner function approach

    NASA Astrophysics Data System (ADS)

    Gao, Jian-hua; Pu, Shi; Wang, Qun

    2017-07-01

    The covariant chiral kinetic equation (CCKE) is derived from the four-dimensional Wigner function by an improved perturbative method under the static equilibrium conditions. The chiral kinetic equation in three dimensions can be obtained by integration over the time component of the four-momentum. There is freedom to add more terms to the CCKE allowed by conservation laws. In the derivation of the three-dimensional equation, there is also freedom to choose coefficients of some terms in dx0/dτ and dx/dτ [τ is a parameter along the worldline, and (x0, x) denotes the time-space position of a particle] whose three-momentum integrals are vanishing. So the three-dimensional chiral kinetic equation derived from the CCKE is not uniquely determined in the current approach. The key assumption of our approach is the perturbation in powers of space-time derivative and constant electromagnetic field strength tensor under the static equilibrium conditions. To go beyond the current approach and overcome these problems one needs a new way of building up the three-dimensional chiral kinetic equation from the CCKE or directly from covariant Wigner equations.

  8. A Network Approach to Rare Disease Modeling

    NASA Astrophysics Data System (ADS)

    Ghiassian, Susan; Rabello, Sabrina; Sharma, Amitabh; Wiest, Olaf; Barabasi, Albert-Laszlo

    2011-03-01

    Network approaches have been widely used to better understand different areas of natural and social sciences. Network science has had a particularly great impact on the study of biological systems. In this project, biological networks were used to identify candidate drugs as potential treatments for rare diseases. Developing new drugs for each of the more than 2000 rare diseases (as defined by ORPHANET) is prohibitively expensive. Disease proteins do not function in isolation but in cooperation with other interacting proteins. Research on FDA-approved drugs has shown that most drugs do not target the disease protein itself but a protein which is 2 or 3 steps away from it in the Protein-Protein Interaction (PPI) network. We identified the already known drug targets in the disease gene's PPI subnetwork (up to the 3rd neighborhood); among them, those in the same subcellular compartment and with a higher coexpression coefficient with the disease gene are expected to be stronger candidates. Out of 2177 rare diseases, 1092 were found not to have any drug target. Using the above method, we found the strongest candidates for the remaining diseases, for further experimental validation.
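
The "up to the 3rd neighborhood" search described above is a bounded breadth-first traversal of the PPI network, after which known drug targets inside the subnetwork become candidates. A minimal sketch (protein names and the adjacency-list representation are illustrative, not from the project's data):

```python
from collections import deque

def neighborhood(ppi, seed, max_depth=3):
    """Breadth-first search: all proteins within max_depth steps of the
    disease protein in a PPI network given as an adjacency-list dict,
    returned as {protein: distance}."""
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if dist[node] == max_depth:
            continue
        for nb in ppi.get(node, ()):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def candidate_targets(ppi, disease_protein, known_targets, max_depth=3):
    """Known drug targets that fall inside the disease protein's subnetwork."""
    return sorted(set(neighborhood(ppi, disease_protein, max_depth)) & set(known_targets))
```

Ranking the candidates by subcellular compartment and coexpression, as in the abstract, would then be applied on top of this set.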

  9. Model approaches for advancing interprofessional prevention education.

    PubMed

    Evans, Clyde H; Cashman, Suzanne B; Page, Donna A; Garr, David R

    2011-02-01

    Healthy People 2010 included an objective to "increase the proportion of … health professional training schools whose basic curriculum for healthcare providers includes the core competencies in health promotion and disease prevention." Interprofessional prevention education has been seen by the Healthy People Curriculum Task Force as a key strategy for achieving this objective and strengthening prevention content in health professions education programs. To fulfill these aims, the Association for Prevention Teaching and Research sponsored the Institute for Interprofessional Prevention Education in 2007 and in 2008. The institutes were based on the premise that if clinicians from different professions are to function effectively in teams, health professions students need to learn with, from, and about students from other professions. The institutes assembled interprofessional teams of educators from academic health centers across the country and provided instruction in approaches for improving interprofessional prevention education. Interprofessional education also plays a key role in the implementation of the Healthy People 2020 Education for Health framework. The delivery of preventive services provides a nearly level playing field in which multiple professions each make important contributions. Prevention education should take place during that phase of the educational continuum in which the attitudes, skills, and knowledge necessary for both effective teamwork and prevention are incorporated into the "DNA" of future health professionals. Evaluation of the teams' educational initiatives holds important lessons. These include allowing ample time for planning, obtaining student input during planning, paying explicit attention to teamwork, and taking account of cultural differences across professions.

  10. Rational transfer function models for biofilm reactors

    SciTech Connect

    Wik, T.; Breitholtz, C.

    1998-12-01

    Design of controllers and optimization of plants using biofilm reactors often require dynamic models and efficient simulation methods. Standard model assumptions were used to derive nonrational transfer functions describing the fast dynamics of stirred-tank reactors with zero- or first-order reactions inside the biofilm. A method based on the location of singularities was used to derive rational transfer functions that approximate nonrational ones. These transfer functions can be used in efficient simulation routines and in standard methods of controller design. The order of the transfer functions can be chosen in a natural way, and changes in physical parameters may directly be related to changes in the transfer functions. Further, the mass balances used and, hence, the transfer functions, are applicable to catalytic reactors with porous catalysts as well. By applying the methods to a nitrifying trickling filter, reactor parameters are estimated from residence-time distributions and low-order rational transfer functions are achieved. Simulated effluent dynamics, using these transfer functions, agree closely with measurements.
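
One classical route from a nonrational diffusion-type transfer function to a rational approximation, in the spirit of the singularity-based method above, is to truncate the function's pole (Mittag-Leffler) expansion. The example below uses G(s) = 1/cosh(sqrt(s)), a standard diffusion transfer function, as a stand-in; it is a generic illustration, not the authors' exact derivation.

```python
import math

def sech_sqrt_rational(s, n_poles=2000):
    """Rational approximation of the nonrational transfer function
    G(s) = 1/cosh(sqrt(s)), built by truncating its pole expansion:
    sum over n of (-1)^(n-1) * (2n-1)*pi / (s + ((2n-1)*pi/2)^2)."""
    total = 0.0
    for n in range(1, n_poles + 1):
        pole = ((2 * n - 1) * math.pi / 2.0) ** 2
        total += (-1) ** (n - 1) * (2 * n - 1) * math.pi / (s + pole)
    return total
```

Keeping only the first few poles gives the kind of low-order rational transfer function that standard controller-design and simulation tools can consume; the truncation order trades accuracy against model order.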

  11. FINDSITE: a combined evolution/structure-based approach to protein function prediction

    PubMed Central

    Brylinski, Michal

    2009-01-01

    A key challenge of the post-genomic era is the identification of the function(s) of all the molecules in a given organism. Here, we review the status of sequence and structure-based approaches to protein function inference and ligand screening that can provide functional insights for a significant fraction of the ∼50% of ORFs of unassigned function in an average proteome. We then describe FINDSITE, a recently developed algorithm for ligand binding site prediction, ligand screening and molecular function prediction, which is based on binding site conservation across evolutionary distant proteins identified by threading. Importantly, FINDSITE gives comparable results when high-resolution experimental structures as well as predicted protein models are used. PMID:19324930

  12. Spatiotemporal Infectious Disease Modeling: A BME-SIR Approach

    PubMed Central

    Angulo, Jose; Yu, Hwa-Lung; Langousis, Andrea; Kolovos, Alexander; Wang, Jinfeng; Madrid, Ana Esther; Christakos, George

    2013-01-01

    This paper is concerned with the modeling of infectious disease spread in a composite space-time domain under conditions of uncertainty. We focus on stochastic modeling that accounts for basic mechanisms of disease distribution and multi-sourced in situ uncertainties. Starting from the general formulation of population migration dynamics and the specification of transmission and recovery rates, the model studies the functional formulation of the evolution of the fractions of susceptible-infected-recovered individuals. The suggested approach is capable of: a) modeling population dynamics within and across localities, b) integrating the disease representation (i.e. susceptible-infected-recovered individuals) with observation time series at different geographical locations and other sources of information (e.g. hard and soft data, empirical relationships, secondary information), and c) generating predictions of disease spread and associated parameters in real time, while considering model and observation uncertainties. Key aspects of the proposed approach are illustrated by means of simulations (i.e. synthetic studies), and a real-world application using hand-foot-mouth disease (HFMD) data from China. PMID:24086257
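
The susceptible-infected-recovered core of the model can be sketched with a plain deterministic Euler discretization; this omits the BME machinery, uncertainty sources, and spatial coupling that are the paper's actual contribution, and the rates below are hypothetical.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One explicit-Euler step of the susceptible-infected-recovered
    fractions, with transmission rate beta and recovery rate gamma."""
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(s0=0.99, i0=0.01, beta=0.5, gamma=0.1, steps=300):
    """Run the SIR fractions forward and record the trajectory."""
    s, i, r = s0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        history.append((s, i, r))
    return history
```

By construction each step conserves s + i + r, and an outbreak grows whenever beta * s exceeds gamma.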

  13. A probabilistic approach to modeling and controlling fluid flows

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Noack, Bernd R.; Spohn, Andreas; Cattafesta, Louis N.; Morzynski, Marek; Daviller, Guillaume; Brunton, Bingni W.; Brunton, Steven L.

    2016-11-01

    We extend cluster-based reduced-order modeling (CROM) (Kaiser et al., 2014) to include control inputs in order to determine optimal control laws with respect to a cost function for unsteady flows. The proposed methodology frames high-dimensional, nonlinear dynamics into low-dimensional, probabilistic, linear dynamics which considerably simplifies the optimal control problem while preserving nonlinear actuation mechanisms. The data-driven approach builds upon the unsupervised partitioning of the data into few kinematically similar flow states using a clustering algorithm. The coarse-grained dynamics are then described by a Markov model which is closely related to the approximation of Perron-Frobenius operators. The Markov model can be used as predictor for the ergodic probability distribution for a particular control law approximating the long-term behavior of the system on which basis the optimal control law is determined. Moreover, we combine CROM with a recently developed approach for optimal sparse sensor placement for classification (Brunton et al., 2013) as a critical enabler for in-time control and for the systematic identification of dynamical regimes from few measurements. The approach is applied to a separating flow and a mixing layer exhibiting vortex pairing.
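
The cluster-to-Markov step can be sketched directly: given a time series of cluster labels (the output of the clustering stage), count transitions to estimate a row-stochastic transition matrix, then power-iterate to approximate the ergodic distribution. A minimal sketch of this one step, not of CROM as a whole:

```python
import numpy as np

def cluster_markov_model(labels, n_clusters, n_iter=500):
    """Estimate the cluster transition matrix P from consecutive label
    pairs, then approximate its ergodic (stationary) distribution by
    repeatedly applying P to a uniform initial distribution."""
    P = np.zeros((n_clusters, n_clusters))
    for a, b in zip(labels[:-1], labels[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)
    pi = np.full(n_clusters, 1.0 / n_clusters)
    for _ in range(n_iter):
        pi = pi @ P
    return P, pi
```

In the paper's setting, comparing such stationary distributions across control laws is what drives the selection of the optimal law.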

  14. Incorporating covariates in skewed functional data models.

    PubMed

    Li, Meng; Staicu, Ana-Maria; Bondell, Howard D

    2015-07-01

    We introduce a class of covariate-adjusted skewed functional models (cSFM) designed for functional data exhibiting location-dependent marginal distributions. We propose a semi-parametric copula model for the pointwise marginal distributions, which are allowed to depend on covariates, and the functional dependence, which is assumed covariate invariant. The proposed cSFM framework provides a unifying platform for pointwise quantile estimation and trajectory prediction. We consider a computationally feasible procedure that handles densely as well as sparsely observed functional data. The methods are examined numerically using simulations and applied to a new tractography study of multiple sclerosis. Furthermore, the methodology is implemented in the R package cSFM, which is publicly available on CRAN.

  15. Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach

    PubMed Central

    Li, Jun; Zhao, Patrick X.

    2016-01-01

    Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/. PMID:27446133
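
The multiplex coupling idea can be sketched with ordinary personalized PageRank: rank nodes in one layer, then use those scores to bias the teleport vector of the PageRank run on the second layer. This is only a simplified two-layer coupling, not the mPageRank algorithm itself, and the toy adjacency matrices are hypothetical.

```python
import numpy as np

def pagerank(A, damping=0.85, personalize=None, n_iter=200):
    """Power-iteration PageRank on adjacency matrix A. `personalize`
    biases the teleport step (here: scores from the other network layer);
    rows with no out-links teleport uniformly."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    M = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)
    v = personalize / personalize.sum() if personalize is not None else np.full(n, 1.0 / n)
    x = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        x = damping * (x @ M) + (1 - damping) * v
    return x
```

Usage: `pr1 = pagerank(A_coexpression)` followed by `pr2 = pagerank(A_ppi, personalize=pr1)` couples the two layers, so proteins central in both networks rise to the top.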

  16. Generalized exponential function and discrete growth models

    NASA Astrophysics Data System (ADS)

    Souto Martinez, Alexandre; Silva González, Rodrigo; Lauri Espíndola, Aquino

    2009-07-01

    Here we show that a particular one-parameter generalization of the exponential function is suitable to unify most of the popular one-species discrete population dynamic models into a simple formula. A physical interpretation is given to this newly introduced parameter in the context of the continuous Richards model. From the discretization of the continuous Richards model (a generalization of the Gompertz and Verhulst models), one obtains a generalized logistic map, and we briefly study its properties. Note, however, that the physical interpretation of the introduced parameter remains valid in the discrete case. Next, we generalize the (scramble competition) θ-Ricker discrete model and analytically calculate the fixed points as well as their stabilities. In contrast to previous generalizations, from the generalized θ-Ricker model one is able to retrieve either scramble or contest models.
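
The one-parameter generalized exponential commonly used in this line of work is the q-deformed form (1 + q*x)^(1/q), which reduces to the ordinary exponential as q tends to 0; the paper's exact parameterization may differ slightly, so treat this as an illustrative sketch.

```python
import math

def gen_exp(x, q):
    """One-parameter generalized exponential (1 + q*x)**(1/q);
    the ordinary exponential exp(x) is recovered in the limit q -> 0."""
    if q == 0:
        return math.exp(x)
    return (1.0 + q * x) ** (1.0 / q)
```

For q = 1 this gives the linear map 1 + x, while for small q it approaches exp(x), which is how a single formula can interpolate between different discrete growth laws.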

  17. A Model Transformation Approach to Derive Architectural Models from Goal-Oriented Requirements Models

    NASA Astrophysics Data System (ADS)

    Lucena, Marcia; Castro, Jaelson; Silva, Carla; Alencar, Fernanda; Santos, Emanuel; Pimentel, João

    Requirements engineering and architectural design are key activities for successful development of software systems. Both activities are strongly intertwined and interrelated, but many steps toward generating architecture models from requirements models are driven by intuition and architectural knowledge. Thus, systematic approaches that integrate requirements engineering and architectural design activities are needed. This paper presents an approach based on model transformations to generate architectural models from requirements models. The source and target languages are respectively the i* modeling language and Acme architectural description language (ADL). A real web-based recommendation system is used as case study to illustrate our approach.

  18. A new approach to modeling aviation accidents

    NASA Astrophysics Data System (ADS)

    Rao, Arjun Harsha

    This research views aviation accidents as a set of hazardous states of a system (pilot and aircraft), and triggers that cause the system to move between hazardous states. I used the NTSB's accident coding manual (which contains nearly 4000 different codes) to develop a "dictionary" of hazardous states, triggers, and information codes. Then, I created the "grammar", or a set of rules, that: (1) orders the hazardous states in each accident; and, (2) links the hazardous states using the appropriate triggers. This approach: (1) provides a more correct count of the causes for accidents in the NTSB database; and, (2) checks for gaps or omissions in NTSB accident data, and fills in some of these gaps using logic-based rules. These rules also help identify and count causes for accidents that were not discernible from previous analyses of historical accident data. I apply the model to 6200 helicopter accidents that occurred in the US between 1982 and 2015. First, I identify the states and triggers that are most likely to be associated with fatal and non-fatal accidents. The results suggest that non-fatal accidents, which account for approximately 84% of the accidents, provide valuable opportunities to learn about the causes for accidents. Next, I investigate the causes of inflight loss of control (LOC) using both a conventional approach and the state-based approach. The conventional analysis provides little insight into the causal mechanism for LOC. For instance, the top cause of LOC is "aircraft control/directional control not maintained", which does not provide any insight. In contrast, the state-based analysis showed that pilots' tendency to clip objects frequently triggered LOC (16.7% of LOC accidents); this finding was not directly discernible from conventional analyses. Finally, I investigate the causes for improper autorotations using both a conventional approach and the state-based approach. 
The conventional approach uses modifiers (e.g., "improper", "misjudged") associated with "24520
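
The "grammar" pass described above, (1) order the hazardous states found in a record, (2) link consecutive states with triggers, can be sketched as a small lookup-driven function. All state and trigger names below are hypothetical stand-ins, not actual NTSB codes.

```python
def accident_sequence(codes, state_rank, trigger_for):
    """Toy 'grammar' pass over one accident record: keep the codes that are
    hazardous states, order them by a predefined rank, and attach a trigger
    (if the dictionary has one) to each consecutive state pair."""
    states = sorted((c for c in codes if c in state_rank), key=state_rank.get)
    return [(a, trigger_for.get((a, b)), b) for a, b in zip(states, states[1:])]
```

A real implementation would derive `state_rank` and `trigger_for` from the dictionary built out of the NTSB coding manual.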

  19. Model dielectric function for 2D semiconductors including substrate screening

    PubMed Central

    Trolle, Mads L.; Pedersen, Thomas G.; Véniard, Valerie

    2017-01-01

    Dielectric screening of excitons in 2D semiconductors is known to be a highly non-local effect, which in reciprocal space translates to a strong dependence on momentum transfer q. We present an analytical model dielectric function, including the full non-linear q-dependence, which may be used as an alternative to more numerically taxing ab initio screening functions. By verifying the good agreement between excitonic optical properties calculated using our model dielectric function and those derived from ab initio methods, we demonstrate the versatility of this approach. Our test systems include: monolayer hBN, monolayer MoS2, and the surface exciton of a 2 × 1 reconstructed Si(111) surface. Additionally, using our model, we easily take substrate screening effects into account. Hence, we also include a systematic study of the effects of substrate media on the excitonic optical properties of MoS2 and hBN. PMID:28117326
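
The simplest q-dependent 2D screening model, which the abstract's non-linear model generalizes, is the linear-in-q Keldysh form, where the substrate enters through the mean permittivity of the surrounding media. A sketch of that classical baseline (not the authors' full model):

```python
def eps_2d(q, r0, eps_top=1.0, eps_bottom=1.0):
    """Linear-in-q Keldysh-type screening for a 2D layer:
    eps(q) = kappa + r0*q, with kappa the average permittivity of the
    media above and below the layer and r0 the 2D screening length."""
    return 0.5 * (eps_top + eps_bottom) + r0 * q
```

Placing the layer on a substrate (eps_bottom > 1) raises kappa and hence the screening at every q, which is the effect studied systematically in the paper.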

  20. Modeling the evolution of the cerebellum: from macroevolution to function.

    PubMed

    Smaers, Jeroen B

    2014-01-01

    The purpose of this contribution is to explore how macroevolutionary studies of the cerebellum can contribute to theories on cerebellar function and connectivity. New approaches in modeling the evolution of biological traits have provided new insights in the evolutionary pathways that underlie cerebellar evolution. These approaches reveal patterns of coordinated size changes among brain structures across evolutionary time, demonstrate how particular lineages/species stand out, and what the rate and timing of neuroanatomical changes were in evolutionary history. Using these approaches, recent studies demonstrated that changes in the relative size of the posterior cerebellar cortex and associated cortical areas indicate taxonomic differences in great apes and humans. Considering comparative differences in behavioral capacity, macroevolutionary results are discussed in the context of theories on cerebellar function and learning. © 2014 Elsevier B.V. All rights reserved.

  1. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    NASA Astrophysics Data System (ADS)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical component seismograms from radial component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface or discontinuity that separates the crust and the mantle. In this research, automatic techniques to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs) were developed. In this dissertation, an optimization problem of inverting receiver functions has been developed to determine crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithms (GA) optimization technique. The second technique developed combines the direct Generalized Pattern Search (GPS) and evolutionary Fitness Proportionate Niching (FPN) techniques by employing their strengths. In a previous study, a Monte Carlo technique was utilized for determining variable weights in the H-kappa stacking of receiver functions. Compared to that variable-weights approach, the GA and GPS-FPN techniques have the advantage of saving considerable time, and they are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near optimal weights necessary in stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
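
The H-kappa stacking objective that the GA and GPS-FPN techniques optimize can be sketched as a plain grid search in the style of Zhu and Kanamori: sum weighted receiver-function amplitudes at the moveout-predicted Ps, PpPs, and PpSs+PsPs times for each (H, kappa) pair. The weights and grids below are hypothetical, and the dissertation optimizes the weights rather than fixing them.

```python
import math

def hk_stack(receiver_fn, dt, vp, p, h_grid, k_grid, w=(0.6, 0.3, 0.1)):
    """Grid search over crustal thickness H and kappa = Vp/Vs, stacking
    weighted receiver-function amplitudes at the predicted arrival times
    of the Ps, PpPs, and PpSs+PsPs phases (ray parameter p, in s/km)."""
    def amp(t):
        i = int(round(t / dt))
        return receiver_fn[i] if 0 <= i < len(receiver_fn) else 0.0
    best = (-math.inf, None, None)
    for H in h_grid:
        for k in k_grid:
            qs = math.sqrt((k / vp) ** 2 - p ** 2)   # S vertical slowness
            qp = math.sqrt((1 / vp) ** 2 - p ** 2)   # P vertical slowness
            s = (w[0] * amp(H * (qs - qp))           # Ps
                 + w[1] * amp(H * (qs + qp))         # PpPs
                 - w[2] * amp(2 * H * qs))           # PpSs + PsPs (negative)
            if s > best[0]:
                best = (s, H, k)
    return best
```

Replacing this exhaustive grid search with GA or GPS-FPN, and letting the weights vary, is the optimization problem the dissertation addresses.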

  2. A Wigner Monte Carlo approach to density functional theory

    SciTech Connect

    Sellier, J.M. Dimov, I.

    2014-08-01

    In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems, a lithium atom, a boron atom, and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (although no restriction is imposed in the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.

  3. Finite frequency Seebeck coefficient of metals: A memory function approach

    NASA Astrophysics Data System (ADS)

    Bhalla, Pankaj; Kumar, Pradeep; Das, Nabyendu; Singh, Navinder

    2017-10-01

    We study the dynamical thermoelectric transport in metals subjected to the electron-impurity and the electron-phonon interactions using the memory function formalism. We introduce a generalized Drude form for the Seebeck coefficient in terms of the thermoelectric memory function and calculate the latter in various temperature and frequency limits. In the zero frequency and high temperature limit, we find that our results are consistent with the experimental findings and with the traditional Boltzmann equation approach. In the low temperature limit, we find that the Seebeck coefficient is quadratic in temperature. In the finite frequency regime, we report new results: in the electron-phonon interaction case, we find that the Seebeck coefficient shows frequency-independent behavior both in the high frequency regime (ω ≫ωD , where ωD is the Debye frequency) and in the low frequency regime (ω ≪ωD), whereas at intermediate frequencies, it is a monotonically increasing function of frequency. In the case of the electron-impurity interaction, the Seebeck coefficient first decays and then, after passing through a minimum, increases with frequency and saturates at high frequencies.

  4. Pediatrician's knowledge on the approach of functional constipation

    PubMed Central

    Vieira, Mario C.; Negrelle, Isadora Carolina Krueger; Webber, Karla Ulaf; Gosdal, Marjorie; Truppel, Sabine Krüger; Kusma, Solena Ziemer

    2016-01-01

    Abstract Objective: To evaluate pediatricians' knowledge regarding the diagnostic and therapeutic approach to childhood functional constipation. Methods: A descriptive cross-sectional study was performed with the application of a self-administered questionnaire concerning a hypothetical clinical case of childhood functional constipation with fecal incontinence to physicians (n=297) randomly interviewed at the 36th Brazilian Congress of Pediatrics in 2013. Results: The majority of the participants were females, the mean age was 44.1 years, and the mean time of professional practice was 18.8 years; 56.9% were Board Certified by the Brazilian Society of Pediatrics. Additional tests were ordered by 40.4%, including abdominal radiography (19.5%), barium enema (10.4%), laboratory tests (9.8%), abdominal ultrasound (6.7%), colonoscopy (2.4%), and manometry and rectal biopsy (both 1.7%). The most common interventions included lactulose (26.6%), mineral oil (17.5%), polyethylene glycol (14.5%), fiber supplement (9.1%) and milk of magnesia (5.4%). Nutritional guidance (84.8%), fecal disimpaction (17.2%) and toilet training (19.5%) were also indicated. Conclusions: Our results show that pediatricians do not adhere to current recommendations for the management of childhood functional constipation, as unnecessary tests were ordered and the first-line treatment was not prescribed. PMID:27449075

  5. Functional Analysis of Jasmonates in Rice through Mutant Approaches

    PubMed Central

    Dhakarey, Rohit; Kodackattumannil Peethambaran, Preshobha; Riemann, Michael

    2016-01-01

    Jasmonic acid, one of the major plant hormones, is, unlike other hormones, a lipid-derived compound that is synthesized from the fatty acid linolenic acid. It has been studied intensively in many plant species including Arabidopsis thaliana, in which most of the enzymes participating in its biosynthesis were characterized. In the past 15 years, mutants and transgenic plants affected in the jasmonate pathway have become available in rice, facilitating studies on the functions of this hormone in an important crop. Those functions are partially conserved compared to other plant species, and include roles in fertility, response to mechanical wounding and defense against herbivores. However, new and surprising functions have also been uncovered by mutant approaches, such as a close link between light perception and the jasmonate pathway. This was not only useful to show a phenomenon that is unique to rice but also helped to establish this role in plant species where such links are less obvious. This review aims to provide an overview of currently available rice mutants and transgenic plants in the jasmonate pathway and highlights some selected roles of jasmonate in this species, such as photomorphogenesis, and abiotic and biotic stress. PMID:27135235

  6. Measuring functional connectivity in stroke: Approaches and considerations.

    PubMed

    Siegel, Joshua S; Shulman, Gordon L; Corbetta, Maurizio

    2017-08-01

    Recent research has demonstrated the importance of global changes to the functional organization of brain networks following stroke. Resting functional magnetic resonance imaging (R-fMRI) is a non-invasive tool that enables the measurement of functional connectivity (FC) across the entire brain while placing minimal demands on the subject. For these reasons, it is a uniquely appealing tool for studying the distant effects of stroke. However, R-fMRI studies rely on a number of premises that cannot be assumed without careful validation in the context of stroke. Here, we describe strategies to identify and mitigate confounds specific to R-fMRI research in cerebrovascular disease. Five main topics are discussed: (a) achieving adequate co-registration of lesioned brains, (b) identifying and removing hemodynamic lags in resting BOLD, (c) identifying other vascular disruptions that affect the resting BOLD signal, (d) selecting an appropriate control cohort, and (e) acquiring sufficient fMRI data to reliably identify FC changes. For each topic, we provide guidelines for steps to improve the interpretability and reproducibility of FC-stroke research. We include a table of confounds and approaches to identify and mitigate each. Our recommendations extend to any research using R-fMRI to study diseases that might alter cerebrovascular flow and dynamics or brain anatomy.

  7. A multidisciplinary approach to study the functional properties of neuron-like cell models constituting a living bio-hybrid system: SH-SY5Y cells adhering to PANI substrate

    NASA Astrophysics Data System (ADS)

    Caponi, S.; Mattana, S.; Ricci, M.; Sagini, K.; Juarez-Hernandez, L. J.; Jimenez-Garduño, A. M.; Cornella, N.; Pasquardini, L.; Urbanelli, L.; Sassi, P.; Morresi, A.; Emiliani, C.; Fioretto, D.; Dalla Serra, M.; Pederzolli, C.; Iannotta, S.; Macchi, P.; Musio, C.

    2016-11-01

    A living bio-hybrid system has been successfully implemented. It is constituted by neuroblastic cells, the SH-SY5Y human neuroblastoma cells, adhering to polyaniline (PANI), a semiconducting polymer with memristive properties. By a multidisciplinary approach, the biocompatibility of the substrate has been analyzed and the functionality of the adhering cells has been investigated. We found that the PANI films can support cell adhesion. Moreover, the SH-SY5Y cells were successfully differentiated into neuron-like cells for in vitro applications, demonstrating that PANI can also promote cell differentiation. In order to characterize in depth the modifications of the bio-functionality induced by the cell-substrate interaction, the functional properties of the cells have been characterized by electrophysiology and Raman spectroscopy. Our results confirm that the PANI films do not strongly affect the general properties of the cells, ensuring their viability without toxic effects on their physiology. Ascribed to the adhesion process, however, a slight increase in the markers of cell stress has been evidenced by Raman spectroscopy and, accordingly, electrophysiology shows a reduction in cell excitability under positive stimulation.

  8. Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach.

    PubMed

    Selig, James P; Preacher, Kristopher J; Little, Todd D

    2012-01-01

    We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by describing how the association changes with time. We introduce a number of different functional forms for describing these lag-moderated associations, each with a different substantive meaning. Finally, we use empirical data to demonstrate methods for exploring functional forms and model fitting based on this approach.
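    The lag-as-moderator idea described in this abstract can be sketched in a few lines: the association between a predictor and an outcome is allowed to vary with the measurement lag by including a lag × predictor interaction term in a regression model. The following is a minimal, hypothetical illustration (the data, coefficient values, and the linear functional form are all assumptions for demonstration; the paper discusses several alternative functional forms):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    lag = rng.uniform(1, 12, n)      # hypothetical lag between measurements (months)
    x = rng.normal(size=n)           # predictor at time t

    # Simulate an association that weakens linearly with lag:
    # the effect of x on y is (0.8 - 0.05 * lag).
    y = (0.8 - 0.05 * lag) * x + rng.normal(scale=0.5, size=n)

    # Fit the interaction model  y = b0 + b1*x + b2*lag + b3*(x*lag)
    # by ordinary least squares; b3 captures how the x-y association
    # changes per unit of lag (the "lag as moderator" effect).
    X = np.column_stack([np.ones(n), x, lag, x * lag])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2, b3 = beta
    print(f"effect of x at lag 0: {b1:.2f}, change per month of lag: {b3:.3f}")
    ```

    A negative interaction coefficient b3 indicates that the association decays as the lag grows; other functional forms (e.g., exponential decay in lag) can be fit analogously by transforming the lag variable.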

  9. SMJ's analysis of Ising model correlation functions

    NASA Astrophysics Data System (ADS)

    Kadanoff, Leo P.; Kohmoto, Mahito

    1980-05-01

    In a series of recent publications Sato, Miwa, and Jimbo (SMJ) have shown how to derive multispin correlation functions of the two-dimensional Ising model in the continuum, or scaling, limit by analyzing the behavior of the solutions to the two-dimensional version of the Dirac equation. The major purpose of the present work is to describe SMJ's analysis more discursively and in terms closer to that used in previous studies of the Ising model. In addition, new and more compact expressions for their basic equations are derived. A single new answer is obtained: the form of the three-spin correlation function at criticality.

  10. Quantum criticality of a spin-1 XY model with easy-plane single-ion anisotropy via a two-time Green function approach avoiding the Anderson-Callen decoupling

    NASA Astrophysics Data System (ADS)

    Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.

    2016-04-01

    In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in the absence of an external longitudinal magnetic field. We employ the two-time Green function method while avoiding the Anderson-Callen decoupling of spin operators at the same sites, which is of doubtful accuracy. Following the original Devlin procedure, we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2) -vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter and the related numerical data for different lattices are found to be in re