Functional state modelling approach validation for yeast and bacteria cultivations
Roeva, Olympia; Pencheva, Tania
2014-01-01
In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
Renton, Michael; Hanan, Jim; Burrage, Kevin
2005-06-01
Functional-structural plant models that include detailed mechanistic representation of underlying physiological processes can be expensive to construct and the resulting models can also be extremely complicated. On the other hand, purely empirical models are not able to simulate plant adaptability and response to different conditions. In this paper, we present an intermediate approach to modelling plant function that can simulate plant response without requiring detailed knowledge of underlying physiology. Plant function is modelled using a 'canonical' modelling approach, which uses compartment models with flux functions of a standard mathematical form, while plant structure is modelled using L-systems. Two modelling examples are used to demonstrate that canonical modelling can be used in conjunction with L-systems to create functional-structural plant models where function is represented either in an accurate and descriptive way, or in a more mechanistic and explanatory way. We conclude that canonical modelling provides a useful, flexible and relatively simple approach to modelling plant function at an intermediate level of abstraction. PMID:15869646
The Thirring-Wess model revisited: a functional integral approach
Belvedere, L.V. . E-mail: armflavio@if.uff.br
2005-06-01
We consider the Wess-Zumino-Witten theory to obtain the functional integral bosonization of the Thirring-Wess model with an arbitrary regularization parameter. Proceeding a systematic of decomposing the Bose field algebra into gauge-invariant- and gauge-non-invariant field subalgebras, we obtain the local decoupled quantum action. The generalized operator solutions for the equations of motion are reconstructed from the functional integral formalism. The isomorphism between the QED {sub 2} (QCD {sub 2}) with broken gauge symmetry by a regularization prescription and the Abelian (non-Abelian) Thirring-Wess model with a fixed bare mass for the meson field is established.
A Model-Based Approach to Constructing Music Similarity Functions
NASA Astrophysics Data System (ADS)
West, Kris; Lamere, Paul
2006-12-01
Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
Efremov, A. V.; Teryaev, O. V.; Schweitzer, P.; Zavada, P.
2009-07-01
Transverse parton momentum dependent distribution functions (TMDs) of the nucleon are studied in a covariant model, which describes the intrinsic motion of partons in terms of a covariant momentum distribution. The consistency of the approach is demonstrated, and model relations among TMDs are studied. As a by-product it is shown how the approach allows to formulate the nonrelativistic limit.
Approaches to Modelling the Dynamical Activity of Brain Function Based on the Electroencephalogram
NASA Astrophysics Data System (ADS)
Liley, David T. J.; Frascoli, Federico
The brain is arguably the quintessential complex system as indicated by the patterns of behaviour it produces. Despite many decades of concentrated research efforts, we remain largely ignorant regarding the essential processes that regulate and define its function. While advances in functional neuroimaging have provided welcome windows into the coarse organisation of the neuronal networks that underlie a range of cognitive functions, they have largely ignored the fact that behaviour, and by inference brain function, unfolds dynamically. Modelling the brain's dynamics is therefore a critical step towards understanding the underlying mechanisms of its functioning. To date, models have concentrated on describing the sequential organisation of either abstract mental states (functionalism, hard AI) or the objectively measurable manifestations of the brain's ongoing activity (rCBF, EEG, MEG). While the former types of modelling approach may seem to better characterise brain function, they do so at the expense of not making a definite connection with the actual physical brain. Of the latter, only models of the EEG (or MEG) offer a temporal resolution well matched to the anticipated temporal scales of brain (mental processes) function. This chapter will outline the most pertinent of these modelling approaches, and illustrate, using the electrocortical model of Liley et al, how the detailed application of the methods of nonlinear dynamics and bifurcation theory is central to exploring and characterising their various dynamical features. The rich repertoire of dynamics revealed by such dynamical systems approaches arguably represents a critical step towards an understanding of the complexity of brain function.
Berhane, Kiros; Molitor, Nuoo-Ting
2008-01-01
Flexible multilevel models are proposed to allow for cluster-specific smooth estimation of growth curves in a mixed-effects modeling format that includes subject-specific random effects on the growth parameters. Attention is then focused on models that examine between-cluster comparisons of the effects of an ecologic covariate of interest (e.g. air pollution) on nonlinear functionals of growth curves (e.g. maximum rate of growth). A Gibbs sampling approach is used to get posterior mean estimates of nonlinear functionals along with their uncertainty estimates. A second-stage ecologic random-effects model is used to examine the association between a covariate of interest (e.g. air pollution) and the nonlinear functionals. A unified estimation procedure is presented along with its computational and theoretical details. The models are motivated by, and illustrated with, lung function and air pollution data from the Southern California Children's Health Study. PMID:18349036
Quasiclassical approach to partition functions of ions in a chemical plasma model
Shpatakovskaya, G. V.
2008-03-15
The partition functions of ions that are used in a chemical plasma model are estimated by the Thomas-Fermi free ion model without reference to empirical data. Different form factors limiting the number of the excitation levels taken into account are considered, namely, those corresponding to the average atomic radius criterion, the temperature criterion, and the Planck-Brillouin-Larkin approximation. Expressions are presented for the average excitation energy and for the temperature and volume derivatives of the partition function. A comparison with the results of the empirical approach is made for the aluminum and iron plasmas.
Morris, Jeffrey S.
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies, and summarizes contributions I along with numerous collaborators have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis. While the specific methods
Morris, Jeffrey S
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies, and summarizes contributions I along with numerous collaborators have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis. While the specific methods
NASA Astrophysics Data System (ADS)
Reich, P. B.; Butler, E. E.
2015-12-01
This project will advance global land models by shifting from the current plant functional type approach to one that better utilizes what is known about the importance and variability of plant traits, within a framework of simultaneously improving fundamental physiological relations that are at the core of model carbon cycling algorithms. Existing models represent the global distribution of vegetation types using the Plant Functional Typeconcept. Plant Functional Types are classes of plant species with similar evolutionary and life history withpresumably similar responses to environmental conditions like CO2, water and nutrient availability. Fixedproperties for each Plant Functional Type are specified through a collection of physiological parameters, or traits.These traits, mostly physiological in nature (e.g., leaf nitrogen and longevity) are used in model algorithms to estimate ecosystem properties and/or drive calculated process rates. In most models, 5 to 15 functional types represent terrestrial vegetation; in essence, they assume there are a total of only 5 to 15 different kinds of plants on the entire globe. This assumption of constant plant traits captured within the functional type concept has serious limitations, as a single set of traits does not reflect trait variation observed within and between species and communities. While this simplification was necessary decades past, substantial improvement is now possible. Rather than assigning a small number of constant parameter values to all grid cells in a model, procedures will be developed that predict a frequency distribution of values for any given grid cell. Thus, the mean and variance, and how these change with time, will inform and improve model performance. The trait-based approach will improve land modeling by (1) incorporating patterns and heterogeneity of traits into model parameterization, thus evolving away from a framework that considers large areas of vegetation to have near identical trait
Modeling and Simulation Approaches for Cardiovascular Function and Their Role in Safety Assessment
Collins, TA; Bergenholm, L; Abdulla, T; Yates, JWT; Evans, N; Chappell, MJ; Mettetal, JT
2015-01-01
Systems pharmacology modeling and pharmacokinetic-pharmacodynamic (PK/PD) analysis of drug-induced effects on cardiovascular (CV) function plays a crucial role in understanding the safety risk of new drugs. The aim of this review is to outline the current modeling and simulation (M&S) approaches to describe and translate drug-induced CV effects, with an emphasis on how this impacts drug safety assessment. Current limitations are highlighted and recommendations are made for future effort in this vital area of drug research. PMID:26225237
A new approach to wall modeling in LES of incompressible flow via function enrichment
NASA Astrophysics Data System (ADS)
Krank, Benjamin; Wall, Wolfgang A.
2016-07-01
A novel approach to wall modeling for the incompressible Navier-Stokes equations including flows of moderate and large Reynolds numbers is presented. The basic idea is that a problem-tailored function space allows prediction of turbulent boundary layer gradients with very coarse meshes. The proposed function space consists of a standard polynomial function space plus an enrichment, which is constructed using Spalding's law-of-the-wall. The enrichment function is not enforced but "allowed" in a consistent way and the overall methodology is much more general and also enables other enrichment functions. The proposed method is closely related to detached-eddy simulation as near-wall turbulence is modeled statistically and large eddies are resolved in the bulk flow. Interpreted in terms of a three-scale separation within the variational multiscale method, the standard scale resolves large eddies and the enrichment scale represents boundary layer turbulence in an averaged sense. The potential of the scheme is shown applying it to turbulent channel flow of friction Reynolds numbers from Reτ = 590 and up to 5,000, flow over periodic constrictions at the Reynolds numbers ReH = 10 , 595 and 19,000 as well as backward-facing step flow at Reh = 5 , 000, all with extremely coarse meshes. Excellent agreement with experimental and DNS data is observed with the first grid point located at up to y1+ = 500 and especially under adverse pressure gradients as well as in separated flows.
A signal subspace approach for modeling the hemodynamic response function in fMRI.
Hossein-Zadeh, Gholam-Ali; Ardekani, Babak A; Soltanian-Zadeh, Hamid
2003-10-01
Many fMRI analysis methods use a model for the hemodynamic response function (HRF). Common models of the HRF, such as the Gaussian or Gamma functions, have parameters that are usually selected a priori by the data analyst. A new method is presented that characterizes the HRF over a wide range of parameters via three basis signals derived using principal component analysis (PCA). Covering the HRF variability, these three basis signals together with the stimulation pattern define signal subspaces which are applicable to both linear and nonlinear modeling and identification of the HRF and for various activation detection strategies. Analysis of simulated fMRI data using the proposed signal subspace showed increased detection sensitivity compared to the case of using a previously proposed trigonometric subspace. The methodology was also applied to activation detection in both event-related and block design experimental fMRI data using both linear and nonlinear modeling of the HRF. The activated regions were consistent with previous studies, indicating the ability of the proposed approach in detecting brain activation without a priori assumptions about the shape parameters of the HRF. The utility of the proposed basis functions in identifying the HRF is demonstrated by estimating the HRF in different activated regions. PMID:14599533
A function space approach to state and model error estimation for elliptic systems
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal esimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed. While this paper focuses on theoretical developments, applications arising in the area of large structure static shape determination are contained in a closely related paper (Rodriguez and Scheid, 1982).
A function space approach to state and model error estimation for elliptic systems
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed.
NASA Astrophysics Data System (ADS)
Stradi, Daniele; Martinez, Umberto; Blom, Anders; Brandbyge, Mads; Stokbro, Kurt
2016-04-01
Metal-semiconductor contacts are a pillar of modern semiconductor technology. Historically, their microscopic understanding has been hampered by the inability of traditional analytical and numerical methods to fully capture the complex physics governing their operating principles. Here we introduce an atomistic approach based on density functional theory and nonequilibrium Green's function, which includes all the relevant ingredients required to model realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via I -Vbias curve simulations. We apply this method to characterize an Ag/Si interface relevant for photovoltaic applications and study the rectifying-to-Ohmic transition as a function of the semiconductor doping. We also demonstrate that the standard "activation energy" method for the analysis of I -Vbias data might be inaccurate for nonideal interfaces as it neglects electron tunneling, and that finite-size atomistic models have problems in describing these interfaces in the presence of doping due to a poor representation of space-charge effects. Conversely, the present method deals effectively with both issues, thus representing a valid alternative to conventional procedures for the accurate characterization of metal-semiconductor interfaces.
Optogenetic approaches to evaluate striatal function in animal models of Parkinson disease
Parker, Krystal L.; Kim, Youngcho; Alberico, Stephanie L.; Emmons, Eric B.; Narayanan, Nandakumar S.
2016-01-01
Optogenetics refers to the ability to control cells that have been genetically modified to express light-sensitive ion channels. The introduction of optogenetic approaches has facilitated the dissection of neural circuits. Optogenetics allows for the precise stimulation and inhibition of specific sets of neurons and their projections with fine temporal specificity. These techniques are ideally suited to investigating neural circuitry underlying motor and cognitive dysfunction in animal models of human disease. Here, we focus on how optogenetics has been used over the last decade to probe striatal circuits that are involved in Parkinson disease, a neurodegenerative condition involving motor and cognitive abnormalities resulting from degeneration of midbrain dopaminergic neurons. The precise mechanisms underlying the striatal contribution to both cognitive and motor dysfunction in Parkinson disease are unknown. Although optogenetic approaches are somewhat removed from clinical use, insight from these studies can help identify novel therapeutic targets and may inspire new treatments for Parkinson disease. Elucidating how neuronal and behavioral functions are influenced and potentially rescued by optogenetic manipulation in animal models could prove to be translatable to humans. These insights can be used to guide future brain-stimulation approaches for motor and cognitive abnormalities in Parkinson disease and other neuropsychiatric diseases. PMID:27069384
NASA Astrophysics Data System (ADS)
Zenzerovic, I.; Kropp, W.; Pieringer, A.
2016-08-01
Curve squeal is a strong tonal sound that may arise when a railway vehicle negotiates a tight curve. In contrast to frequency-domain models, time-domain models are able to capture the nonlinear and transient nature of curve squeal. However, these models are computationally expensive due to requirements for fine spatial and time discretization. In this paper, a computationally efficient engineering model for curve squeal in the time-domain is proposed. It is based on a steady-state point-contact model for the tangential wheel/rail contact and a Green's functions approach for wheel and rail dynamics. The squeal model also includes a simple model of sound radiation from the railway wheel from the literature. A validation of the tangential point-contact model against Kalker's transient variational contact model reveals that the point-contact model performs well within the squeal model up to at least 5 kHz. The proposed squeal model is applied to investigate the influence of lateral creepage, friction and wheel/rail contact position on squeal occurrence and amplitude. The study indicates a significant influence of the wheel/rail contact position on squeal frequencies and amplitudes. Friction and lateral creepage show an influence on squeal occurrence and amplitudes, but this is only secondary to the influence of the contact position.
An overview of the recent approaches for terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.
2014-11-01
Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional to site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning at local and regional scales. This review will focus on three main areas of recent terroir research: (1) quantifying the influences of terroir components on plant growth
A conditional Granger causality model approach for group analysis in functional MRI
Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J.; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M.; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun
2011-01-01
Granger causality model (GCM) derived from multivariate vector autoregressive models of data has been employed for identifying effective connectivity in the human brain with functional MR imaging (fMRI) and to reveal complex temporal and spatial dynamics underlying a variety of cognitive processes. In the most recent fMRI effective connectivity measures, pairwise GCM has commonly been applied based on single voxel values or average values from special brain areas at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of an fMRI data with GCM. To compare the effectiveness of our approach with traditional pairwise GCM models, we applied a well-established conditional GCM to pre-selected time series of brain regions resulting from general linear model (GLM) and group spatial kernel independent component analysis (ICA) of an fMRI dataset in the temporal domain. Datasets consisting of one task-related and one resting-state fMRI were used to investigate connections among brain areas with the conditional GCM method. With the GLM detected brain activation regions in the emotion related cortex during the block design paradigm, the conditional GCM method was proposed to study the causality of the habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state dataset, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results have further shown a particular interacting pattern of default mode network (DMN) that can be characterized as both afferent and efferent influences on the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive (MVAR) model can achieve
An overview of the recent approaches to terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.
2015-03-01
Notions of terroir and their conceptualization through agro-environmental sciences have become popular in many parts of world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology and soil are the main environmental factors which make up the terroir effect on different scales. Often considered immutable culturally, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional-to-site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption of these technologies but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning on local and regional scales. This review will focus on two main areas of recent terroir research: (1) using new tools to unravel the biogeochemical cycles of both
An overview of the recent approaches for terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Costantini, Edoardo; Emmanuelle, Vaudour; Jones, Gregory; Mocali, Stefano
2014-05-01
Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of world. Originally developed for wine, terroir is now investigated for fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional to site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and elaboration technologies have made the scale of study more valuable to the individual grower, resulting in greater adoption and application. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modeling approaches for terroir functional modeling, footprinting and zoning at local and regional scales. This review will focus on four main areas of recent terroir research: 1) quantifying the influences of terroir components on plant growth, fruit composition and quality, mostly examining climate-soil-water relationships; 2) the metagenomic approach as new tool to unravel the biogeochemical cycles of both macro- and
Modeling solvation effects in real-space and real-time within density functional approaches
Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea
2015-10-14
The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
NASA Astrophysics Data System (ADS)
Beragoui, Manel; Aguir, Chadlia; Khalfaoui, Mohamed; Enciso, Eduardo; Torralvo, Maria José; Duclaux, Laurent; Reinert, Laurence; Vayer, Marylène; Ben Lamine, Abdelmottaleb
2015-03-01
The present work involves the study of bovine serum albumin adsorption onto five functionalized polystyrene lattices. The adsorption measurements have been carried out using a quartz crystal microbalance. Poly(styrene-co-itaconic acid) was found to be an effective adsorbent for bovine serum albumin molecule adsorption. The experimental isotherm data were analyzed using theoretical models based on a statistical physics approach, namely monolayer, double layer with two successive energy levels, finite multilayer, and modified Brunauer-Emmet-Teller. The equilibrium data were then analyzed using five different non-linear error analysis methods and it was found that the finite multilayer model best describes the protein adsorption data. Surface characteristics, i.e., surface charge density and number density of surface carboxyl groups, were used to investigate their effect on the adsorption capacity. The combination of the results obtained from the number of adsorbed layers, the number of adsorbed molecules per site, and the thickness of the adsorbed bovine serum albumin layer allows us to predict that the adsorption of this protein molecule can also be distinguished by monolayer or multilayer adsorption with end-on, side-on, and overlap conformations. The magnitudes of the calculated adsorption energy indicate that bovine serum albumin molecules are physisorbed onto the adsorbent lattices.
Efremov, A. V.; Teryaev, O. V.; Schweitzer, P.; Zavada, P.
2011-03-01
We derive relations between transverse momentum dependent distribution functions and the usual parton distribution functions in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known parton distribution functions f{sub 1}{sup a}(x) and g{sub 1}{sup a}(x) as input we predict the x- and p{sub T}-dependence of all twist-2 T-even transverse momentum dependent distribution functions.
ERIC Educational Resources Information Center
Herndon, Mary Anne
1978-01-01
In a model of the functioning of short term memory, the encoding of information for subsequent storage in long term memory is simulated. In the encoding process, semantically equivalent paragraphs are detected for recombination into a macro information unit. (HOD)
NASA Astrophysics Data System (ADS)
Naber, R. R.; Bahai, H.; Jones, B. E.
2006-05-01
The ability to model acoustic emission (AE) plays an important role in advancing the reliability of AE source characterisation. In this paper, an efficient numerical approach is proposed for modelling AE waves in isotropic solids. The approach is based on evaluating the reciprocal band-limited Green's functions using the finite element (FE) method. In the first section, known analytical solutions of the Green's function for an elastic isotropic infinite plate subjected to point monopole surface loading are used to validate the approach. Then, a study investigating the effects of the spatial resolution of the FE model on the accuracy of the numerical solutions is presented. Furthermore, comparisons between numerical calculations and experimental measurements are presented for a glass plate subjected to two known AE sources (pencil lead break and ball impact). Finally, the reciprocal relation between the source and the receiver is confirmed using numerical simulations of a plane stress model of an elastic isotropic plate.
Integrative approaches for modeling regulation and function of the respiratory system
Ben-Tal, Alona
2013-01-01
Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system – which comprises the lungs and the neural circuitry that controls their ventilation - have been derived using simplifying assumptions to compartmentalise each component of the system and to define the interactions between components. These full system models often rely – through necessity - on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially-distributed models of ventilation and perfusion, or multi-circuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. PMID:24591490
A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada
2011-03-01
We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input we predict the x- and pT-dependence of all twist-2 T-even TMDs.
A new approach for determining fully empirical altimeter wind speed model functions
NASA Astrophysics Data System (ADS)
Freilich, Michael H.; Challenor, Peter G.
1994-12-01
A statistical technique is developed for determining fully empirical model functions relating altimeter radar backscatter (σ0) measurements to near-surface neutral stability wind speed. By assuming that σ0 varies monotonically and uniquely with wind speed, the method requires knowledge only of the separate, rather than joint, distribution functions of σ0 and wind speed. Analytic simplifications result from using a Weibull distribution to approximate the global ocean wind speed distribution; several different wind data sets are used to demonstrate the validity of the Weibull approximation. The technique has been applied to 1 year of Geosat data. Validation of the new and historical model functions using an independent buoy data set demonstrates that the present model function not only has small overall bias and RMS errors, but yields smaller systematic error trends with wind speed and pseudowave age than previously published models. The present analysis suggests that generally accurate altimeter model functions can be derived without the use of collocated measurements, nor is additional significant wave height information measured by the altimeter necessary.
NASA Astrophysics Data System (ADS)
Choudhury, Pallabee; Chopra, Sumer; Roy, Ketan Singha; Sharma, Jyoti
2016-04-01
In this study, ground motions are estimated for scenario earthquakes of Mw 6.0, 6.5 and 7.0 at 17 sites in Gujarat region using Empirical Green's function technique. The Dholavira earthquake of June 19, 2012 (Mw 5.1) which occurred in the Kachchh region of Gujarat is considered as an element earthquake. We estimated the focal mechanism and source parameters of the element earthquake using standard methodologies. The moment tensor inversion technique is used to determine the fault plane solution (strike = 8°, dip = 51°, and rake = - 7°). The seismic moment and the stress drop are 5.6 × 1016 Nm and 120 bars respectively. The validity of the approach was tested for a smaller earthquake. A few possible directivity scenarios were also tested to find out the effect of directivity on the level of ground motions. Our study reveals that source complexities and site effects play a very important role in deciding the level of ground motions at a site which are difficult to model by GMPEs. Our results shed new light on the expected accelerations in the region and suggest that the Kachchh region can expect maximum acceleration of around 500 cm/s2 at few sites near source and around 200 cm/s2 at most of the sites located within 50 km from the epicentre for a Mw 7.0 earthquake. The estimated ground accelerations can be used by the administrators and planners for providing a guiding framework to undertake mitigation investments and activities in the region.
A Hybrid Approach to Structure and Function Modeling of G Protein-Coupled Receptors.
Latek, Dorota; Bajda, Marek; Filipek, Sławomir
2016-04-25
The recent GPCR Dock 2013 assessment of serotonin receptor 5-HT1B and 5-HT2B, and smoothened receptor SMO targets, exposed the strengths and weaknesses of the currently used computational approaches. The test cases of 5-HT1B and 5-HT2B demonstrated that both the receptor structure and the ligand binding mode can be predicted with the atomic-detail accuracy, as long as the target-template sequence similarity is relatively high. On the other hand, the observation of a low target-template sequence similarity, e.g., between SMO from the frizzled GPCR family and members of the rhodopsin family, hampers the GPCR structure prediction and ligand docking. Indeed, in GPCR Dock 2013, accurate prediction of the SMO target was still beyond the capabilities of most research groups. Another bottleneck in the current GPCR research, as demonstrated by the 5-HT2B target, is the reliable prediction of global conformational changes induced by activation of GPCRs. In this work, we report details of our protocol used during GPCR Dock 2013. Our structure prediction and ligand docking protocol was especially successful in the case of 5-HT1B and 5-HT2B-ergotamine complexes for which we provide one of the most accurate predictions. In addition to a description of the GPCR Dock 2013 results, we propose a novel hybrid computational methodology to improve GPCR structure and function prediction. This computational methodology employs two separate rankings for filtering GPCR models. The first ranking is ligand-based while the second is based on the scoring scheme of the recently published BCL method. In this work, we prove that the use of knowledge-based potentials implemented in BCL is an efficient way to cope with major bottlenecks in the GPCR structure prediction. Thereby, we also demonstrate that the knowledge-based potentials for membrane proteins were significantly improved, because of the recent surge in available experimental structures. PMID:26978043
Liu, Yang; Magnus, Brooke E; Thissen, David
2016-06-01
Differential item functioning (DIF), referring to between-group variation in item characteristics above and beyond the group-level disparity in the latent variable of interest, has long been regarded as an important item-level diagnostic. The presence of DIF impairs the fit of the single-group item response model being used, and calls for either model modification or item deletion in practice, depending on the mode of analysis. Methods for testing DIF with continuous covariates, rather than categorical grouping variables, have been developed; however, they are restrictive in parametric forms, and thus are not sufficiently flexible to describe complex interaction among latent variables and covariates. In the current study, we formulate the probability of endorsing each test item as a general bivariate function of a unidimensional latent trait and a single covariate, which is then approximated by a two-dimensional smoothing spline. The accuracy and precision of the proposed procedure is evaluated via Monte Carlo simulations. If anchor items are available, we proposed an extended model that simultaneously estimates item characteristic functions (ICFs) for anchor items, ICFs conditional on the covariate for non-anchor items, and the latent variable density conditional on the covariate-all using regression splines. A permutation DIF test is developed, and its performance is compared to the conventional parametric approach in a simulation study. We also illustrate the proposed semiparametric DIF testing procedure with an empirical example. PMID:26155757
Optimization of global model composed of radial basis functions using the term-ranking approach
Cai, Peng; Tao, Chao Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize the global model composed of radial basis functions to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to real voice signal shows that the optimized global model can capture more predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
NASA Astrophysics Data System (ADS)
Bodegom, P. V.
2015-12-01
In recent years a number of approaches have been developed to provide alternatives to the use of plant functional types (PFTs) with constant vegetation characteristics for simulating vegetation responses to climate changes. In this presentation, an overview of those approaches and their challenges is given. Some new approaches aim at removing PFTs altogether by determining the combination of vegetation characteristics that would fit local conditions best. Others describe the variation in traits within PFTs as a function of environmental drivers, based on community assembly principles. In the first approach, after an equilibrium has been established, vegetation composition and its functional attributes can change by allowing the emergence of a new type that is more fit. In the latter case, changes in vegetation attributes in space and time as assumed to be the result intraspecific variation, genetic adaptation and species turnover, without quantifying their respective importance. Hence, it is assumed that -by whatever mechanism- the community as a whole responds without major time lags to changes in environmental drivers. Recently, we showed that intraspecific variation is highly species- and trait-specific and that none of the current hypotheses on drivers of this variation seems to hold. Also genetic adaptation varies considerably among species and it is uncertain whether it will be fast enough to cope with climate change. Species turnover within a community is especially fast in herbaceous communities, but much slower in forest communities. Hence, it seems that assumptions made may not hold for forested ecosystems, but solutions to deal with this do not yet exist. Even despite the fact that responsiveness of vegetation to environmental change may be overestimated, we showed that -upon implementation of trait-environment relationships- major changes in global vegetation distribution are projected, to similar extents as to those without such responsiveness.
Operator function modeling: An approach to cognitive task analysis in supervisory control systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1987-01-01
In a study of models of operators in complex, automated space systems, an operator function model (OFM) methodology was extended to represent cognitive as well as manual operator activities. Development continued on a software tool called OFMdraw, which facilitates construction of an OFM by permitting construction of a heterarchic network of nodes and arcs. Emphasis was placed on development of OFMspert, an expert system designed both to model human operation and to assist real human operators. The system uses a blackboard method of problem solving to make an on-line representation of operator intentions, called ACTIN (actions interpreter).
Stress and Resilience in Functional Somatic Syndromes – A Structural Equation Modeling Approach
Fischer, Susanne; Lemmer, Gunnar; Gollwitzer, Mario; Nater, Urs M.
2014-01-01
Background Stress has been suggested to play a role in the development and perpetuation of functional somatic syndromes. The mechanisms of how this might occur are not clear. Purpose We propose a multi-dimensional stress model which posits that childhood trauma increases adult stress reactivity (i.e., an individual's tendency to respond strongly to stressors) and reduces resilience (e.g., the belief in one's competence). This in turn facilitates the manifestation of functional somatic syndromes via chronic stress. We tested this model cross-sectionally and prospectively. Methods Young adults participated in a web survey at two time points. Structural equation modeling was used to test our model. The final sample consisted of 3′054 participants, and 429 of these participated in the follow-up survey. Results Our proposed model fit the data in the cross-sectional (χ2(21) = 48.808, p<.001, CFI = .995, TLI = .992, RMSEA = .021, 90% CI [.013.029]) and prospective analyses (χ2(21) = 32.675, p<.05, CFI = .982, TLI = .969, RMSEA = .036, 90% CI [.001.059]). Discussion Our findings have several clinical implications, suggesting a role for stress management training in the prevention and treatment of functional somatic syndromes. PMID:25396736
Vitkin, Edward; Shlomi, Tomer
2012-01-01
Genome-scale metabolic network reconstructions are considered a key step in quantifying the genotype-phenotype relationship. We present a novel gap-filling approach, MetabolIc Reconstruction via functionAl GEnomics (MIRAGE), which identifies missing network reactions by integrating metabolic flux analysis and functional genomics data. MIRAGE's performance is demonstrated on the reconstruction of metabolic network models of E. coli and Synechocystis sp. and validated via existing networks for these species. Then, it is applied to reconstruct genome-scale metabolic network models for 36 sequenced cyanobacteria amenable for constraint-based modeling analysis and specifically for metabolic engineering. The reconstructed network models are supplied via standard SBML files. PMID:23194418
NASA Astrophysics Data System (ADS)
Benedetti, Dario; Lahoche, Vincent
2016-05-01
We develop the functional renormalization group formalism for a tensorial group field theory with closure constraint, in the case of a just renormalizable model over U{(1)}\\otimes 6, with quartic interactions. The method allows us to obtain a closed but non-autonomous system of differential equations which describe the renormalization group flow of the couplings beyond perturbation theory. The explicit dependence of the beta functions on the running scale is due to the existence of an external scale in the model, the radius of {S}1≃ U(1). We study the occurrence of fixed points and their critical properties in two different approximate regimes, corresponding to the deep UV and deep IR. Besides confirming the asymptotic freedom of the model, we find also a non-trivial fixed point, with one relevant direction. Our results are qualitatively similar to those found previously for a rank-3 model without closure constraint, and it is thus tempting to speculate that the presence of a Wilson-Fisher-like fixed point is a general feature of asymptotically free tensorial group field theories.
de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo
2014-01-01
Online consumer behavior in general and online customer engagement with brands in particular, has become a major focus of research activity fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven and much debate about the concept of Customer Engagement and its related constructs remains existent in the literature. In this paper, we aim to propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behaviors using questionnaire data or studies investigating other types of human behaviors. The method we propose contains five main stages; symbolic regression analysis, graph building, community detection, evaluation of results and finally, investigation of directed cycles and common feedback loops. The ‘communities’ of questionnaire items that emerge from our community detection method form possible ‘functional constructs’ inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such ‘functional constructs’ suggesting the method proposed here could be adopted as a new data-driven way of human behavior modeling. PMID:25036766
de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo
2014-01-01
Online consumer behavior in general and online customer engagement with brands in particular, has become a major focus of research activity fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven and much debate about the concept of Customer Engagement and its related constructs remains existent in the literature. In this paper, we aim to propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behaviors using questionnaire data or studies investigating other types of human behaviors. The method we propose contains five main stages; symbolic regression analysis, graph building, community detection, evaluation of results and finally, investigation of directed cycles and common feedback loops. The 'communities' of questionnaire items that emerge from our community detection method form possible 'functional constructs' inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such 'functional constructs' suggesting the method proposed here could be adopted as a new data-driven way of human behavior modeling. PMID:25036766
NASA Astrophysics Data System (ADS)
Freire, Hermann; Corrêa, Eberth
2012-02-01
We apply a functional implementation of the field-theoretical renormalization group (RG) method up to two loops to the single-impurity Anderson model. To achieve this, we follow a RG strategy similar to that proposed by Vojta et al. (in Phys. Rev. Lett. 85:4940, 2000), which consists of defining a soft ultraviolet regulator in the space of Matsubara frequencies for the renormalized Green's function. Then we proceed to derive analytically and solve numerically integro-differential flow equations for the effective couplings and the quasiparticle weight of the present model, which fully treat the interplay of particle-particle and particle-hole parquet diagrams and the effect of the two-loop self-energy feedback into them. We show that our results correctly reproduce accurate numerical renormalization group data for weak to slightly moderate interactions. These results are in excellent agreement with other functional Wilsonian RG works available in the literature. Since the field-theoretical RG method turns out to be easier to implement at higher loops than the Wilsonian approach, higher-order calculations within the present approach could improve further the results for this model at stronger couplings. We argue that the present RG scheme could thus offer a possible alternative to other functional RG methods to describe electronic correlations within this model.
Tomellini, Massimo; Fanfoni, Massimo
2014-11-01
The statistical methods exploiting the "Correlation-Functions" or the "Differential-Critical-Region" are both suitable for describing phase transformation kinetics ruled by nucleation and growth. We present a critical analysis of these two approaches, with particular emphasis to transformations ruled by diffusional growth which cannot be described by the Kolmogorov-Johnson-Mehl-Avrami (KJMA) theory. In order to bridge the gap between these two methods, the conditional probability functions entering the "Differential-Critical-Region" approach are determined in terms of correlation functions. The formulation of these probabilities by means of cluster expansion is also derived, which improves the accuracy of the computation. The model is applied to 2D and 3D parabolic growths occurring at constant value of either actual or phantom-included nucleation rates. Computer simulations have been employed for corroborating the theoretical modeling. The contribution to the kinetics of phantom overgrowth is estimated and it is found to be of a few percent in the case of constant value of the actual nucleation rate. It is shown that for a parabolic growth law both approaches do not provide a closed-form solution of the kinetics. In this respect, the two methods are equivalent and the longstanding overgrowth phenomenon, which limits the KJMA theory, does not admit an exact analytical solution. PMID:25493802
Salazar, Ramon B. E-mail: hilatikh@purdue.edu; Appenzeller, Joerg; Ilatikhameneh, Hesameddin E-mail: hilatikh@purdue.edu; Rahman, Rajib; Klimeck, Gerhard
2015-10-28
A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This allows to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that allows gaining deep insights into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitatively manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model presents thus a practical complement to computationally expensive simulations such as the 3D NEGF approach.
NASA Astrophysics Data System (ADS)
Salazar, Ramon B.; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard; Appenzeller, Joerg
2015-10-01
A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This allows to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that allows gaining deep insights into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitatively manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model presents thus a practical complement to computationally expensive simulations such as the 3D NEGF approach.
Perveen, Nazia; Barot, Sébastien; Alvarez, Gaël; Klumpp, Katja; Martin, Raphael; Rapaport, Alain; Herfurth, Damien; Louault, Frédérique; Fontaine, Sébastien
2014-04-01
Integration of the priming effect (PE) in ecosystem models is crucial to better predict the consequences of global change on ecosystem carbon (C) dynamics and its feedbacks on climate. Over the last decade, many attempts have been made to model PE in soil. However, PE has not yet been incorporated into any ecosystem models. Here, we build plant/soil models to explore how PE and microbial diversity influence soil/plant interactions and ecosystem C and nitrogen (N) dynamics in response to global change (elevated CO2 and atmospheric N depositions). Our results show that plant persistence, soil organic matter (SOM) accumulation, and low N leaching in undisturbed ecosystems relies on a fine adjustment of microbial N mineralization to plant N uptake. This adjustment can be modeled in the SYMPHONY model by considering the destruction of SOM through PE, and the interactions between two microbial functional groups: SOM decomposers and SOM builders. After estimation of parameters, SYMPHONY provided realistic predictions on forage production, soil C storage and N leaching for a permanent grassland. Consistent with recent observations, SYMPHONY predicted a CO2 -induced modification of soil microbial communities leading to an intensification of SOM mineralization and a decrease in the soil C stock. SYMPHONY also indicated that atmospheric N deposition may promote SOM accumulation via changes in the structure and metabolic activities of microbial communities. Collectively, these results suggest that the PE and functional role of microbial diversity may be incorporated in ecosystem models with a few additional parameters, improving accuracy of predictions. PMID:24339186
2-D Modeling of Nanoscale MOSFETs: Non-Equilibrium Green's Function Approach
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan
2001-01-01
We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Electron-electron interaction is treated within Hartree approximation by solving NEGF and Poisson equations self-consistently. For the calculations presented here, parallelization is performed by distributing the solution of NEGF equations to various processors, energy wise. We present simulation of the "benchmark" MIT 25nm and 90nm MOSFETs and compare our results to those from the drift-diffusion simulator and the quantum-corrected results available. In the 25nm MOSFET, the channel length is less than ten times the electron wavelength, and the electron scattering time is comparable to its transit time. Our main results are: (1) Simulated drain subthreshold current characteristics are shown, where the potential profiles are calculated self-consistently by the corresponding simulation methods. The current predicted by our quantum simulation has smaller subthreshold slope of the Vg dependence which results in higher threshold voltage. (2) When gate oxide thickness is less than 2 nm, gate oxide leakage is a primary factor which determines off-current of a MOSFET (3) Using our 2-D NEGF simulator, we found several ways to drastically decrease oxide leakage current without compromising drive current. (4) Quantum mechanically calculated electron density is much smaller than the background doping density in the poly silicon gate region near oxide interface. This creates an additional effective gate voltage. Different ways to. include this effect approximately will be discussed.
Tabacchi, G; Hutter, J; Mundy, C
2005-04-07
A combined linear response--frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instaneous ionic configuration of the system. The method is free from empirical pair-potentials and the parameterization protocol involves only calculations on properly chosen subsystems. They apply this method to a series of alkali halides in different physical phases and are able to reproduce experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online. PMID:24729671
NASA Astrophysics Data System (ADS)
Echavarria, E.; Tomiyama, T.; van Bussel, G. J. W.
2007-07-01
The objective of this on-going research is to develop a design methodology to increase the availability for offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on these existing functionalities provided by the components. The possible solutions can range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, changing parameters or control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR), and on a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce stoppage rate, failure events, maintenance visits, and to maintain energy output possibly at reduced rate until the next scheduled maintenance.
NASA Astrophysics Data System (ADS)
Grewe, V.; Frömming, C.; Matthes, S.; Brinkop, S.; Ponater, M.; Dietmüller, S.; Jöckel, P.; Garny, H.; Tsati, E.; Dahlmann, K.; Søvde, O. A.; Fuglestvedt, J.; Berntsen, T. K.; Shine, K. P.; Irvine, E. A.; Champougny, T.; Hullah, P.
2014-01-01
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. This modelling chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry-climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling
NASA Astrophysics Data System (ADS)
Grewe, V.; Frömming, C.; Matthes, S.; Brinkop, S.; Ponater, M.; Dietmüller, S.; Jöckel, P.; Garny, H.; Tsati, E.; Søvde, O. A.; Fuglestvedt, J.; Berntsen, T. K.; Shine, K. P.; Irvine, E. A.; Champougny, T.; Hullah, P.
2013-08-01
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, forming contrails. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time, i.e. emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. It forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region). This is performed with the chemistry-climate model EMAC, extended by the two submodels AIRTRAC 1.0 and CONTRAIL 1.0, which describe the contribution of emissions to the composition of the atmosphere and the contrail formation. Numerous time-regions are efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the climate impact of the time-region emission. The result of the modelling chain comprises a four dimensional dataset in space and time, which we call climate cost functions, and which describe at each grid point and each point in time, the climate impact of an emission. In a third step, these climate cost functions are used in a traffic simulator (SAAM), coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to
Turan, Başak; Selçuki, Cenk
2014-09-01
Amino acids are constituents of proteins and enzymes which take part almost in all metabolic reactions. Glutamic acid, with an ability to form a negatively charged side chain, plays a major role in intra and intermolecular interactions of proteins, peptides, and enzymes. An exhaustive conformational analysis has been performed for all eight possible forms at B3LYP/cc-pVTZ level. All possible neutral, zwitterionic, protonated, and deprotonated forms of glutamic acid structures have been investigated in solution by using polarizable continuum model mimicking water as the solvent. Nine families based on the dihedral angles have been classified for eight glutamic acid forms. The electrostatic effects included in the solvent model usually stabilize the charged forms more. However, the stability of the zwitterionic form has been underestimated due to the lack of hydrogen bonding between the solute and solvent; therefore, it is observed that compact neutral glutamic acid structures are more stable in solution than they are in vacuum. Our calculations have shown that among all eight possible forms, some are not stable in solution and are immediately converted to other more stable forms. Comparison of isoelectronic glutamic acid forms indicated that one of the structures among possible zwitterionic and anionic forms may dominate over the other possible forms. Additional investigations using explicit solvent models are necessary to determine the stability of charged forms of glutamic acid in solution as our results clearly indicate that hydrogen bonding and its type have a major role in the structure and energy of conformers. PMID:25135067
Modelling approaches for angiogenesis.
Taraboletti, G; Giavazzi, R
2004-04-01
The development of a functional vasculature within a tumour is a requisite for its growth and progression. This fact has led to the design of therapies directed toward the tumour vasculature, aiming either to prevent the formation of new vessels (anti-angiogenic) or to damage existing vessels (vascular targeting). The development of agents with different mechanisms of action requires powerful preclinical models for the analysis and optimization of these therapies. This review concerns 'classical' assays of angiogenesis in vitro and in vivo, recent approaches to target identification (analysis of gene and protein expression), and the study of morphological and functional changes in the vasculature in vivo (imaging techniques). It mainly describes assays designed for anti-angiogenic compounds, indicating, where possible, their application to the study of vascular-targeting agents. PMID:15120043
2015-01-01
Escherichia coli thymidylate synthase (TS) is an enzyme that is indispensable to DNA synthesis and cell division, as it provides the only de novo source of dTMP by catalyzing the reductive methylation of dUMP, thus making it a key target for chemotherapeutic agents. High resolution X-ray crystallographic structures are available for TS and, owing to its relatively small size, successful experimental mutagenesis studies have been conducted on the enzyme. In this study, an in silico mutagenesis technique is used to investigate the effects of single amino acid substitutions in TS on enzymatic activity, one that employs the TS protein structure as well as a knowledge-based, four-body statistical potential. For every single residue TS variant, this approach yields both a global structural perturbation score and a set of local environmental perturbation scores that characterize the mutated position as well as all structurally neighboring residues. Global scores for the TS variants are capable of uniquely characterizing groups of residue positions in the enzyme according to their physicochemical, functional, or structural properties. Additionally, these global scores elucidate a statistically significant structure–function relationship among a collection of 372 single residue TS variants whose activity levels have been experimentally determined. Predictive models of TS variant activity are subsequently trained on this dataset of experimental mutants, whose respective feature vectors encode information regarding the mutated position as well as its six nearest residue neighbors in the TS structure, including their environmental perturbation scores. PMID:25648456
[Partial lease squares approach to functional analysis].
Preda, C
2006-01-01
We extend the partial least squares (PLS) approach to functional data represented in our models by sample paths of stochastic process with continuous time. Due to the infinite dimension, when functional data are used as a predictor for linear regression and classification models, the estimation problem is an ill-posed one. In this context, PLS offers a simple and efficient alternative to the methods based on the principal components of the stochastic process. We compare the results given by the PLS approach and other linear models using several datasets from economy, industry and medical fields. PMID:17124795
NASA Astrophysics Data System (ADS)
Maitra, Subrata; Banerjee, Debamalya
2010-10-01
Present article is based on application of the product quality and improvement of design related with the nature of failure of machineries and plant operational problems of an industrial blower fan Company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is also being paid to the blower fans which have been ordered directly by the customer on the basis of installed capacity of air to be provided by the fan. Application of quality function deployment is primarily a customer oriented approach. Proposed model of QFD integrated with AHP to select and rank the decision criterions on the commercial and technical factors and the measurement of the decision parameters for selection of best product in the compettitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of the group of experts' opinion by pairwise comparison of the customer's and ergonomy based technical design requirements. The steps invoved in implementation of the QFD—AHP and selection of weighted criterion may be helpful for all similar purpose industries maintaining cost and utility for competitive product.
Various modeling approaches have been developed for metal binding on humic substances. However, most of these models are still curve-fitting exercises-- the resulting set of parameters such as affinity constants (or the distribution of them) is found to depend on pH, ionic stren...
Tang, Jau
1996-02-01
As an alternative to better physical explanations of the mechanisms of quantum interference and the origins of uncertainty broadening, a linear hopping model is proposed with ``color-varying`` dynamics to reflect fast exchange between time-reversed states. Intricate relations between this model, particle-wave dualism, and relativity are discussed. The wave function is shown to possess dual characteristics of a stable, localized ``soliton-like`` de Broglie wavelet and a delocalized, interfering Schroedinger carrier wave function.
NASA Astrophysics Data System (ADS)
Pavlick, R.; Drewry, D. T.; Bohn, K.; Reu, B.; Kleidon, A.
2013-06-01
Terrestrial biosphere models typically abstract the immense diversity of vegetation forms and functioning into a relatively small set of predefined semi-empirical plant functional types (PFTs). There is growing evidence, however, from the field ecology community as well as from modelling studies that current PFT schemes may not adequately represent the observed variations in plant functional traits and their effect on ecosystem functioning. In this paper, we introduce the Jena Diversity-Dynamic Global Vegetation Model (JeDi-DGVM) as a new approach to terrestrial biosphere modelling with a richer representation of functional diversity than traditional modelling approaches based on a small number of fixed PFTs. JeDi-DGVM simulates the performance of a large number of randomly generated plant growth strategies, each defined by a set of 15 trait parameters which characterize various aspects of plant functioning including carbon allocation, ecophysiology and phenology. Each trait parameter is involved in one or more functional trade-offs. These trade-offs ultimately determine whether a strategy is able to survive under the climatic conditions in a given model grid cell and its performance relative to the other strategies. The biogeochemical fluxes and land surface properties of the individual strategies are aggregated to the grid-cell scale using a mass-based weighting scheme. We evaluate the simulated global biogeochemical patterns against a variety of field and satellite-based observations following a protocol established by the Carbon-Land Model Intercomparison Project. The land surface fluxes and vegetation structural properties are reasonably well simulated by JeDi-DGVM, and compare favourably with other state-of-the-art global vegetation models. We also evaluate the simulated patterns of functional diversity and the sensitivity of the JeDi-DGVM modelling approach to the number of sampled strategies. Altogether, the results demonstrate the parsimonious and flexible
An evolutionary approach to Function
2010-01-01
Background Understanding the distinction between function and role is vexing and difficult. While it appears to be useful, in practice this distinction is hard to apply, particularly within biology. Results I take an evolutionary approach, considering a series of examples, to develop and generate definitions for these concepts. I test them in practice against the Ontology for Biomedical Investigations (OBI). Finally, I give an axiomatisation and discuss methods for applying these definitions in practice. Conclusions The definitions in this paper are applicable, formalizing current practice. As such, they make a significant contribution to the use of these concepts within biomedical ontologies. PMID:20626924
Hadjipantelis, P. Z.; Aston, J. A. D.; Müller, H. G.; Evans, J. P.
2015-01-01
Mandarin Chinese is characterized by being a tonal language; the pitch (or F 0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F 0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online. PMID:26692591
ERIC Educational Resources Information Center
Lloyd, Rebecca
2015-01-01
Background: Physical Education (PE) programmes are expanding to include alternative activities yet what is missing is a conceptual model that facilitates how the learning process may be understood and assessed beyond the dominant sport-technique paradigm. Purpose: The purpose of this article was to feature the emergence of a Function-to-Flow (F2F)…
Pe'er, Guy; Henle, Klaus; Dislich, Claudia; Frank, Karin
2011-01-01
Landscape connectivity is a key factor determining the viability of populations in fragmented landscapes. Predicting ‘functional connectivity’, namely whether a patch or a landscape functions as connected from the perspective of a focal species, poses various challenges. First, empirical data on the movement behaviour of species is often scarce. Second, animal-landscape interactions are bound to yield complex patterns. Lastly, functional connectivity involves various components that are rarely assessed separately. We introduce the spatially explicit, individual-based model FunCon as means to distinguish between components of functional connectivity and to assess how each of them affects the sensitivity of species and communities to landscape structures. We then present the results of exploratory simulations over six landscapes of different fragmentation levels and across a range of hypothetical bird species that differ in their response to habitat edges. i) Our results demonstrate that estimations of functional connectivity depend not only on the response of species to edges (avoidance versus penetration into the matrix), the movement mode investigated (home range movements versus dispersal), and the way in which the matrix is being crossed (random walk versus gap crossing), but also on the choice of connectivity measure (in this case, the model output examined). ii) We further show a strong effect of the mortality scenario applied, indicating that movement decisions that do not fully match the mortality risks are likely to reduce connectivity and enhance sensitivity to fragmentation. iii) Despite these complexities, some consistent patterns emerged. For instance, the ranking order of landscapes in terms of functional connectivity was mostly consistent across the entire range of hypothetical species, indicating that simple landscape indices can potentially serve as valuable surrogates for functional connectivity. Yet such simplifications must be carefully
NASA Astrophysics Data System (ADS)
Pavlick, R.; Drewry, D. T.; Bohn, K.; Reu, B.; Kleidon, A.
2012-04-01
Dynamic Global Vegetation Models (DGVMs) typically abstract the immense diversity of vegetation forms and functioning into a relatively small set of predefined semi-empirical Plant Functional Types (PFTs). There is growing evidence, however, from the field ecology community as well as from modelling studies that current PFT schemes may not adequately represent the observed variations in plant functional traits and their effect on ecosystem functioning. In this paper, we introduce the Jena Diversity DGVM (JeDi-DGVM) as a new approach to global vegetation modelling with a richer representation of functional diversity than traditional modelling approaches based on a small number of fixed PFTs. JeDi-DGVM simulates the performance of a large number of randomly-generated plant growth strategies (PGSs), each defined by a set of 15 trait parameters which characterize various aspects of plant functioning including carbon allocation, ecophysiology and phenology. Each trait parameter is involved in one or more functional trade-offs. These trade-offs ultimately determine whether a PGS is able to survive under the climatic conditions in a given model grid cell and its performance relative to the other PGSs. The biogeochemical fluxes and land-surface properties of the individual PGSs are aggregated to the grid cell scale using a mass-based weighting scheme. Simulated global biogeochemical and biogeographical patterns are evaluated against a variety of field and satellite-based observations following a protocol established by the Carbon-Land Model Intercomparison Project. The land surface fluxes and vegetation structural properties are reasonably well simulated by JeDi-DGVM, and compare favorably with other state-of-the-art terrestrial biosphere models. This is despite the parameters describing the ecophysiological functioning and allometry of JeDi-DGVM plants evolving as a function of vegetation survival in a given climate, as opposed to typical approaches that fix land surface
Shakouri, Payman; Ordys, Andrzej; Askari, Mohamad R
2012-09-01
In the design of adaptive cruise control (ACC) system two separate control loops - an outer loop to maintain the safe distance from the vehicle traveling in front and an inner loop to control the brake pedal and throttle opening position - are commonly used. In this paper a different approach is proposed in which a single control loop is utilized. The objective of the distance tracking is incorporated into the single nonlinear model predictive control (NMPC) by extending the original linear time invariant (LTI) models obtained by linearizing the nonlinear dynamic model of the vehicle. This is achieved by introducing the additional states corresponding to the relative distance between leading and following vehicles, and also the velocity of the leading vehicle. Control of the brake and throttle position is implemented by taking the state-dependent approach. The model demonstrates to be more effective in tracking the speed and distance by eliminating the necessity of switching between the two controllers. It also offers smooth variation in brake and throttle controlling signal which subsequently results in a more uniform acceleration of the vehicle. The results of proposed method are compared with other ACC systems using two separate control loops. Furthermore, an ACC simulation results using a stop&go scenario are shown, demonstrating a better fulfillment of the design requirements. PMID:22704362
Modeling approaches for active systems
NASA Astrophysics Data System (ADS)
Herold, Sven; Atzrodt, Heiko; Mayer, Dirk; Thomaier, Martin
2006-03-01
To solve a wide range of vibration problems with the active structures technology, different simulation approaches for several models are needed. The selection of an appropriate modeling strategy is depending, amongst others, on the frequency range, the modal density and the control target. An active system consists of several components: the mechanical structure, at least one sensor and actuator, signal conditioning electronics and the controller. For each individual part of the active system the simulation approaches can be different. To integrate the several modeling approaches into an active system simulation and to ensure a highly efficient and accurate calculation, all sub models must harmonize. For this purpose, structural models considered in this article are modal state-space formulations for the lower frequency range and transfer function based models for the higher frequency range. The modal state-space formulations are derived from finite element models and/or experimental modal analyses. Consequently, the structure models which are based on transfer functions are directly derived from measurements. The transfer functions are identified with the Steiglitz-McBride iteration method. To convert them from the z-domain to the s-domain a least squares solution is implemented. An analytical approach is used to derive models of active interfaces. These models are transferred into impedance formulations. To couple mechanical and electrical sub-systems with the active materials, the concept of impedance modeling was successfully tested. The impedance models are enhanced by adapting them to adequate measurements. The controller design strongly depends on the frequency range and the number of modes to be controlled. To control systems with a small number of modes, techniques such as active damping or independent modal space control may be used, whereas in the case of systems with a large number of modes or with modes that are not well separated, other control
NASA Astrophysics Data System (ADS)
Fakhri, H.; Dehghani, A.; Mojaveri, B.
Using second-order differential operators as a realization of the su(1,1) Lie algebra by the associated Laguerre functions, it is shown that the quantum states of the Calogero-Sutherland, half-oscillator and radial part of a 3D harmonic oscillator constitute the unitary representations for the same algebra. This su(1,1) Lie algebra symmetry leads to derivation of the Barut-Girardello and Klauder-Perelomov coherent states for those models. The explicit compact forms of these coherent states are calculated. Also, to realize the resolution of the identity, their corresponding positive definite measures on the complex plane are obtained in terms of the known functions.
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood, Palm likelihood usually suffer from the loss of information due to the ignorance of correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore the estimating function approaches for fitting spatial point process models. These approaches, which are based on the asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estmating function approaches are good alternatives to balance the trade-off between computation complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likeli-hood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting
NASA Astrophysics Data System (ADS)
Martín-Ruiz, A.; Cambiaso, M.; Urrutia, L. F.
2016-02-01
The Green's function method is used to analyze the boundary effects produced by a Chern-Simons extension to electrodynamics. We consider the electromagnetic field coupled to a θ term that is piecewise constant in different regions of space, separated by a common interface Σ , the θ boundary, model which we will refer to as θ electrodynamics. This model provides a correct low-energy effective action for describing topological insulators. Features arising due to the presence of the boundary, such as magnetoelectric effects, are already known in Chern-Simons extended electrodynamics, and solutions for some experimental setups have been found with a specific configuration of sources. In this work we construct the static Green's function in θ electrodynamics for different geometrical configurations of the θ boundary, namely, planar, spherical and cylindrical θ -interfaces. Also, we adapt the standard Green's theorem to include the effects of the θ boundary. These are the most important results of our work, since they allow one to obtain the corresponding static electric and magnetic fields for arbitrary sources and arbitrary boundary conditions in the given geometries. Also, the method provides a well-defined starting point for either analytical or numerical approximations in the cases where the exact analytical calculations are not possible. Explicit solutions for simple cases in each of the aforementioned geometries for θ boundaries are provided. On the one hand, the adapted Green's theorem is illustrated by studying the problem of a pointlike electric charge interacting with a planar topological insulator with prescribed boundary conditions. On the other hand, we calculate the electric and magnetic static fields produced by the following sources: (i) a pointlike electric charge near a spherical θ boundary, (ii) an infinitely straight current-carrying wire near a cylindrical θ boundary and (iii) an infinitely straight uniformly charged wire near a
Arooj, Mahreen; Thangapandian, Sundarapandian; John, Shalini; Hwang, Swan; Park, Jong Keun; Lee, Keun Woo
2011-01-01
Human chymase is a very important target for the treatment of cardiovascular diseases. Using a series of theoretical methods like pharmacophore modeling, database screening, molecular docking and Density Functional Theory (DFT) calculations, an investigation for identification of novel chymase inhibitors, and to specify the key factors crucial for the binding and interaction between chymase and inhibitors is performed. A highly correlating (r = 0.942) pharmacophore model (Hypo1) with two hydrogen bond acceptors, and three hydrophobic aromatic features is generated. After successfully validating “Hypo1”, it is further applied in database screening. Hit compounds are subjected to various drug-like filtrations and molecular docking studies. Finally, three structurally diverse compounds with high GOLD fitness scores and interactions with key active site amino acids are identified as potent chymase hits. Moreover, DFT study is performed which confirms very clear trends between electronic properties and inhibitory activity (IC50) data thus successfully validating “Hypo1” by DFT method. Therefore, this research exertion can be helpful in the development of new potent hits for chymase. In addition, the combinational use of docking, orbital energies and molecular electrostatic potential analysis is also demonstrated as a good endeavor to gain an insight into the interaction between chymase and inhibitors. PMID:22272131
NASA Astrophysics Data System (ADS)
Hibbard, Bill
2012-05-01
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
Kavitha, Rengarajan; Karunagaran, Subramanian; Chandrabose, Subramaniam Subhash; Lee, Keun Woo; Meganathan, Chandrasekaran
2015-12-01
Fructose catabolism starts with phosphorylation of d-fructose to fructose 1-phosphate, which is performed by ketohexokinase (KHK). Fructose metabolism may be the key to understand the long-term consumption of fructose in human's obesity, diabetes and metabolic states in western populations. The inhibition of KHK has medicinally potential roles in fructose metabolism and the metabolic syndrome. To identify the essential chemical features for KHK inhibition, a three-dimensional (3D) chemical-feature-based QSAR pharmacophore model was developed for the first time by using Discovery Studio v2.5 (DS). The best pharmacophore hypothesis (Hypo1) consisting two hydrogen bond donor, two hydrophobic features and has exhibited high correlation co-efficient (0.97), cost difference (76.1) and low RMS (0.66) value. The robustness and predictability of Hypo1 was validated by fisher's randomization method, test set, and the decoy set. Subsequently, chemical databases like NCI, Chembridge and Maybridge were screened for validated Hypo1. The screened compounds were further analyzed by applying drug-like filters such as Lipinski's rule of five, ADME properties, and molecular docking studies. Further, the highest occupied molecular orbital, lowest unoccupied molecular orbital and energy gap values were calculated for the hits compounds using density functional theory. Finally, 3 hit compounds were selected based on their good molecular interactions with key amino acids in the KHK active site, GOLD fitness score, and lowest energy gaps. PMID:26521124
Menouar, Salah; Maamache, Mustapha; Choi, Jeong Ryeol
2010-08-15
The quantum states of time-dependent coupled oscillator model for charged particles subjected to variable magnetic field are investigated using the invariant operator methods. To do this, we have taken advantage of an alternative method, so-called unitary transformation approach, available in the framework of quantum mechanics, as well as a generalized canonical transformation method in the classical regime. The transformed quantum Hamiltonian is obtained using suitable unitary operators and is represented in terms of two independent harmonic oscillators which have the same frequencies as that of the classically transformed one. Starting from the wave functions in the transformed system, we have derived the full wave functions in the original system with the help of the unitary operators. One can easily take a complete description of how the charged particle behaves under the given Hamiltonian by taking advantage of these analytical wave functions.
Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2013-01-01
A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
Meson wave function from holographic approaches
Vega, Alfredo; Schmidt, Ivan; Branz, Tanja; Gutsche, Thomas; Lyubovitskij, Valery E.
2010-08-04
We discuss the light-front wave function for the valence quark state of mesons using the AdS/CFT correspondence. We consider two kinds of wave functions obtained in different holographic Soft-Wall approaches.
Suhendi, Endi; Syariati, Rifki; Noor, Fatimah A.; Khairurrijal; Kurniasih, Neny
2014-03-24
We modeled a tunneling current in a p-n junction based on armchair graphene nanoribbons (AGNRs) by using an Airy function approach (AFA) and a transfer matrix method (TMM). We used β-type AGNRs, in which its band gap energy and electron effective mass depends on its width as given by the extended Huckel theory. It was shown that the tunneling currents evaluated by employing the AFA are the same as those obtained under the TMM. Moreover, the calculated tunneling current was proportional to the voltage bias and inversely with temperature.
Approaches for modeling magnetic nanoparticle dynamics
Reeves, Daniel B; Weaver, John B
2014-01-01
Magnetic nanoparticles are useful biological probes as well as therapeutic agents. There have been several approaches used to model nanoparticle magnetization dynamics for both Brownian as well as Néel rotation. The magnetizations are often of interest and can be compared with experimental results. Here we summarize these approaches including the Stoner-Wohlfarth approach, and stochastic approaches including thermal fluctuations. Non-equilibrium related temperature effects can be described by a distribution function approach (Fokker-Planck equation) or a stochastic differential equation (Langevin equation). Approximate models in several regimes can be derived from these general approaches to simplify implementation. PMID:25271360
Pineda, Jaime A.; Friedrich, Elisabeth V. C.; LaMarca, Kristen
2014-01-01
Autism Spectrum Disorder (ASD) is an increasingly prevalent condition with core deficits in the social domain. Understanding its neuroetiology is critical to providing insights into the relationship between neuroanatomy, physiology and social behaviors, including imitation learning, language, empathy, theory of mind, and even self-awareness. Equally important is the need to find ways to arrest its increasing prevalence and to ameliorate its symptoms. In this review, we highlight neurofeedback studies as viable treatment options for high-functioning as well as low-functioning children with ASD. Lower-functioning groups have the greatest need for diagnosis and treatment, the greatest barrier to communication, and may experience the greatest benefit if a treatment can improve function or prevent progression of the disorder at an early stage. Therefore, we focus on neurofeedback interventions combined with other kinds of behavioral conditioning to induce neuroplastic changes that can address the full spectrum of the autism phenotype. PMID:25147521
Röling, Wilfred F. M.; van Bodegom, Peter M.
2014-01-01
Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches. PMID:24723922
Isojunno, Saana; Miller, Patrick J O
2016-01-01
The biological consequences of behavioral responses to anthropogenic noise depend on context. We explore the links between individual motivation, condition, and external constraints in a concept model and illustrate the use of motivational-behavioral states as a means to quantify the biologically relevant effects of tagging. Behavioral states were estimated from multiple streams of data in a hidden Markov model and used to test the change in foraging effort and the change in energetic success or cost given the effort. The presence of a tag boat elicited a short-term reduction in time spent in foraging states but not for proxies for success or cost within foraging states. PMID:26610996
NASA Astrophysics Data System (ADS)
Lee, Ji-Hwan; Tak, Youngjoo; Lee, Taehun; Soon, Aloysius
Ceria (CeO2-x) is widely studied as a choice electrolyte material for intermediate-temperature (~ 800 K) solid oxide fuel cells. At this temperature, maintaining its chemical stability and thermal-mechanical integrity of this oxide are of utmost importance. To understand their thermal-elastic properties, we firstly test the influence of various approximations to the density-functional theory (DFT) xc functionals on specific thermal-elastic properties of both CeO2 and Ce2O3. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation (GGA-PBE) with and without additional Hubbard U as applied to the 4 f electron of Ce, as well as the recently popularized hybrid functional due to Heyd-Scuseria-Ernzehof (HSE06). Next, we then couple this to a volume-dependent Debye-Grüneisen model to determine the thermodynamic quantities of ceria at arbitrary temperatures. We find an explicit description of the strong correlation (e.g. via the DFT + U and hybrid functional approach) is necessary to have a good agreement with experimental values, in contrast to the mean-field treatment in standard xc approximations (such as LDA or GGA-PBE). We acknowledge support from Samsung Research Funding Center of Samsung Electronics (SRFC-MA1501-03).
NASA Astrophysics Data System (ADS)
Schafroth, S.; Rodríguez-Núñez, J. J.
1999-08-01
We evaluate the one-particle and double-occupied Green functions for the Hubbard model at half-filling using the moment approach of Nolting [Z. Phys. 255, 25 (1972); Grund Kurs: Theoretische Physik. 7 Viel-Teilchen-Theorie (Verlag Zimmermann-Neufang, Ulmen, 1992)]. Our starting point is a self-energy, Σ(k-->,ω), which has a single pole, Ω(k-->), with spectral weight, α(k-->), and quasiparticle lifetime, γ(k-->) [J. J. Rodríguez-Núñez and S. Schafroth, J. Phys. Condens. Matter 10, L391 (1998); J. J. Rodríguez-Núñez, S. Schafroth, and H. Beck, Physica B (to be published); (unpublished)]. In our approach, Σ(k-->,ω) becomes the central feature of the many-body problem and due to three unknown k--> parameters we have to satisfy only the first three sum rules instead of four as in the canonical formulation of Nolting [Z. Phys. 255, 25 (1972); Grund Kurs: Theoretische Physik. 7 Viel-Teilchen-Theorie (Verlag Zimmermann-Neufang, Ulmen, 1992)]. This self-energy choice forces our system to be a non-Fermi liquid for any value of the interaction, since it does not vanish at zero frequency. The one-particle Green function, G(k-->,ω), shows the fingerprint of a strongly correlated system, i.e., a double peak structure in the one-particle spectral density, A(k-->,ω), vs ω for intermediate values of the interaction. Close to the Mott insulator-transition, A(k-->,ω) becomes a wide single peak, signaling the absence of quasiparticles. Similar behavior is observed for the real and imaginary parts of the self-energy, Σ(k-->,ω). The double-occupied Green function, G2(q-->,ω), has been obtained from G(k-->,ω) by means of the equation of motion. The relation between G2(q-->,ω) and the self-energy, Σ(k-->,ω), is formally established and numerical results for the spectral function of G2(k-->,ω), χ(2)(k-->,ω)≡-(1/π)δ-->0+Im[G2(k-->,ω)], are given. Our approach represents the simplest way to include (1) lifetime effects in the moment approach of Nolting, as
Li, Xin; Carravetta, Vincenzo; Li, Cui; Monti, Susanna; Rinkevicius, Zilvinas; Ågren, Hans
2016-07-12
Motivated by the growing importance of organometallic nanostructured materials and nanoparticles as microscopic devices for diagnostic and sensing applications, and by the recent considerable development in the simulation of such materials, we here choose a prototype system - para-nitroaniline (pNA) on gold nanoparticles - to demonstrate effective strategies for designing metal nanoparticles with organic conjugates from fundamental principles. We investigated the motion, adsorption mode, and physical chemistry properties of gold-pNA particles, increasing in size, through classical molecular dynamics (MD) simulations in connection with quantum chemistry (QC) calculations. We apply the quantum mechanics-capacitance molecular mechanics method [Z. Rinkevicius et al. J. Chem. Theory Comput. 2014, 10, 989] for calculations of the properties of the conjugate nanoparticles, where time dependent density functional theory is used for the QM part and a capacitance-polarizability parametrization of the MM part, where induced dipoles and charges by metallic charge transfer are considered. Dispersion and short-range repulsion forces are included as well. The scheme is applied to one- and two-photon absorption of gold-pNA clusters increasing in size toward the nanometer scale. Charge imaging of the surface introduces red-shifts both because of altered excitation energy dependence and variation of the relative intensity of the inherent states making up for the total band profile. For the smaller nanoparticles the difference in the crystal facets are important for the spectral outcome which is also influenced by the surrounding MM environment. PMID:27224666
I. M. Robertson; A. Beaudoin; J. Lambros
2004-01-05
OAK-135 Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). To account accurately for the complex effects that can occur during extreme and variable loading conditions, requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems. One aspect of the program will involve the dire ct
Holland, Alissa K; Carmona, Joseph E; Harrison, David W
2012-01-01
Regulatory control of emotions and expressive fluency (verbal or design) have historically been associated with the frontal lobes. Moreover, research has demonstrated the importance of cerebral laterality with a prominent role of the right frontal regions in the regulation of negative affect (anger, hostility) and in the fluent production of designs rather than verbal fluency. In the present research, participants identified with high and with low levels of hostility were evaluated on a design fluency test twice in one experimental session. Before the second administration of the fluency test, each participant underwent physiological (cold pressor) stress. It was hypothesized that diminished right frontal capacity in high-hostile men would be evident through lowered performance on this cognitive stressor. Convergent validity of the capacity model was supported wherein high-hostile men evidenced reduced delta magnitude over the right frontal region after exposure to the physiological stressor but failed to maintain consistent levels of right cerebral activation across conditions. The results suggest an inability for high-hostile men to maintain stable levels of cerebral activation after exposure to physiological and cognitive stress. Moreover, low-hostiles showed enhanced cognitive performance on the design task with lower levels of arousal (heightened delta magnitude). In contrast, reduced arousal yielded increased executive deficits in high-hostiles as evidenced through increased perseverative errors on the design fluency task. PMID:22091622
Borodovsky, M.
2013-04-11
Algorithmic methods for gene prediction have been developed and successfully applied to many different prokaryotic genome sequences. As the set of genes in a particular genome is not homogeneous with respect to DNA sequence composition features, the GeneMark.hmm program utilizes two Markov models representing distinct classes of protein coding genes denoted "typical" and "atypical". Atypical genes are those whose DNA features deviate significantly from those classified as typical and they represent approximately 10% of any given genome. In addition to the inherent interest of more accurately predicting genes, the atypical status of these genes may also reflect their separate evolutionary ancestry from other genes in that genome. We hypothesize that atypical genes are largely comprised of those genes that have been relatively recently acquired through lateral gene transfer (LGT). If so, what fraction of atypical genes are such bona fide LGTs? We have made atypical gene predictions for all fully completed prokaryotic genomes; we have been able to compare these results to other "surrogate" methods of LGT prediction.
Sun, Haitao; Ryno, Sean; Zhong, Cheng; Ravva, Mahesh Kumar; Sun, Zhenrong; Körzdörfer, Thomas; Brédas, Jean-Luc
2016-06-14
We propose a new methodology for the first-principles description of the electronic properties relevant for charge transport in organic molecular crystals. This methodology, which is based on the combination of a nonempirical, optimally tuned range-separated hybrid functional with the polarizable continuum model, is applied to a series of eight representative molecular semiconductor crystals. We show that it provides ionization energies, electron affinities, and transport gaps in very good agreement with experimental values, as well as with the results of many-body perturbation theory within the GW approximation at a fraction of the computational costs. Hence, this approach represents an easily applicable and computationally efficient tool to estimate the gas-to-crystal phase shifts of the frontier-orbital quasiparticle energies in organic electronic materials. PMID:27183355
Muccioli, Luca; D'Avino, Gabriele; Berardi, Roberto; Orlandi, Silvia; Pizzirusso, Antonio; Ricci, Matteo; Roscioni, Otello Maria; Zannoni, Claudio
2014-01-01
The molecular organization of functional organic materials is one of the research areas where the combination of theoretical modeling and experimental determinations is most fruitful. Here we present a brief summary of the simulation approaches used to investigate the inner structure of organic materials with semiconducting behavior, paying special attention to applications in organic photovoltaics and clarifying the often obscure jargon hindering the access of newcomers to the literature of the field. Special attention is paid to the choice of the computational "engine" (Monte Carlo or Molecular Dynamics) used to generate equilibrium configurations of the molecular system under investigation and, more importantly, to the choice of the chemical details in describing the molecular interactions. Recent literature dealing with the simulation of organic semiconductors is critically reviewed in order of increasing complexity of the system studied, from low molecular weight molecules to semiflexible polymers, including the challenging problem of determining the morphology of heterojunctions between two different materials. PMID:24322782
NASA Astrophysics Data System (ADS)
Lee, Taehun; Soon, Aloysius
2012-02-01
For high-temperature applications, the chemical stability, as well as the mechanical integrity of the oxide material used is of utmost importance. Solving these problems demands a thorough and fundamental understanding of their thermal-elastic properties. In this work, we report density-functional theory (DFT) calculations to investigate the influence of the xc functional on specific thermal-elastic properties of some common oxides CeO2, Cu2O, and MgO. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation due to Perdew, Burke, and Ernzerhof (GGA-PBE), as well as a recently popularized hybrid functional due to Heyd-Scuseria-Ernzehof (HSE06). In addition, we will also report DFT+U results where we introduce a Hubbard U term to the Cu 3d and the Ce 4f states. Upon obtaining the DFT total energies, we then couple this to a volume-dependent Debye-Gr"uneisen model [1] to determine the thermodynamic quantities of these oxides at arbitrary pressures and temperatures. We find an explicit description of the strong correlation (e.g. via the DFT+U approach and using HSE06) is necessary to have a good agreement with experimental values. [1] A. Otero-de-la-Roza, D. Abbasi-P'erez et al. Com. Phys. Com. 182 (2011) 2232
Functional Risk Modeling for Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed
2010-01-01
We introduce an approach to risk modeling that we call functional modeling , which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability the effects of diverse backup, which often exists when two or more independent elements are connected together, is properly accounted for.
An Inverse Approach for Elucidating Dendritic Function
Torben-Nielsen, Benjamin; Stiefel, Klaus M.
2010-01-01
We outline an inverse approach for investigating dendritic function–structure relationships by optimizing dendritic trees for a priori chosen computational functions. The inverse approach can be applied in two different ways. First, we can use it as a “hypothesis generator” in which we optimize dendrites for a function of general interest. The optimization yields an artificial dendrite that is subsequently compared to real neurons. This comparison potentially allows us to propose hypotheses about the function of real neurons. In this way, we investigated dendrites that optimally perform input-order detection. Second, we can use it as a “function confirmation” by optimizing dendrites for functions hypothesized to be performed by classes of neurons. If the optimized, artificial, dendrites resemble the dendrites of real neurons the artificial dendrites corroborate the hypothesized function of the real neuron. Moreover, properties of the artificial dendrites can lead to predictions about yet unmeasured properties. In this way, we investigated wide-field motion integration performed by the VS cells of the fly visual system. In outlining the inverse approach and two applications, we also elaborate on the nature of dendritic function. We furthermore discuss the role of optimality in assigning functions to dendrites and point out interesting future directions. PMID:21258425
Detection of Differential Item Functioning Using the Lasso Approach
ERIC Educational Resources Information Center
Magis, David; Tuerlinckx, Francis; De Boeck, Paul
2015-01-01
This article proposes a novel approach to detect differential item functioning (DIF) among dichotomously scored items. Unlike standard DIF methods that perform an item-by-item analysis, we propose the "LR lasso DIF method": logistic regression (LR) model is formulated for all item responses. The model contains item-specific intercepts,…
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
I. Robertson; A. Beaudoin; J. Lambros
2005-01-31
Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). To account accurately for the complex effects that can occur during extreme and variable loading conditions, requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems. One aspect of the program will involve the direct observation
Quadratic function approaching method for magnetotelluric soundingdata inversion
Liangjun, Yan; Wenbao, Hu; Zhang, Keni
2004-04-05
The quadratic function approaching method (QFAM) is introduced for magnetotelluric sounding (MT) data inversion. The method takes the advantage of that quadratic function has single extreme value, which avoids leading to an inversion solution for local minimum and ensures the solution for global minimization of an objective function. The method does not need calculation of sensitivity matrix and not require a strict initial earth model. Examples for synthetic data and field measurement data indicate that the proposed inversion method is effective.
New approaches to probing Minkowski functionals
NASA Astrophysics Data System (ADS)
Munshi, D.; Smidt, J.; Cooray, A.; Renzi, A.; Heavens, A.; Coles, P.
2013-10-01
We generalize the concept of the ordinary skew-spectrum to probe the effect of non-Gaussianity on the morphology of cosmic microwave background (CMB) maps in several domains: in real space (where they are commonly known as cumulant-correlators), and in harmonic and needlet bases. The essential aim is to retain more information than normally contained in these statistics, in order to assist in determining the source of any measured non-Gaussianity, in the same spirit as Munshi & Heavens skew-spectra were used to identify foreground contaminants to the CMB bispectrum in Planck data. Using a perturbative series to construct the Minkowski functionals (MFs), we provide a pseudo-C_ℓ based approach in both harmonic and needlet representations to estimate these spectra in the presence of a mask and inhomogeneous noise. Assuming homogeneous noise, we present approximate expressions for error covariance for the purpose of joint estimation of these spectra. We present specific results for four different models of primordial non-Gaussianity local, equilateral, orthogonal and enfolded models, as well as non-Gaussianity caused by unsubtracted point sources. Closed form results of next-order corrections to MFs too are obtained in terms of a quadruplet of kurt-spectra. We also use the method of modal decomposition of the bispectrum and trispectrum to reconstruct the MFs as an alternative method of reconstruction of morphological properties of CMB maps. Finally, we introduce the odd-parity skew-spectra to probe the odd-parity bispectrum and its impact on the morphology of the CMB sky. Although developed for the CMB, the generic results obtained here can be useful in other areas of cosmology.
Functional renormalization group approach to noncollinear magnets
NASA Astrophysics Data System (ADS)
Delamotte, B.; Dudka, M.; Mouhanna, D.; Yabunaka, S.
2016-02-01
A functional renormalization group approach to d -dimensional, N -component, noncollinear magnets is performed using various truncations of the effective action relevant to study their long distance behavior. With help of these truncations we study the existence of a stable fixed point for dimensions between d =2.8 and d =4 for various values of N focusing on the critical value Nc(d ) that, for a given dimension d , separates a first-order region for N
Modelling of graphene functionalization.
Pykal, Martin; Jurečka, Petr; Karlický, František; Otyepka, Michal
2016-03-01
Graphene has attracted great interest because of its remarkable properties and numerous potential applications. A comprehensive understanding of its structural and dynamic properties and those of its derivatives will be required to enable the design and optimization of sophisticated new nanodevices. While it is challenging to perform experimental studies on nanoscale systems at the atomistic level, this is the 'native' scale of computational chemistry. Consequently, computational methods are increasingly being used to complement experimental research in many areas of chemistry and nanotechnology. However, it is difficult for non-experts to get to grips with the plethora of computational tools that are available and their areas of application. This perspective briefly describes the available theoretical methods and models for simulating graphene functionalization based on quantum and classical mechanics. The benefits and drawbacks of the individual methods are discussed, and we provide numerous examples showing how computational methods have provided new insights into the physical and chemical features of complex systems including graphene and graphene derivatives. We believe that this overview will help non-expert readers to understand this field and its great potential. PMID:26323438
Modeling of functional brain imaging data
NASA Astrophysics Data System (ADS)
Horwitz, Barry
1999-03-01
The richness and complexity of data sets obtained from functional neuroimaging studies of human cognitive behavior, using techniques such as positron emission tomography and functional magnetic resonance imaging, have until recently not been exploited by computational neural modeling methods. In this article, following a brief introduction to functional neuroimaging methodology, two neural modeling approaches for use with functional brain imaging data are described. One, which uses structural equation modeling, examines the effective functional connections between various brain regions during specific cognitive tasks. The second employs large-scale neural modeling to relate functional neuroimaging signals in multiple, interconnected brain regions to the underlying neurobiological time-varying activities in each region. These two modeling procedures are illustrated using a visual processing paradigm.
Approaches toward functional fluid supported lipid bilayers
NASA Astrophysics Data System (ADS)
Weng, Kevin Chun-I.
Planar supported lipid bilayers (PSLBs) have attracted immense interest for their properties as model cell membranes and for potential applications in biosensors and lab-on-a-chip devices. Our study covers three aspects of the construction, characterization, and application of functional PSLBs. First, a combination of micro-fabrication, the Langmuir-Blodgett (LB) technique, and fusion of extruded small unilamellar vesicle (E-SUVs) in sequence was used to create polymer-cushioned PSLBs in a microarray format. Random lipo-glycocopolymer mixed with L-alpha-phosphatidylcholine (egg PC) was compressed at the air-water interface and transferred onto the photoresist-patterned substrate by the LB technique to achieve spatially directed deposition. Construction of planar bilayers in an aqueous environment was subsequently completed by vesicle fusion. Epifluorescence microscopy, fluorescence recovery after photobleaching (FRAP), and electrophoresis-relaxation were employed to examine the resulting patterns as well as to verify the two-dimensional mobility of the supported membrane systems. This approach could possibly provide a useful route to create functional arrays of polymer-supported lipid bilayers. Second, we report the formation of fluid planar biomembranes on hydrophilic silica aerogels and xerogels. When the aerogel/xerogel was pre-hydrated and then allowed to incubate in egg PC E-SUV solution, lipid bilayers were formed due to the favorable interaction of vesicles with the hydroxyl-abundant silica surface. FRAP was used to determine the lateral diffusivity of membranes on aerogels. Quartz crystal microbalance with dissipation monitoring (QCM-D) was used to monitor the kinetics of the irreversible adsorption and fusion of vesicles into bilayers on xerogel thin films. Finally, we compared the formation of PSLBs with and without incorporation of monosialoganglioside GM1 (GM1) as the antigen for in situ antibody binding. Quantifiable differences were observed in the
Modeling Protein Domain Function
ERIC Educational Resources Information Center
Baker, William P.; Jones, Carleton "Buck"; Hull, Elizabeth
2007-01-01
This simple but effective laboratory exercise helps students understand the concept of protein domain function. They use foam beads, Styrofoam craft balls, and pipe cleaners to explore how domains within protein active sites interact to form a functional protein. The activity allows students to gain content mastery and an understanding of the…
ERIC Educational Resources Information Center
Metzger, Jesse A.
2010-01-01
The aims of this research were to 1) examine the qualities for which applicants are selected for entrance into clinical psychology Ph.D. programs, and 2) investigate the prevalence and impact of the mentor-model approach to admissions on multiple domains of programs and the field at large. Fifty Directors of Clinical Training (DCTs) provided data…
Mixed Languages: A Functional-Communicative Approach.
ERIC Educational Resources Information Center
Matras, Yaron
2000-01-01
Argues that the compartmentalism of structures observed in mixed languages is the result of the cumulative effect of different contact mechanisms. These mechanisms are defined in terms of the cognitive and communicative motivations that lead speakers to model certain functions of language on an alternative linguistic system: lexical…
Pharmacological approaches to restore mitochondrial function
Andreux, Pénélope A.; Houtkooper, Riekelt H.; Auwerx, Johan
2014-01-01
Mitochondrial dysfunction is not only a hallmark of rare inherited mitochondrial disorders, but is also implicated in age-related diseases, including those that affect the metabolic and nervous system, such as type 2 diabetes and Parkinson’s disease. Numerous pathways maintain and/or restore proper mitochondrial function, including mitochondrial biogenesis, mitochondrial dynamics, mitophagy, and the mitochondrial unfolded protein response. New and powerful phenotypic assays in cell-based models, as well as multicellular organisms, have been developed to explore these different aspects of mitochondrial function. Modulating mitochondrial function has therefore emerged as an attractive therapeutic strategy for a range of diseases, which has spurred active drug discovery efforts in this area. PMID:23666487
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Estimating variability in functional images using a synthetic resampling approach
Maitra, R.; O`Sullivan, F.
1996-12-31
Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram. datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.
Computational modeling approaches in gonadotropin signaling.
Ayoub, Mohammed Akli; Yvinec, Romain; Crépieux, Pascale; Poupon, Anne
2016-07-01
Follicle-stimulating hormone and LH play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors. This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones. PMID:27165991
Synchronization-based approach for detecting functional activation of brain
NASA Astrophysics Data System (ADS)
Hong, Lei; Cai, Shi-Min; Zhang, Jie; Zhuo, Zhao; Fu, Zhong-Qian; Zhou, Pei-Ling
2012-09-01
In this paper, we investigate a synchronization-based, data-driven clustering approach for the analysis of functional magnetic resonance imaging (fMRI) data, and specifically for detecting functional activation from fMRI data. We first define a new measure of similarity between all pairs of data points (i.e., time series of voxels) integrating both complete phase synchronization and amplitude correlation. These pairwise similarities are taken as the coupling between a set of Kuramoto oscillators, which in turn evolve according to a nearest-neighbor rule. As the network evolves, similar data points naturally synchronize with each other, and distinct clusters will emerge. The clustering behavior of the interaction network of the coupled oscillators, therefore, mirrors the clustering property of the original multiple time series. The clustered regions whose cross-correlation coefficients are much greater than other regions are considered as the functionally activated brain regions. The analysis of fMRI data in auditory and visual areas shows that the recognized brain functional activations are in complete correspondence with those from the general linear model of statistical parametric mapping, but with a significantly lower time complexity. We further compare our results with those from traditional K-means approach, and find that our new clustering approach can distinguish between different response patterns more accurately and efficiently than the K-means approach, and therefore more suitable in detecting functional activation from event-related experimental fMRI data.
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modem design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be- a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
Nonaka, Junko
2012-01-01
The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5 ∼ 7.0) and salt concentrations (0.5 ∼ 2.0%) at static temperatures (10 ∼ 20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation. PMID:22729541
Koseki, Shige; Nonaka, Junko
2012-09-01
The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5 ∼ 7.0) and salt concentrations (0.5 ∼ 2.0%) at static temperatures (10 ∼ 20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation. PMID:22729541
Green functions of graphene: An analytic approach
NASA Astrophysics Data System (ADS)
Lawlor, James A.; Ferreira, Mauro S.
2015-04-01
In this article we derive the lattice Green Functions (GFs) of graphene using a Tight Binding Hamiltonian incorporating both first and second nearest neighbour hoppings and allowing for a non-orthogonal electron wavefunction overlap. It is shown how the resulting GFs can be simplified from a double to a single integral form to aid computation, and that when considering off-diagonal GFs in the high symmetry directions of the lattice this single integral can be approximated very accurately by an algebraic expression. By comparing our results to the conventional first nearest neighbour model commonly found in the literature, it is apparent that the extended model leads to a sizeable change in the electronic structure away from the linear regime. As such, this article serves as a blueprint for researchers who wish to examine quantities where these considerations are important.
Unified approach to partition functions of RNA secondary structures.
Bundschuh, Ralf
2014-11-01
RNA secondary structure formation is a field of considerable biological interest as well as a model system for understanding generic properties of heteropolymer folding. This system is particularly attractive because the partition function and thus all thermodynamic properties of RNA secondary structure ensembles can be calculated numerically in polynomial time for arbitrary sequences and homopolymer models admit analytical solutions. Such solutions for many different aspects of the combinatorics of RNA secondary structure formation share the property that the final solution depends on differences of statistical weights rather than on the weights alone. Here, we present a unified approach to a large class of problems in the field of RNA secondary structure formation. We prove a generic theorem for the calculation of RNA folding partition functions. Then, we show that this approach can be applied to the study of the molten-native transition, denaturation of RNA molecules, as well as to studies of the glass phase of random RNA sequences. PMID:24177391
Distribution function approach to redshift space distortions
Seljak, Uroš; McDonald, Patrick E-mail: pvmcdonald@lbl.gov
2011-11-01
We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ{sup 2} dependence on large scales is the cross-correlation between the density and scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ{sup 2} and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ{sup 4} dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter
HEDR modeling approach: Revision 1
Shipler, D.B.; Napier, B.A.
1994-05-01
This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.
Calculus of Functions and Their Inverses: A Unified Approach
ERIC Educational Resources Information Center
Krishnan, Srilal N.
2006-01-01
In this pedagogical article, I explore a unified approach in obtaining the derivatives of functions and their inverses by adopting a guided self-discovery approach. I begin by finding the derivative of the exponential functions and the derivative of their inverses, the logarithmic functions. I extend this approach to generate formulae for the…
The NJL Model for Quark Fragmentation Functions
T. Ito, W. Bentz, I. Cloet, A W Thomas, K. Yazaki
2009-10-01
A description of fragmentation functions which satisfy the momentum and isospin sum rules is presented in an effective quark theory. Concentrating on the pion fragmentation function, we first explain the reason why the elementary (lowest order) fragmentation process q → qπ is completely inadequate to describe the empirical data, although the “crossed” process π → qq describes the quark distribution functions in the pion reasonably well. Then, taking into account cascade-like processes in a modified jet-model approach, we show that the momentum and isospin sum rules can be satisfied naturally without introducing any ad-hoc parameters. We present numerical results for the Nambu-Jona-Lasinio model in the invariant mass regularization scheme, and compare the results with the empirical parametrizations. We argue that this NJL-jet model provides a very useful framework to calculate the fragmentation functions in an effective chiral quark theory.
Piehler, Timothy F; Bloomquist, Michael L; August, Gerald J; Gewirtz, Abigail H; Lee, Susanne S; Lee, Wendy S C
2014-01-01
A culturally diverse sample of formerly homeless youth (ages 6-12) and their families (n = 223) participated in a cluster randomized controlled trial of the Early Risers conduct problems prevention program in a supportive housing setting. Parents provided 4 annual behaviorally-based ratings of executive functioning (EF) and conduct problems, including at baseline, over 2 years of intervention programming, and at a 1-year follow-up assessment. Using intent-to-treat analyses, a multilevel latent growth model revealed that the intervention group demonstrated reduced growth in conduct problems over the 4 assessment points. In order to examine mediation, a multilevel parallel process latent growth model was used to simultaneously model growth in EF and growth in conduct problems along with intervention status as a covariate. A significant mediational process emerged, with participation in the intervention promoting growth in EF, which predicted negative growth in conduct problems. The model was consistent with changes in EF fully mediating intervention-related changes in youth conduct problems over the course of the study. These findings highlight the critical role that EF plays in behavioral change and lends further support to its importance as a target in preventive interventions with populations at risk for conduct problems. PMID:24141709
Piehler, Timothy F.; Bloomquist, Michael L.; August, Gerald J.; Gewirtz, Abigail H.; Lee, Susanne S.; Lee, Wendy S. C.
2013-01-01
A culturally diverse sample of formerly homeless youth (ages 6 – 12) and their families (n=223) participated in a cluster randomized controlled trial of the Early Risers conduct problems prevention program in a supportive housing setting. Parents provided 4 annual behaviorally-based ratings of executive functioning (EF) and conduct problems, including at baseline, over 2 years of intervention programming, and at a 1-year follow-up assessment. Using intent-to-treat analyses, a multilevel latent growth model revealed that the intervention group demonstrated reduced growth in conduct problems over the 4 assessment points. In order to examine mediation, a multilevel parallel process latent growth model was used to simultaneously model growth in EF and growth in conduct problems along with intervention status as a covariate. A significant mediational process emerged, with participation in the intervention promoting growth in EF, which predicted negative growth in conduct problems. The model was consistent with changes in EF fully mediating intervention-related changes in youth conduct problems over the course of the study. These findings highlight the critical role that EF plays in behavioral change and lends further support to its importance as a target in preventive interventions with populations at risk for conduct problems. PMID:24141709
The Linearized Kinetic Equation -- A Functional Analytic Approach
NASA Astrophysics Data System (ADS)
Brinkmann, Ralf Peter
2009-10-01
Kinetic models of plasma phenomena are difficult to address for two reasons. They i) are given as systems of nonlinear coupled integro-differential equations, and ii) involve generally six-dimensional distribution functions f(r,v,t). In situations which can be addressed in a linear regime, the first difficulty disappears, but the second one still poses considerable practical problems. This contribution presents an abstract approach to linearized kinetic theory which employs the methods of functional analysis. A kinetic electron equation with elastic electron-neutral interaction is studied in the electrostatic approximation. Under certain boundary conditions, a nonlinear functional, the kinetic free energy, exists which has the properties of a Lyapunov functional. In the linear regime, the functional becomes a quadratic form which motivates the definition of a bilinear scalar product, turning the space of all distribution functions into a Hilbert space. The linearized kinetic equation can then be described in terms of dynamical operators with well-defined properties. Abstract solutions can be constructed which have mathematically plausible properties. As an example, the formalism is applied to the example of the multipole resonance probe (MRP). Under the assumption of a Maxwellian background distribution, the kinetic model of that diagnostics device is compared to a previously investigated fluid model.
Modeling Approaches in Planetary Seismology
NASA Technical Reports Server (NTRS)
Weber, Renee; Knapmeyer, Martin; Panning, Mark; Schmerr, Nick
2014-01-01
Of the many geophysical means that can be used to probe a planet's interior, seismology remains the most direct. Given that the seismic data gathered on the Moon over 40 years ago revolutionized our understanding of the Moon and are still being used today to produce new insight into the state of the lunar interior, it is no wonder that many future missions, both real and conceptual, plan to take seismometers to other planets. To best facilitate the return of high-quality data from these instruments, as well as to further our understanding of the dynamic processes that modify a planet's interior, various modeling approaches are used to quantify parameters such as the amount and distribution of seismicity, tidal deformation, and seismic structure on and of the terrestrial planets. In addition, recent advances in wavefield modeling have permitted a renewed look at seismic energy transmission and the effects of attenuation and scattering, as well as the presence and effect of a core, on recorded seismograms. In this chapter, we will review these approaches.
Nonperturbative approach to the attractive Hubbard model
Allen, S.; Tremblay, A.-M. S.
2001-08-15
A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle and a number of crucial sumrules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one).
NASA Astrophysics Data System (ADS)
Choubey, Sanjay K.; Mariadasse, Richard; Rajendran, Santhosh; Jeyaraman, Jeyakanthan
2016-12-01
Overexpression of HDAC1, a member of Class I histone deacetylase is reported to be implicated in breast cancer. Epigenetic alteration in carcinogenesis has been the thrust of research for few decades. Increased deacetylation leads to accelerated cell proliferation, cell migration, angiogenesis and invasion. HDAC1 is pronounced as the potential drug target towards the treatment of breast cancer. In this study, the biochemical potential of 6-aminonicotinamide derivatives was rationalized. Five point pharmacophore model with one hydrogen-bond acceptor (A3), two hydrogen-bond donors (D5, D6), one ring (R12) and one hydrophobic group (H8) was developed using 6-aminonicotinamide derivatives. The pharmacophore hypothesis yielded a 3D-QSAR model with correlation-coefficient (r2 = 0.977, q2 = 0.801) and it was externally validated with (r2pred = 0.929, r2cv = 0.850 and r2m = 0.856) which reveals the statistical significance of the model having high predictive power. The model was then employed as 3D search query for virtual screening against compound libraries (Zinc, Maybridge, Enamine, Asinex, Toslab, LifeChem and Specs) in order to identify novel scaffolds which can be experimentally validated to design future drug molecule. Density Functional Theory (DFT) at B3LYP/6-31G* level was employed to explore the electronic features of the ligands involved in charge transfer reaction during receptor ligand interaction. Binding free energy (ΔGbind) calculation was done using MM/GBSA which defines the affinity of ligands towards the receptor.
Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R; Liu, Yong; Yang, Yong; Eickhoff, Simon B; Jiang, Tianzi
2015-08-15
Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a 'core' co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies. PMID:26037052
Modelling approaches for evaluating multiscale tendon mechanics.
Fang, Fei; Lake, Spencer P
2016-02-01
Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure-function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747
Systematic approach for modeling tetrachloroethene biodegradation
Bagley, D.M.
1998-11-01
The anaerobic biodegradation of tetrachloroethene (PCE) is a reasonably well understood process. Specific organisms capable of using PCE as an electron acceptor for growth require the addition of an electron donor to remove PCE from contaminated ground waters. However, competition from other anaerobic microorganisms for added electron donor will influence the rate and completeness of PCE degradation. The approach developed here allows for the explicit modeling of PCE and byproduct biodegradation as a function of electron donor and byproduct concentrations, and the microbiological ecology of the system. The approach is general and can be easily modified for ready use with in situ ground-water models or ex situ reactor models. Simulations conducted with models developed from this approach show the sensitivity of PCE biodegradation to input parameter values, in particular initial biomass concentrations. Additionally, the dechlorination rate will be strongly influenced by the microbial ecology of the system. Finally, comparison with experimental acclimation results indicates that existing kinetic constants may not be generally applicable. Better techniques for measuring the biomass of specific organisms groups in mixed systems are required.
Component Modeling Approach Software Tool
Energy Science and Technology Software Center (ESTSC)
2010-08-23
The Component Modeling Approach Software Tool (CMAST) establishes a set of performance libraries of approved components (frames, glass, and spacer) which can be accessed for configuring fenestration products for a project, and btaining a U-factor, Solar Heat Gain Coefficient (SHGC), and Visible Transmittance (VT) rating for those products, which can then be reflected in a CMA Label Certificate for code compliance. CMAST is web-based as well as client-based. The completed CMA program and software toolmore » will be useful in several ways for a vast array of stakeholders in the industry: Generating performance ratings for bidding projects Ascertaining credible and accurate performance data Obtaining third party certification of overall product performance for code compliance« less
Introducing Linear Functions: An Alternative Statistical Approach
ERIC Educational Resources Information Center
Nolan, Caroline; Herbert, Sandra
2015-01-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…
Systems approaches to microbial communities and their functioning.
Röling, Wilfred F M; Ferrer, Manuel; Golyshin, Peter N
2010-08-01
Recent advances in molecular microbial ecology and systems biology enhance insight into microbial community structure and functioning. They provide conceptual and technical bases for the translation of species-data and community-data into a model framework accounting for the functioning of and interactions between metabolic networks of species in multispecies environments. Function-directed and single cell-directed approaches supplement and improve metagenomics-derived community information. The topology of the metabolic network, reconstructed from a species' genome sequence, provides insight into its metabolic environments and interactions with other microorganisms. Progress in the theoretical and experimental analysis of flux through metabolic networks paves the way for their application at the community level, contributing to understanding of material flows between and within species and their resilience toward perturbations. PMID:20637597
Mixture models for distance sampling detection functions.
Miller, David L; Thomas, Len
2015-01-01
We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used "key function plus series adjustment" (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set. PMID:25793744
dos Santos, Sandra C.; Teixeira, Miguel C.; Dias, Paulo J.; Sá-Correia, Isabel
2014-01-01
Multidrug/Multixenobiotic resistance (MDR/MXR) is a widespread phenomenon with clinical, agricultural and biotechnological implications, where MDR/MXR transporters that are presumably able to catalyze the efflux of multiple cytotoxic compounds play a key role in the acquisition of resistance. However, although these proteins have been traditionally considered drug exporters, the physiological function of MDR/MXR transporters and the exact mechanism of their involvement in resistance to cytotoxic compounds are still open to debate. In fact, the wide range of structurally and functionally unrelated substrates that these transporters are presumably able to export has puzzled researchers for years. The discussion has now shifted toward the possibility of at least some MDR/MXR transporters exerting their effect as the result of a natural physiological role in the cell, rather than through the direct export of cytotoxic compounds, while the hypothesis that MDR/MXR transporters may have evolved in nature for other purposes than conferring chemoprotection has been gaining momentum in recent years. This review focuses on the drug transporters of the Major Facilitator Superfamily (MFS; drug:H+ antiporters) in the model yeast Saccharomyces cerevisiae. New insights into the natural roles of these transporters are described and discussed, focusing on the knowledge obtained or suggested by post-genomic research. The new information reviewed here provides clues into the unexpectedly complex roles of these transporters, including a proposed indirect regulation of the stress response machinery and control of membrane potential and/or internal pH, with a special emphasis on a genome-wide view of the regulation and evolution of MDR/MXR-MFS transporters. PMID:24847282
Augmented approach to desirability function based on MM estimator
NASA Astrophysics Data System (ADS)
Midi, Habshah; Mustafa, Mohd Shafie; Fitrianto, Anuar
2013-04-01
The desirability function approach is commonly used in industry to tackle multiple response optimization problems. The shortcoming of this approach is that the variability in each predicted response is ignored. It is now evident that the actual response may fall outside the acceptable region even though the predicted response at the optimal solution has a high overall desirability score. An augmented approach to the desirability function (AADF) is put forward to rectify this problem. Nevertheless the AADF is easily affected by outliers since the AADF is constructed based on the Ordinary Least Squares (OLS) estimate which is not resistant to outliers. As an alternative, we propose a robust MM-estimator to estimate the parameters of the Response Surface Model (RSM) and incorporated the estimated parameters in the augmented approach framework. A numerical example is presented to assess the performance of the AADF-MM based method. The numerical results signify that the AADF-MM based is more efficient than the AADF-OLS based method.
Chemogenetic approach to model hypofrontality.
Peña, Ike Dela; Shi, Wei-Xing
2016-08-01
Clinical evidence suggests that the prefrontal cortex (PFC) is hypofunctional in disorders including schizophrenia, drug addiction, and attention-deficit/hyperactivity disorder (ADHD). In schizophrenia, hypofrontality has been further suggested to cause both the negative and cognitive symptoms, and overactivity of dopamine neurons that project to subcortical areas. The latter may contribute to the development of positive symptoms of the disorder. Nevertheless, what causes hypofrontality and how it alters dopamine transmission in subcortical structures remain unclear due, in part, to the difficulty in modeling hypofrontality using previous techniques (e.g. PFC lesioning, focal cooling, repeated treatment with psychotomimetic drugs). We propose that the use of designer receptors exclusively activated by designer drugs (DREADDs) chemogenetic technique will allow precise interrogations of PFC functions. Combined with electrophysiological recordings, we can investigate the effects of PFC hypofunction on activity of dopamine neurons. Importantly, from a drug target discovery perspective, the use of DREADDs will enable us to examine whether chemogenetically enhancing PFC activity will reverse the behavioral abnormalities associated with PFC hypofunction and dopamine neuron overactivity, and also explore druggable targets for the treatment of schizophrenia and other disorders associated with abnormalities via modulation of the G-protein coupled receptor signaling pathway. In conclusion, the use of the DREADDs technique has several advantages over other previously employed strategies to simulate PFC hypofunction not only in terms of disease modeling but also from the viewpoint of drug target discovery. PMID:27372868
Functional Error Models to Accelerate Nested Sampling
NASA Astrophysics Data System (ADS)
Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.
2014-12-01
The main challenge in groundwater problems is the reliance on large numbers of unknown parameters with wide rage of associated uncertainties. To translate this uncertainty to quantities of interest (for instance the concentration of pollutant in a drinking well), a large number of forward flow simulations is required. To make the problem computationally tractable, Josset et al. (2013, 2014) introduced the concept of functional error models. It consists in two elements: a proxy model that is cheaper to evaluate than the full physics flow solver and an error model to account for the missing physics. The coupling of the proxy model and the error models provides reliable predictions that approximate the full physics model's responses. The error model is tailored to the problem at hand by building it for the question of interest. It follows a typical approach in machine learning where both the full physics and proxy models are evaluated for a training set (subset of realizations) and the set of responses is used to construct the error model using functional data analysis. Once the error model is devised, a prediction of the full physics response for a new geostatistical realization can be obtained by computing the proxy response and applying the error model. We propose the use of functional error models in a Bayesian inference context by combining it to the Nested Sampling (Skilling 2006; El Sheikh et al. 2013, 2014). Nested Sampling offers a mean to compute the Bayesian Evidence by transforming the multidimensional integral into a 1D integral. The algorithm is simple: starting with an active set of samples, at each iteration, the sample with the lowest likelihood is kept aside and replaced by a sample of higher likelihood. The main challenge is to find this sample of higher likelihood. We suggest a new approach: first the active set is sampled, both proxy and full physics models are run and the functional error model is build. Then, at each iteration of the Nested
A Functional Analytic Approach to Group Psychotherapy
ERIC Educational Resources Information Center
Vandenberghe, Luc
2009-01-01
This article provides a particular view on the use of Functional Analytical Psychotherapy (FAP) in a group therapy format. This view is based on the author's experiences as a supervisor of Functional Analytical Psychotherapy Groups, including groups for women with depression and groups for chronic pain patients. The contexts in which this approach…
Work Functions for Models of Scandate Surfaces
NASA Technical Reports Server (NTRS)
Mueller, Wolfgang
1997-01-01
The electronic structure, surface dipole properties, and work functions of scandate surfaces have been investigated using the fully relativistic scattered-wave cluster approach. Three different types of model surfaces are considered: (1) a monolayer of Ba-Sc-O on W(100), (2) Ba or BaO adsorbed on Sc2O3 + W, and (3) BaO on SC2O3 + WO3. Changes in the work function due to Ba or BaO adsorption on the different surfaces are calculated by employing the depolarization model of interacting surface dipoles. The largest work function change and the lowest work function of 1.54 eV are obtained for Ba adsorbed on the Sc-O monolayer on W(100). The adsorption of Ba on Sc2O3 + W does not lead to a low work function, but the adsorption of BaO results in a work function of about 1.6-1.9 eV. BaO adsorbed on Sc2O3 + WO3, or scandium tungstates, may also lead to low work functions.
Career Exploration Program: A Composite Systematic Functional Objective Model.
ERIC Educational Resources Information Center
Mohamed, Othman
The composite systematic functional objective career exploration program model integrates various career development theoretical approaches. These approaches emphasize self-concept, life values, personality, the environment, and academic achievement and training as separate functions in explaining career development. Current social development in…
Translation: Towards a Critical-Functional Approach
ERIC Educational Resources Information Center
Sadeghi, Sima; Ketabi, Saeed
2010-01-01
The controversy over the place of translation in the teaching of English as a Foreign Language (EFL) is a thriving field of inquiry. Many older language teaching methodologies such as the Direct Method, the Audio-lingual Method, and Natural and Communicative Approaches, tended to either neglect the role of translation, or prohibit it entirely as a…
Functional Approaches to Written Text: Classroom Applications.
ERIC Educational Resources Information Center
Miller, Tom, Ed.
Noting that little in language can be understood without taking into consideration the wider picture of communicative purpose, content, context, and audience, this book address practical uses of various approaches to discourse analysis. Several assumptions run through the chapters: knowledge is socially constructed; the manner in which language…
Linearized Functional Minimization for Inverse Modeling
Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco
2012-06-21
Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the vectors of values of hydraulic conductivity and hydraulic head fields defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
Transfer function modeling of damping mechanisms in distributed parameter models
NASA Technical Reports Server (NTRS)
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon
2016-07-27
Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to systematically design stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, types and distribution of functional groups by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the types, number, and distribution of oxygen functional groups. First, the carbonyl group induces higher redox potential than the hydroxyl group. Second, more carbonyl groups would result in higher redox potential. Lastly, the locally concentrated distribution of the carbonyl group is more beneficial to have higher redox potential compared to the uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials. PMID:27412373
Statistical approaches and software for clustering islet cell functional heterogeneity
Wills, Quin F.; Boothe, Tobias; Asadi, Ali; Ao, Ziliang; Warnock, Garth L.; Kieffer, Timothy J.
2016-01-01
ABSTRACT Worldwide efforts are underway to replace or repair lost or dysfunctional pancreatic β-cells to cure diabetes. However, it is unclear what the final product of these efforts should be, as β-cells are thought to be heterogeneous. To enable the analysis of β-cell heterogeneity in an unbiased and quantitative way, we developed model-free and model-based statistical clustering approaches, and created new software called TraceCluster. Using an example data set, we illustrate the utility of these approaches by clustering dynamic intracellular Ca2+ responses to high glucose in ∼300 simultaneously imaged single islet cells. Using feature extraction from the Ca2+ traces on this reference data set, we identified 2 distinct populations of cells with β-like responses to glucose. To the best of our knowledge, this report represents the first unbiased cluster-based analysis of human β-cell functional heterogeneity of simultaneous recordings. We hope that the approaches and tools described here will be helpful for those studying heterogeneity in primary islet cells, as well as excitable cells derived from embryonic stem cells or induced pluripotent cells. PMID:26909740
Evaluating face trustworthiness: a model based approach
Baron, Sean G.; Oosterhof, Nikolaas N.
2008-01-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response—as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic—strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102
Recent molecular approaches to understanding astrocyte function in vivo
Davila, David; Thibault, Karine; Fiacco, Todd A.; Agulhon, Cendra
2013-01-01
Astrocytes are a predominant glial cell type in the nervous systems, and are becoming recognized as important mediators of normal brain function as well as neurodevelopmental, neurological, and neurodegenerative brain diseases. Although numerous potential mechanisms have been proposed to explain the role of astrocytes in the normal and diseased brain, research into the physiological relevance of these mechanisms in vivo is just beginning. In this review, we will summarize recent developments in innovative and powerful molecular approaches, including knockout mouse models, transgenic mouse models, and astrocyte-targeted gene transfer/expression, which have led to advances in understanding astrocyte biology in vivo that were heretofore inaccessible to experimentation. We will examine the recently improved understanding of the roles of astrocytes – with an emphasis on astrocyte signaling – in the context of both the healthy and diseased brain, discuss areas where the role of astrocytes remains debated, and suggest new research directions. PMID:24399932
Quantum thermodynamics: a nonequilibrium Green's function approach.
Esposito, Massimiliano; Ochoa, Maicol A; Galperin, Michael
2015-02-27
We establish the foundations of a nonequilibrium theory of quantum thermodynamics for noninteracting open quantum systems strongly coupled to their reservoirs within the framework of the nonequilibrium Green's functions. The energy of the system and its coupling to the reservoirs are controlled by a slow external time-dependent force treated to first order beyond the quasistatic limit. We derive the four basic laws of thermodynamics and characterize reversible transformations. Stochastic thermodynamics is recovered in the weak coupling limit. PMID:25768745
ONION: Functional Approach for Integration of Lipidomics and Transcriptomics Data
Piwowar, Monika; Jurkowski, Wiktor
2015-01-01
To date, the massive quantity of data generated by high-throughput techniques has not yet met bioinformatics treatment required to make full use of it. This is partially due to a mismatch in experimental and analytical study design but primarily due to a lack of adequate analytical approaches. When integrating multiple data types e.g. transcriptomics and metabolomics, multidimensional statistical methods are currently the techniques of choice. Typical statistical approaches, such as canonical correlation analysis (CCA), that are applied to find associations between metabolites and genes are failing due to small numbers of observations (e.g. conditions, diet etc.) in comparison to data size (number of genes, metabolites). Modifications designed to cope with this issue are not ideal due to the need to add simulated data resulting in a lack of p-value computation or by pruning of variables hence losing potentially valid information. Instead, our approach makes use of verified or putative molecular interactions or functional association to guide analysis. The workflow includes dividing of data sets to reach the expected data structure, statistical analysis within groups and interpretation of results. By applying pathway and network analysis, data obtained by various platforms are grouped with moderate stringency to avoid functional bias. As a consequence CCA and other multivariate models can be applied to calculate robust statistics and provide easy to interpret associations between metabolites and genes to leverage understanding of metabolic response. Effective integration of lipidomics and transcriptomics is demonstrated on publically available murine nutrigenomics data sets. We are able to demonstrate that our approach improves detection of genes related to lipid metabolism, in comparison to applying statistics alone. This is measured by increased percentage of explained variance (95% vs. 75–80%) and by identifying new metabolite-gene associations related to lipid
Functional genomics approach to hypoxia signaling.
Seta, Karen A; Millhorn, David E
2004-02-01
Mammalian cells require a constant supply of oxygen to maintain energy balance, and sustained hypoxia can result in cell death. It is therefore not surprising that sophisticated adaptive mechanisms have evolved that enhance cell survival during hypoxia. During the past few years, there have been a growing number of reports on hypoxia-induced transcription of specific genes. In this review, we describe a unique experimental approach that utilizes focused cDNA libraries coupled to microarray analyses to identify hypoxia-responsive signal transduction pathways and genes that confer the hypoxia-tolerant phenotype. We have used the subtractive suppression hybridization (SSH) method to create a cDNA library enriched in hypoxia-regulated genes in oxygen-sensing pheochromocytoma cells and have used this library to create microarrays that allow us to examine hundreds of genes at a time. This library contains over 300 genes and expressed sequence tags upregulated by hypoxia, including tyrosine hydroxylase, vascular endothelial growth factor, and junB. Hypoxic regulation of these and other genes in the library has been confirmed by microarray, Northern blot, and real-time PCR analyses. Coupling focused SSH libraries with microarray analyses allows one to specifically study genes relevant to a phenotype of interest while reducing much of the biological noise associated with these types of studies. When used in conjunction with high-throughput, dye-based assays for cell survival and apoptosis, this approach offers a rapid method for discovering validated therapeutic targets for the treatment of cardiovascular disease, stroke, and tumors. PMID:14715686
A moving approach for the Vector Hysteron Model
NASA Astrophysics Data System (ADS)
Cardelli, E.; Faba, A.; Laudani, A.; Quondam Antonio, S.; Riganti Fulginei, F.; Salvini, A.
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is here described, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as the non oriented grain Silicon steel. The hysterons distribution is postulated to be function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching to the saturation region, allowing a real improvement respect to the previous approach.
Functional models of power electronic components for system studies
NASA Technical Reports Server (NTRS)
Tam, Kwa-Sur; Yang, Lifeng; Dravid, Narayan
1991-01-01
A novel approach to model power electronic circuits has been developed to facilitate simulation studies of system-level issues. The underlying concept for this approach is to develop an equivalent circuit, the functional model, that performs the same functions as the actual circuit but whose operation can be simulated by using larger time step size and the reduction in model complexity, the computation time required by a functional model is significantly shorter than that required by alternative approaches. The authors present this novel modeling approach and discuss the functional models of two major power electronic components, the DC/DC converter unit and the load converter, that are being considered by NASA for use in the Space Station Freedom electric power system. The validity of these models is established by comparing the simulation results with available experimental data and other simulation results obtained by using a more established modeling approach. The usefulness of this approach is demonstrated by incorporating these models into a power system model and simulating the system responses and interactions between components under various conditions.
ERIC Educational Resources Information Center
And Others; deLannoy, Peter
1996-01-01
Describes an integrated approach to teaching a biochemistry laboratory focusing on the relationship between the three-dimensional structure of a macromolecule and its function. RNA is chosen as the model system. Discusses curriculum and student assessment. (AIM)
Interactively Open Autonomy Unifies Two Approaches to Function
NASA Astrophysics Data System (ADS)
Collier, John
2004-08-01
Functionality is essential to any form of anticipation beyond simple directedness at an end. In the literature on function in biology, there are two distinct approaches. One, the etiological view, places the origin of function in selection, while the other, the organizational view, individuates function by organizational role. Both approaches have well-known advantages and disadvantages. I propose a reconciliation of the two approaches, based in an interactivist approach to the individuation and stability of organisms. The approach was suggested by Kant in the Critique of Judgment, but since it requires, on his account, the identification a new form of causation, it has not been accessible by analytical techniques. I proceed by construction of the required concept to fit certain design requirements. This construction builds on concepts introduced in my previous four talks to these meetings.
Sturmian function approach and {bar N}N bound states
Yan, Y.; Tegen, R.; Gutsche, T.; Faessler, A.
1997-09-01
A suitable numerical approach based on Sturmian functions is employed to solve the {bar N}N bound state problem for local and nonlocal potentials. The approach accounts for both the strong short-range nuclear potential and the long-range Coulomb force and provides directly the wave function of protonium and {bar N}N deep bound states with complex eigenvalues E=E{sub R}{minus}i({Gamma}/2). The spectrum of {bar N}N bound states has two parts, the atomic states bound by several keV, and the deep bound states which are bound by several hundred MeV. The observed very small hyperfine splitting of the 1s level and the 1s and 2p decay widths are reasonably well reproduced by both the Paris and Bonn potentials (supplemented with a microscopically derived quark annihilation potential), although there are differences in magnitude and level ordering. We present further arguments for the identification of the {sup 13}PF{sub 2} deep bound state with the exotic tensor meson f{sub 2}(1520). Both investigated models can accommodate the f{sub 2}(1520) but differ greatly in the total number of levels and in their ordering. The model based on the Paris potential predicts the {sup 13}P{sub 0} level slightly below 1.1 GeV while the model based on the Bonn potential puts this state below 0.8 GeV. It remains to be seen if this state can be identified with a scalar partner of the f{sub 2}(1520). {copyright} {ital 1997} {ital The American Physical Society}
A Semantic Modeling Approach to Metadata.
ERIC Educational Resources Information Center
Brasethvik, Terje
1998-01-01
Explores problems in information sharing; discusses the concept of metadata; illustrates its use on the World Wide Web, as well as other related approaches; and presents an approach to information sharing that uses a semantic modeling language (referent model language) as the basis for expressing semantics of information and designing metadata…
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
A new approach to turbulence modeling
NASA Technical Reports Server (NTRS)
Perot, B.; Moin, P.
1996-01-01
A new approach to Reynolds averaged turbulence modeling is proposed which has a computational cost comparable to two equation models but a predictive capability approaching that of Reynolds stress transport models. This approach isolates the crucial information contained within the Reynolds stress tensor, and solves transport equations only for a set of 'reduced' variables. In this work, Direct Numerical Simulation (DNS) data is used to analyze the nature of these newly proposed turbulence quantities and the source terms which appear in their respective transport equations. The physical relevance of these quantities is discussed and some initial modeling results for turbulent channel flow are presented.
A Unified Approach to Modeling Multidisciplinary Interactions
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Bhatia, Kumar G.
2000-01-01
There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n(exp 2) - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n(exp 2) - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
Defining and Applying a Functionality Approach to Intellectual Disability
ERIC Educational Resources Information Center
Luckasson, R.; Schalock, R. L.
2013-01-01
Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…
Questionnaire of executive function for dancers: an ecological approach.
Wong, Alina; Rodríguez, Mabel; Quevedo, Liliana; Fernández de Cossío, Lourdes; Borges, Ariel; Reyes, Alicia; Corral, Roberto; Blanco, Florentino; Alvarez, Miguel
2012-09-01
There is a current debate about the ecological validity of executive function (EF) tests. Consistent with the verisimilitude approach, this research proposes the ballet executive scale (BES), a self-rating questionnaire that assimilates idiosyncratic executive behaviors of classical dance community. The BES was administrated to 149 adolescents, students of the Cuban Ballet School. Results present a Cronbach's alpha coefficient of .80 and a split-half Spearman-Brown coefficient r (SB) = .81. An exploratory factor analysis describes a bifactorial pattern of EF dimensions, with a self-regulation component, which explains more than 40% of variance, and a Developmental component, which accounts for more than 20% of variance. The questionnaire's total scores fit linear regression models with two external criteria of academic records, confirming concurrent validity. These findings support the hypothesis that the internalization of specific contextual cultural meanings has a mediating influence in the development of EF. PMID:21266371
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Cooper, D. Craig; Palmer, Carl D.; Smith, Robert W.; McLing, Travis L.
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlation between water temperatures and composition or using thermodynamic calculations based a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.
A general approach to association using cluster partition functions
NASA Astrophysics Data System (ADS)
Hendriks, E. M.; Walsh, J.; van Bergen, A. R. D.
1997-06-01
A systematic and fundamental approach to associating mixtures is presented. It is shown how the thermodynamic functions may be computed starting from a partition function based on the cluster concept such as occurs in chemical theory. The theory provides a basis for and an extension of the existing chemical theory of (continuous) association. It is applicable to arbitrary association schemes. Analysis of separate cases is not necessary. The assumptions that were made to allow the development were chosen such as to make the principle of reactivity valid. It is this same principle that links various theories: the chemical theory of continuous association, the lattice fluid hydrogen bonding model, and first-order perturbation theory. The equivalence between these theories in appropriate limits is shown in a general and rigorous way. The theory is believed to provide a practical framework for engineering modeling work. Binary interaction parameters can be incorporated. The association scheme is accounted for by a set of generic equations, which should facilitate robust implementation in computer programs.
Nonrelativistic approaches derived from point-coupling relativistic models
Lourenco, O.; Dutra, M.; Delfino, A.; Sa Martins, J. S.
2010-03-15
We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A=B=0 (Walecka model) leads to the same energy density functional of the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order {rho}{sup 3}, leads to parametrizations for which {sigma}=1.
Modelling functional effects of muscle geometry.
van der Linden, B J; Koopman, H F; Grootenboer, H J; Huijing, P A
1998-04-01
Muscle architecture is an important aspect of muscle functioning. Hence, geometry and material properties of muscle have great influence on the force-length characteristics of muscle. We compared experimental results for the gastrocnemius medialis muscle (GM) of the rat to model results of simple geometric models such as a planimetric model and three-dimensional versions of this model. The capabilities of such models to adequately calculate muscle geometry and force-length characteristics were investigated. The planimetric model with elastic aponeurosis predicted GM muscle geometry well: maximal differences are 6, 1, 4 and 6% for fiber length, aponeurosis length, fiber angle and aponeurosis angle respectively. A slanted cylinder model with circular fiber cross-section did not predict muscle geometry as well as the planimetric model, whereas the geometry results of a second slanted cylinder model were identical to the planimetric model. It is concluded that the planimetric model is capable of adequately calculating the muscle geometry over the muscle length range studied. However, for modelling of force-length characteristics more complex models are needed, as none of the models yielded results sufficiently close to experimental data. Modelled force-length characteristics showed an overestimation of muscle optimum length by 2 mm with respect to experimental data, and the force at the ascending limb of the length force curve was underestimated. The models presented neglect important aspects such as non-linear geometry of muscle, certain passive material properties and mechanical interactions of fibers. These aspects may be responsible for short-comings in the modelling. It is argued that, considering the inability to adequately model muscle length-force characteristics for an isolated maximally activated (in situ) muscle, it is to be expected that prediction will fail for muscle properties in conditions of complex movement with many interacting factors. Therefore
Inverse Modeling Via Linearized Functional Minimization
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Wohlberg, B.; Vesselinov, V. V.; Tartakovsky, D. M.
2014-12-01
We present a novel parameter estimation methodology for transient models of geophysical systems with uncertain, spatially distributed, heterogeneous and piece-wise continuous parameters.The methodology employs a bayesian approach to propose an inverse modeling problem for the spatial configuration of the model parameters.The likelihood of the configuration is formulated using sparse measurements of both model parameters and transient states.We propose using total variation regularization (TV) as the prior reflecting the heterogeneous, piece-wise continuity assumption on the parameter distribution.The maximum a posteriori (MAP) estimator of the parameter configuration is then computed by minimizing the negative bayesian log-posterior using a linearized functional minimization approach. The computation of the MAP estimator is a large-dimensional nonlinear minimization problem with two sources of nonlinearity: (1) the TV operator, and (2) the nonlinear relation between states and parameters provided by the model's governing equations.We propose a a hybrid linearized functional minimization (LFM) algorithm in two stages to efficiently treat both sources of nonlinearity.The relation between states and parameters is linearized, resulting in a linear minimization sub-problem equipped with the TV operator; this sub-problem is then minimized using the Alternating Direction Method of Multipliers (ADMM). The methodology is illustrated with a transient saturated groundwater flow application in a synthetic domain, stimulated by external point-wise loadings representing aquifer pumping, together with an array of discrete measurements of hydraulic conductivity and transient measurements of hydraulic head.We show that our inversion strategy is able to recover the overall large-scale features of the parameter configuration, and that the reconstruction is improved by the addition of transient information of the state variable.
Matrix model approach to cosmology
NASA Astrophysics Data System (ADS)
Chaney, A.; Lu, Lei; Stern, A.
2016-03-01
We perform a systematic search for rotationally invariant cosmological solutions to toy matrix models. These models correspond to the bosonic sector of Lorentzian Ishibashi, Kawai, Kitazawa and Tsuchiya (IKKT)-type matrix models in dimensions d less than ten, specifically d =3 and d =5 . After taking a continuum (or commutative) limit they yield d -1 dimensional Poisson manifolds. The manifolds have a Lorentzian induced metric which can be associated with closed, open, or static space-times. For d =3 , we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a resolution of cosmological singularities, at least within the context of the toy matrix models. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the d =3 solutions have analogues in higher dimensions. The case of d =5 , in particular, has the potential for yielding realistic four-dimensional cosmologies in the continuum limit. We find four-dimensional de Sitter d S4 or anti-de Sitter AdS4 solutions when a totally antisymmetric term is included in the matrix action. A nontrivial Poisson structure is attached to these manifolds which represents the lowest order effect of noncommutativity. For the case of AdS4 , we find one particular limit where the lowest order noncommutativity vanishes at the boundary, but not in the interior.
Combining Formal and Functional Approaches to Topic Structure
ERIC Educational Resources Information Center
Zellers, Margaret; Post, Brechtje
2012-01-01
Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to…
Roth, Jason L.; Capel, Paul D.
2012-01-01
Crop agriculture occupies 13 percent of the conterminous United States. Agricultural management practices, such as crop and tillage types, affect the hydrologic flow paths through the landscape. Some agricultural practices, such as drainage and irrigation, create entirely new hydrologic flow paths upon the landscapes where they are implemented. These hydrologic changes can affect the magnitude and partitioning of water budgets and sediment erosion. Given the wide degree of variability amongst agricultural settings, changes in the magnitudes of hydrologic flow paths and sediment erosion induced by agricultural management practices commonly are difficult to characterize, quantify, and compare using only field observations. The Water Erosion Prediction Project (WEPP) model was used to simulate two landscape characteristics (slope and soil texture) and three agricultural management practices (land cover/crop type, tillage type, and selected agricultural land management practices) to evaluate their effects on the water budgets of and sediment yield from agricultural lands. An array of sixty-eight 60-year simulations were run, each representing a distinct natural or agricultural scenario with various slopes, soil textures, crop or land cover types, tillage types, and select agricultural management practices on an isolated 16.2-hectare field. Simulations were made to represent two common agricultural climate regimes: arid with sprinkler irrigation and humid. These climate regimes were constructed with actual climate and irrigation data. The results of these simulations demonstrate the magnitudes of potential changes in water budgets and sediment yields from lands as a result of landscape characteristics and agricultural practices adopted on them. These simulations showed that variations in landscape characteristics, such as slope and soil type, had appreciable effects on water budgets and sediment yields. As slopes increased, sediment yields increased in both the arid and
HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION
Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...
An Instructional Approach to Modeling in Microevolution.
ERIC Educational Resources Information Center
Thompson, Steven R.
1988-01-01
Describes an approach to teaching population genetics and evolution and some of the ways models can be used to enhance understanding of the processes being studied. Discusses the instructional plan, and the use of models including utility programs and analysis with models. Provided are a basic program and sample program outputs. (CW)
Functional renormalization group - a new approach to frustrated quantum magnetism
NASA Astrophysics Data System (ADS)
Reuther, Johannes
The experimental and theoretical investigation of quantum spin systems has become one of the central disciplines of contemporary condensed matter physics. From an experimental viewpoint, the field has been significantly fueled by the recent synthesis of novel strongly correlated materials with exotic magnetic or quantum paramagnetic ground states. From a theoretical perspective, however, the numerical treatment of realistic models for quantum magnetism in two and three spatial dimensions still constitutes a serious challenge. This particularly applies to frustrated systems, which complicate the employment of established methods. This talk intends to propagate the pseudofermion functional renormalization group (PFFRG) as a novel approach to determine large size ground state correlations of a wide class of spin Hamiltonians. Using a diagrammatic pseudofermion representation for quantum spin models, the PFFRG performs systematic summations in all two-particle fermionic interaction channels, capturing the correct balance between classical magnetic ordering and quantum fluctuations. Numerical results for various frustrated spin models on different 2D and 3D lattices are reviewed, and benchmarked against other methods if available.
Function Model for Community Health Service Information
NASA Astrophysics Data System (ADS)
Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong
In order to construct a function model of community health service (CHS) information for development of CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard which is extended from Structured Analysis and Design(SADT) and now is a widely used function modeling method, was used to classifying its information from top to bottom. The contents of every level of the model were described and coded. Then function model for CHS information, which includes 4 super-classes, 15 classes and 28 sub-classed of business function, 43 business processes and 168 business activities, was established. This model can facilitate information management system development and workflow refinement.
Social learning in Models and Cases - an Interdisciplinary Approach
NASA Astrophysics Data System (ADS)
Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger
2016-04-01
Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories on social learning in innovation, diffusion and transition research. We present theoretical understandings of social learning in techno-economic and agent-based modelling. Then we elaborate on empirical research on social learning in transition case studies. We identify and synthetize key dimensions of social learning in transition case studies. In the following we bridge between more formal and generalising modelling approaches towards social learning processes and more descriptive, individualising case study approaches by interpreting the case study analysis into a visual guide on functional forms of social learning typically identified in the cases. We then try to exemplarily vary functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach - methodologically and empirically.
Uniqueness of place: uniqueness of models. The FLEX modelling approach
NASA Astrophysics Data System (ADS)
Fenicia, F.; Savenije, H. H. G.; Wrede, S.; Schoups, G.; Pfister, L.
2009-04-01
The current practice in hydrological modelling is to make use of model structures that are fixed and a-priori defined. However, for a model to reflect uniqueness of place while maintaining parsimony, it is necessary to be flexible in its architecture. We have developed a new approach for the development and testing of hydrological models, named the FLEX approach. This approach allows the formulation of alternative model structures that vary in configuration and complexity, and uses an objective method for testing and comparing model performance. We have tested this approach on three headwater catchments in Luxembourg with marked differences in hydrological response, where we have generated 15 alternative model structures. Each of the three catchments is best represented by a different model architecture. Our results clearly show that uniqueness of place necessarily leads to uniqueness of models.
Challenges in structural approaches to cell modeling.
Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A
2016-07-31
Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. PMID:27255863
Improving Treatment Integrity through a Functional Approach to Intervention Support
ERIC Educational Resources Information Center
Liaupsin, Carl J.
2015-01-01
A functional approach to intervention planning has been shown to be effective in reducing problem behaviors and promoting appropriate behaviors in children and youth with behavior disorders. When function-based intervention plans are not successful, it is often due to issues of treatment integrity in which teachers omit or do not sufficiently…
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto, Jr.; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
NASA Astrophysics Data System (ADS)
Cecchet, F.; Lis, D.; Caudano, Y.; Mani, A. A.; Peremans, A.; Champagne, B.; Guthmuller, J.
2012-03-01
The knowledge of the first hyperpolarizability tensor elements of molecular groups is crucial for a quantitative interpretation of the sum frequency generation (SFG) activity of thin organic films at interfaces. Here, the SFG response of the terminal methyl group of a dodecanethiol (DDT) monolayer has been interpreted on the basis of calculations performed at the density functional theory (DFT) level of approximation. In particular, DFT calculations have been carried out on three classes of models for the aliphatic chains. The first class of models consists of aliphatic chains, containing from 3 to 12 carbon atoms, in which only one methyl group can freely vibrate, while the rest of the chain is frozen by a strong overweight of its C and H atoms. This enables us to localize the probed vibrational modes on the methyl group. In the second class, only one methyl group is frozen, while the entire remaining chain is allowed to vibrate. This enables us to analyse the influence of the aliphatic chain on the methyl stretching vibrations. Finally, the dodecanethiol (DDT) molecule is considered, for which the effects of two dielectrics, i.e. n-hexane and n-dodecane, are investigated. Moreover, DDT calculations are also carried out by using different exchange-correlation (XC) functionals in order to assess the DFT approximations. Using the DFT IR vectors and Raman tensors, the SFG spectrum of DDT has been simulated and the orientation of the methyl group has then been deduced and compared with that obtained using an analytical approach based on a bond additivity model. This analysis shows that when using DFT molecular properties, the predicted orientation of the terminal methyl group tends to converge as a function of the alkyl chain length and that the effects of the chain as well as of the dielectric environment are small. Instead, a more significant difference is observed when comparing the DFT-based results with those obtained from the analytical approach, thus indicating
A Hierarchical Systems Approach to Model Validation
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.
2011-12-01
Existing approaches to the question of how climate models should be evaluated tend to rely on either philosophical arguments about the status of models as scientific tools, or on empirical arguments about how well runs from a given model match observational data. These have led to quantitative approaches expressed in terms of model bias or forecast skill, and ensemble approaches where models are assessed according to the extent to which the ensemble brackets the observational data. Unfortunately, such approaches focus the evaluation on models per se (or more specifically, on the simulation runs they produce) as though the models can be isolated from their context. Such approach may overlook a number of important aspects of the use of climate models: - the process by which models are selected and configured for a given scientific question. - the process by which model outputs are selected, aggregated and interpreted by a community of expertise in climatology. - the software fidelity of the models (i.e. whether the running code is actually doing what the modellers think it's doing). - the (often convoluted) history that begat a given model, along with the modelling choices long embedded in the code. - variability in the scientific maturity of different model components within a coupled system. These omissions mean that quantitative approaches cannot assess whether a model produces the right results for the wrong reasons, or conversely, the wrong results for the right reasons (where, say the observational data is problematic, or the model is configured to be unlike the earth system for a specific reason). Hence, we argue that it is a mistake to think that validation is a post-hoc process to be applied to an individual "finished" model, to ensure it meets some criteria for fidelity to the real world. We are therefore developing a framework for model validation that extends current approaches down into the detailed codebase and the processes by which the code is built
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state trunction, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
Dynamic geometry, brain function modeling, and consciousness.
Roy, Sisir; Llinás, Rodolfo
2008-01-01
Pellionisz and Llinás proposed, years ago, a geometric interpretation towards understanding brain function. This interpretation assumes that the relation between the brain and the external world is determined by the ability of the central nervous system (CNS) to construct an internal model of the external world using an interactive geometrical relationship between sensory and motor expression. This approach opened new vistas not only in brain research but also in understanding the foundations of geometry itself. The approach named tensor network theory is sufficiently rich to allow specific computational modeling and addressed the issue of prediction, based on Taylor series expansion properties of the system, at the neuronal level, as a basic property of brain function. It was actually proposed that the evolutionary realm is the backbone for the development of an internal functional space that, while being purely representational, can interact successfully with the totally different world of the so-called "external reality". Now if the internal space or functional space is endowed with stochastic metric tensor properties, then there will be a dynamic correspondence between events in the external world and their specification in the internal space. We shall call this dynamic geometry since the minimal time resolution of the brain (10-15 ms), associated with 40 Hz oscillations of neurons and their network dynamics, is considered to be responsible for recognizing external events and generating the concept of simultaneity. The stochastic metric tensor in dynamic geometry can be written as five-dimensional space-time where the fifth dimension is a probability space as well as a metric space. This extra dimension is considered an imbedded degree of freedom. It is worth noticing that the above-mentioned 40 Hz oscillation is present both in awake and dream states where the central difference is the inability of phase resetting in the latter. This framework of dynamic
Shell Model in a First Principles Approach
Navratil, P; Nogga, A; Lloyd, R; Vary, J P; Ormand, W E; Barrett, B R
2004-01-08
We develop and apply an ab-initio approach to nuclear structure. Starting with the NN interaction, that fits two-body scattering and bound state data, and adding a theoretical NNN potential, we evaluate nuclear properties in a no-core approach. For presently feasible no-core model spaces, we evaluate an effective Hamiltonian in a cluster approach which is guaranteed to provide exact answers for sufficiently large model spaces and/or sufficiently large clusters. A number of recent applications are surveyed including an initial application to exotic multiquark systems.
Accurate definition of brain regions position through the functional landmark approach.
Thirion, Bertrand; Varoquaux, Gaël; Poline, Jean-Baptiste
2010-01-01
In many application of functional Magnetic Resonance Imaging (fMRI), including clinical or pharmacological studies, the definition of the location of the functional activity between subjects is crucial. While current acquisition and normalization procedures improve the accuracy of the functional signal localization, it is also important to ensure that functional foci detection yields accurate results, and reflects between-subject variability. Here we introduce a fast functional landmark detection procedure, that explicitly models the spatial variability of activation foci in the observed population. We compare this detection approach to standard statistical maps peak extraction procedures: we show that it yields more accurate results on simulations, and more reproducible results on a large cohort of subjects. These results demonstrate that explicit functional landmark modeling approaches are more effective than standard statistical mapping for brain functional focus detection. PMID:20879321
Selectionist and Evolutionary Approaches to Brain Function: A Critical Appraisal
Fernando, Chrisantha; Szathmáry, Eörs; Husbands, Phil
2012-01-01
We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. All of them fit, however, a generalized selectionist framework conforming to the picture of Price’s covariance formulation, which deliberately was not specific even to selection in biology, and therefore does not imply an algorithmic picture of biological evolution. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Examples are given of cases where parallel competitive search with information transfer among the units is more efficient than search without information transfer between units. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. Although none of the proposed neuronal replicators include miraculous mechanisms, their identification remains a challenge but also a great promise. PMID:22557963
Is protein classification necessary? Towards alternative approaches to function annotation
Petrey, Donald; Honig, Barry
2009-01-01
The current non-redundant protein sequence database contains over seven million entries and the number of individual functional domains is significantly larger than this value. The vast quantity of data associated with these proteins poses enormous challenges to any attempt at function annotation. Classification of proteins into sequence and structural groups has been widely used as an approach to simplifying the problem. In this article we question such strategies. We describe how the multi-functionality and structural diversity of even closely related proteins confounds efforts to assign function based on overall sequence or structural similarity. Rather, we suggest that strategies that avoid classification may offer a more robust approach to protein function annotation. PMID:19269161
Heterogeneous Factor Analysis Models: A Bayesian Approach.
ERIC Educational Resources Information Center
Ansari, Asim; Jedidi, Kamel; Dube, Laurette
2002-01-01
Developed Markov Chain Monte Carlo procedures to perform Bayesian inference, model checking, and model comparison in heterogeneous factor analysis. Tested the approach with synthetic data and data from a consumption emotion study involving 54 consumers. Results show that traditional psychometric methods cannot fully capture the heterogeneity in…
Facet Modelling: An Approach to Flexible and Integrated Conceptual Modelling.
ERIC Educational Resources Information Center
Opdahl, Andreas L.; Sindre, Guttorm
1997-01-01
Identifies weaknesses of conceptual modelling languages for the problem domain of information systems (IS) development. Outlines an approach called facet modelling of real-world problem domains to deal with the complexity of contemporary analysis problems. Shows how facet models can be defined and visualized; discusses facet modelling in relation…
Robust, Adaptive Functional Regression in Functional Mixed Model Framework
Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.
2012-01-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient
Filtered density function approach for reactive transport in groundwater
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Schüler, Lennart; Attinger, Sabine; Knabner, Peter
2016-04-01
Spatial filtering may be used in coarse-grained simulations (CGS) of reactive transport in groundwater, similar to the large eddy simulations (LES) in turbulence. The filtered density function (FDF), stochastically equivalent to a probability density function (PDF), provides a statistical description of the sub-grid, unresolved, variability of the concentration field. Besides closing the chemical source terms in the transport equation for the mean concentration, like in LES-FDF methods, the CGS-FDF approach aims at quantifying the uncertainty over the whole hierarchy of heterogeneity scales exhibited by natural porous media. Practically, that means estimating concentration PDFs on coarse grids, at affordable computational costs. To cope with the high dimensionality of the problem in case of multi-component reactive transport and to reduce the numerical diffusion, FDF equations are solved by particle methods. But, while trajectories of computational particles are modeled as stochastic processes indexed by time, the concentration's heterogeneity is modeled as a random field, with multi-dimensional, spatio-temporal sets of indices. To overcome this conceptual inconsistency, we consider FDFs/PDFs of random species concentrations weighted by conserved scalars and we show that their evolution equations can be formulated as Fokker-Planck equations describing stochastically equivalent processes in concentration-position spaces. Numerical solutions can then be approximated by the density in the concentration-position space of an ensemble of computational particles governed by the associated Itô equations. Instead of sequential particle methods we use a global random walk (GRW) algorithm, which is stable, free of numerical diffusion, and practically insensitive to the increase of the number of particles. We illustrate the general FDF approach and the GRW numerical solution for a reduced complexity problem consisting of the transport of a single scalar in groundwater
Models of Protocellular Structure, Function and Evolution
NASA Technical Reports Server (NTRS)
New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
In the absence of any record of protocells, the most direct way to test our understanding, of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction.
Stormwater infiltration trenches: a conceptual modelling approach.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2009-01-01
In recent years, limitations linked to traditional urban drainage schemes have been pointed out and new approaches are developing introducing more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage System and they include practices such as infiltration and storage tanks in order to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in literature, some gaps are still present about infiltration facilities mainly dependent on the complexity of the involved physical processes. In this study, a simplified conceptual modelling approach for the simulation of the infiltration trenches is presented. The model enables to assess the performance of infiltration trenches. The main goal is to develop a model that can be employed for the assessment of the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as benchmark. The model performed better compared to other approaches considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation of six years of rain data, the performance and the effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon on such infiltration structures. PMID:19587416
New approach to folding with the Coulomb wave function
Blokhintsev, L. D.; Savin, D. A.; Kadyrov, A. S.; Mukhamedzhanov, A. M.
2015-05-15
Due to the long-range character of the Coulomb interaction theoretical description of low-energy nuclear reactions with charged particles still remains a formidable task. One way of dealing with the problem in an integral-equation approach is to employ a screened Coulomb potential. A general approach without screening requires folding of kernels of the integral equations with the Coulomb wave. A new method of folding a function with the Coulomb partial waves is presented. The partial-wave Coulomb function both in the configuration and momentum representations is written in the form of separable series. Each term of the series is represented as a product of a factor depending only on the Coulomb parameter and a function depending on the spatial variable in the configuration space and the momentum variable if the momentum representation is used. Using a trial function, the method is demonstrated to be efficient and reliable.
A multiscale modeling approach to adhesive contact
NASA Astrophysics Data System (ADS)
Fan, KangQi; Wang, WeiDong; Zhu, YingMin; Zhang, XiuYan
2011-09-01
In order to model the adhesive contact across different length scales, a multiscale approach is developed and used to study the adhesive contact behaviors between a rigid cylinder and an elastic face-centered cubic (FCC) substrate. The approach combines an atomistic treatment of the interfacial region with an elastic mechanics method description of the continuum region. The two regions are connected by a coupling region where nodes of the continuum region are refined to atoms of the atomistic region. Moreover, the elastic constants of FCC crystals are obtained directly from the Lennard-Jones potential to describe the elastic response characteristics of the continuum region, which ensures the consistency of material proprieties between atomistic and continuum regions. The multiscale approach is examined by comparing it with the pure MD simulation, and the results indicate that the multiscale modeling approach agrees well with the MD method in studying the adhesive contact behaviors.
Graphical Approach to Model Reduction for Nonlinear Biochemical Networks
Holland, David O.; Krainak, Nicholas C.; Saucerman, Jeffrey J.
2011-01-01
Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a “concentration-clamp” procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β1-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal “kinetic biomarkers” of the overall β1-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems. PMID:21901136
Thilaga, M; Vijayalakshmi, R; Nadarajan, R; Nandagopal, D
2016-06-01
The complex nature of neuronal interactions of the human brain has posed many challenges to the research community. To explore the underlying mechanisms of neuronal activity of cohesive brain regions during different cognitive activities, many innovative mathematical and computational models are required. This paper presents a novel Common Functional Pattern Mining approach to demonstrate the similar patterns of interactions due to common behavior of certain brain regions. The electrode sites of EEG-based functional brain network are modeled as a set of transactions and node-based complex network measures as itemsets. These itemsets are transformed into a graph data structure called Functional Pattern Graph. By mining this Functional Pattern Graph, the common functional patterns due to specific brain functioning can be identified. The empirical analyses show the efficiency of the proposed approach in identifying the extent to which the electrode sites (transactions) are similar during various cognitive load states. PMID:27401999
Towards new approaches in phenological modelling
NASA Astrophysics Data System (ADS)
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
Modelling of phenological stages is based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, in phenological models still a number of parameters need to be optimised on observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). Limiting, for a fundamental improvement of these models, is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both, the validation of currently used phenological models as well as the development of mechanistic approaches. In order to develop this kind of models, changes of metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot-study on sweet cherries. Our suggested methodology is not only limited to the flowering of fruit trees, it can be also applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
Bovell, Adonis Miguel; Warncke, Kurt
2013-02-26
Ethanolamine ammonia-lyase (EAL) is a 5'-deoxyadenosylcobalamin-dependent bacterial enzyme that catalyzes the deamination of the short-chain vicinal amino alcohols, aminoethanol and (S)- and (R)-2-aminopropanol. The coding sequence for EAL is located within the 17-gene eut operon, which encodes the broad spectrum of proteins that comprise the ethanolamine utilization (eut) metabolosome suborganelle structure. A high-resolution structure of the ∼500 kDa EAL [(EutB-EutC)₂]₃ oligomer from Escherichia coli has been determined by X-ray crystallography, but high-resolution spectroscopic determinations of reactant intermediate-state structures and detailed kinetic and thermodynamic studies of EAL have been conducted for the Salmonella typhimurium enzyme. Therefore, a statistically robust homology model for the S. typhimurium EAL is constructed from the E. coli structure. The model structure is used to describe the hierarchy of EutB and EutC subunit interactions that construct the native EAL oligomer and, specifically, to address the long-standing challenge of reconstitution of the functional oligomer from isolated, purified subunits. Model prediction that the (EutB₂)₃ oligomer assembly will occur from isolated EutB, and that this hexameric structure will template the formation of the complete, native [(EutB-EutC)₂]₃ oligomer, is verified by biochemical methods. Prediction that cysteine residues on the exposed subunit-subunit contact surfaces of isolated EutB and EutC will interfere with assembly by cystine formation is verified by activating effects of disulfide reducing agents. Angstrom-scale congruence of the reconstituted and native EAL in the active site region is shown by electron paramagnetic resonance spectroscopy. Overall, the hierarchy of subunit interactions and microscopic features of the contact surfaces, which are revealed by the homology model, guide and provide a rationale for a refined genetic and biochemical approach to reconstitution of the
Bootstrapped models for intrinsic random functions
Campbell, K.
1987-01-01
The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the ''bootstrap'' in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their ''kriging variance,'' provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.
Bootstrapped models for intrinsic random functions
Campbell, K.
1988-08-01
Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
A functional approach to emotion in autonomous systems.
Sanz, Ricardo; Hernández, Carlos; Gómez, Jaime; Hernando, Adolfo
2010-01-01
The construction of fully effective systems seems to pass through the proper exploitation of goal-centric self-evaluative capabilities that let the system teleologically self-manage. Emotions seem to provide this kind of functionality to biological systems and hence the interest in emotion for function sustainment in artificial systems performing in changing and uncertain environments; far beyond the media hullabaloo of displaying human-like emotion-laden faces in robots. This chapter provides a brief analysis of the scientific theories of emotion and presents an engineering approach for developing technology for robust autonomy by implementing functionality inspired in that of biological emotions. PMID:20020352
Questionnaire of Executive Function for Dancers: An Ecological Approach
ERIC Educational Resources Information Center
Wong, Alina; Rodriguez, Mabel; Quevedo, Liliana; de Cossio, Lourdes Fernandez; Borges, Ariel; Reyes, Alicia; Corral, Roberto; Blanco, Florentino; Alvarez, Miguel
2012-01-01
There is a current debate about the ecological validity of executive function (EF) tests. Consistent with the verisimilitude approach, this research proposes the Ballet Executive Scale (BES), a self-rating questionnaire that assimilates idiosyncratic executive behaviors of classical dance community. The BES was administrated to 149 adolescents,…
From Equation to Inequality Using a Function-Based Approach
ERIC Educational Resources Information Center
Verikios, Petros; Farmaki, Vassiliki
2010-01-01
This article presents features of a qualitative research study concerning the teaching and learning of school algebra using a function-based approach in a grade 8 class, of 23 students, in 26 lessons, in a state school of Athens, in the school year 2003-2004. In this article, we are interested in the inequality concept and our aim is to…
Kelly, Mollie L; Chernoff, Jonathan
2012-04-01
p21-activated kinases are a family of highly conserved protein serine/threonine kinases that are increasingly recognized as playing essential roles in a variety of key signaling processes. Genetic analyses in mice, using constitutive or regulated gene disruption, have provided important new insights into PAK function. In this paper, we review the genetic analysis of all six PAK genes in mice. These data address the singular and redundant functions of the various PAK genes and suggest therapeutic possibilities for small molecule PAK inhibitors or activators. PMID:23162740
A New Mixed Model Based on the Velocity Structure Function
NASA Astrophysics Data System (ADS)
Brun, Christophe; Friedrich, Rainer; Da Silva, Carlos B.; Métais, Olivier
We propose a new mixed model for Large Eddy-Simulation based on the 3D spatial velocity increment. This approach blends the non-linear properties of the Increment model (Brun & Friedrich (2001)) with the eddy viscosity characteristics of the Structure Function model (Métais & Lesieur (1992)). The behaviour of this subgrid scale model is studied both via a priori tests of a plane jet at ReH=3000 and Large Eddy-Simulation of a round jet at ReD=25000. This approach allows to describe both forward and backward energy transfer encountered in transitional shear flows.
Approaches for functional analysis of flagellar proteins in African trypanosomes
Oberholzer, Michael; Lopez, Miguel A.; Ralston, Katherine S.; Hill, Kent L.
2013-01-01
The eukaryotic flagellum is a highly conserved organelle serving motility, sensory and transport functions. Although genetic, genomic and proteomic studies have led to the identification of hundreds of flagellar and putative flagellar proteins, precisely how these proteins function individually and collectively to drive flagellum motility and other functions remains to be determined. In this chapter we provide an overview of tools and approaches available for studying flagellum protein function in the protozoan parasite Trypanosoma brucei. We begin by outlining techniques for in vitro cultivation of both T. brucei lifecycle stages, as well as transfection protocols for the delivery of DNA constructs. We then describe specific assays used to assess flagellum function including flagellum preparation and quantitative motility assays. We conclude the chapter with a description of molecular genetic approaches for manipulating gene function. In summary, the availability of potent molecular tools, as well as the health and economic relevance of T. brucei as a pathogen, combine to make the parasite an attractive and integral experimental system for the functional analysis of flagellar proteins. PMID:20409810
Nuclear collective excitations: A relativistic density functional approach
NASA Astrophysics Data System (ADS)
Piekarewicz, J.
2015-08-01
Density functional theory provides the most promising, and likely unique, microscopic framework to describe nuclear systems ranging from finite nuclei to neutron stars. Properly optimized energy density functionals define a new paradigm in nuclear theory where predictive capability is possible and uncertainty quantification is demanded. Moreover, density functional theory offers a consistent approach to the linear response of the nuclear ground state. In this paper, we review the fundamental role played by nuclear collective modes in uncovering novel excitations and in guiding the optimization of the density functional. Indeed, without collective excitations the determination of the density functional remains incomplete. Without collective excitations, the equation of state of neutron-rich matter continues to be poorly constrained. We conclude with a discussion of some of the remaining challenges in this field and propose a path forward to address these challenges.
Models of Protocellular Structure, Function and Evolution
NASA Technical Reports Server (NTRS)
New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.
2001-01-01
In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction. A mutagenic approach, in which the sequences of selected molecules are randomly altered, can yield further improvements in performance or alterations of specificities. Unfortunately, the catalytic potential of nucleic acids is rather limited. Proteins are more catalytically capable but cannot be directly amplified. In the new technique, this problem is circumvented by covalently linking each protein of the initial, diverse, pool to the RNA sequence that codes for it. Then, selection is performed on the proteins, but the nucleic acids are replicated. Additional information is contained in the original extended abstract.
Järvinen, Anna; Ng, Rowena; Bellugi, Ursula
2015-11-01
Williams syndrome (WS) is a neurogenetic disorder that is saliently characterized by a unique social phenotype, most notably associated with a dramatically increased affinity and approachability toward unfamiliar people. Despite a recent proliferation of studies into the social profile of WS, the underpinnings of the pro-social predisposition are poorly understood. To this end, the present study was aimed at elucidating approach behavior of individuals with WS contrasted with typical development (TD) by employing a multidimensional design combining measures of autonomic arousal, social functioning, and two levels of approach evaluations. Given previous evidence suggesting that approach behaviors of individuals with WS are driven by a desire for social closeness, approachability tendencies were probed across two levels of social interaction: talking versus befriending. The main results indicated that while overall level of approachability did not differ between groups, an important qualitative between-group difference emerged across the two social interaction contexts: whereas individuals with WS demonstrated a similar willingness to approach strangers across both experimental conditions, TD individuals were significantly more willing to talk to than to befriend strangers. In WS, high approachability to positive faces across both social interaction levels was further associated with more normal social functioning. A novel finding linked autonomic responses with willingness to befriend negative faces in the WS group: elevated autonomic responsivity was associated with increased affiliation to negative face stimuli, which may represent an autonomic correlate of approach behavior in WS. Implications for underlying organization of the social brain are discussed. PMID:26459097
Modeling superhelical DNA: recent analytical and dynamic approaches.
Schlick, T
1995-04-01
During the past year, a variety of diverse and complementary approaches have been presented for modeling superhelical DNA, offering new physical and biological insights into fundamental functional processes of DNA. Analytical approaches have probed deeper into the effects of entropy and thermal fluctuations on DNA structure and on various topological constraints induced by DNA-binding proteins. In tandem, new kinetic approaches--by molecular, Langevin and Brownian dynamics, as well as extensions of elastic-rod theory--have begun to offer dynamic information associated with supercoiling. Such dynamic approaches, along with other equilibrium studies, are refining the basic elastic-rod and polymer framework and incorporating more realistic treatments of salt and sequence-specific features. These collective advances in modeling large DNA molecules, in concert with technological innovations, are pointing to an exciting interplay between theory and experiment on the horizon. PMID:7648328
Towards a Multiscale Approach to Cybersecurity Modeling
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties--- connectivity, distance, and centrality--- for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of \\emph{multiscale} graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
A Conceptual Modeling Approach for OLAP Personalization
NASA Astrophysics Data System (ADS)
Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan
Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as data mart) is used, these structures would be also too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.
Post-16 Biology--Some Model Approaches?
ERIC Educational Resources Information Center
Lock, Roger
1997-01-01
Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)
A Functional Developmental Approach to Autism Spectrum Disorders.
ERIC Educational Resources Information Center
Greenspan, Stanley I.; Wieder, Serena
1999-01-01
This article describes a dynamic, developmental model to be used to guide assessment and intervention in children with autism. The Developmental, Individual-Difference, Relationship-Based model conceptualizes the child's functional emotional developmental capacities, individual differences in sensory processing and modulation, motor planning and…
Tensor renormalization group approach to classical dimer models
NASA Astrophysics Data System (ADS)
Roychowdhury, Krishanu; Huang, Ching-Yu
2015-05-01
We analyze classical dimer models on a square and a triangular lattice using a tensor network representation of the dimers. The correlation functions are numerically calculated using the recently developed "tensor renormalization group" (TRG) technique. The partition function for the dimer problem can be calculated exactly by the Pfaffian method, which is used here as a platform for comparing the numerical results. The TRG approach turns out to be a powerful tool for describing gapped systems with exponentially decaying correlations very efficiently due to its fast convergence. This is the case for the dimer model on the triangular lattice. However, the convergence becomes very slow and unstable in the case of the square lattice where the model has algebraically decaying correlations. We highlight these aspects with numerical simulations and critically appraise the robustness of the TRG approach by contrasting the results for small and large system sizes against the exact calculations. Furthermore, we benchmark our TRG results with the classical Monte Carlo method.
Semitransparent one-dimensional potential: a Green's function approach
NASA Astrophysics Data System (ADS)
Maldonado-Villamizar, F. H.
2015-06-01
We study the unstable harmonic oscillator and the unstable linear potential in the presence of the point potential, which is the superposition of the Dirac δ (x) and its derivative {{δ }\\prime }(x). Using the physical boundary conditions for the Green's function we derive for both systems the resonance poles and the resonance wave functions. The matching conditions for the resonance wave functions coincide with those obtained by the self-adjoint extensions of the point potentials and also by the modelling of the {{δ }\\prime }(x) function. We find that, with our definitions, the pure b{{δ }\\prime }(x) barrier is semi-transparent independent of the value of b.
Limitations of Discrete Stereology: Steps Toward a More Functional Approach
NASA Astrophysics Data System (ADS)
Proussevitch, A. A.; Sahagian, D. L.; Jutzeler, M.
2012-12-01
Stereology is a statistical and mathematical means to obtain 3D information (such as size, shape, and spatial orientation statistical distributions) from observed 2D cross-section cuts through a volume containing many embedded objects. Examples are SEM imagery of voids in a volcanic rock or tephra, objects in an X-ray tomographic slice, a thin section, a polished section of granite, a planar outcrop of welded volcanic pyroclasts, or sizing of igneous, sedimentary and metamorphic formations from maps. There are three possible approaches to addressing the stereology formulation: 1. Rough approximation using binned data conversion, i.e. discrete stereology. (BAD) 2. Semi-functional data deconvolution, i.e. hybrid of discrete and functional stereology. (BETTER) 3. Solution with 2D-3D functional transformation, i.e. functional stereology (the next step) (BEST). Discrete Stereology: Historically, stereology has been limited to observations of object sizes grouped into discrete bins, or what we now call "discrete" stereology. This approach suffers from severe limitations when applied to natural materials. The most serious of which are exponential error propagation and bias introduced by small numbers of objects in the extremities of the size distribution, and compounded non-spherical shapes and preferred spatial orientations. These limitations do not allow for accurate size distributions of pyroclastic materials, vesicles, and crystals, except for impractically large sample populations. Semi-Functional Stereology: In order to improve the method, a simple first step already taken is "semi-functional" stereology. It combines both discrete object sizing and pre-defined functions of 2D and 3D distributions. Discrete binned observational data is represented by a histogram from which a best fit function for 2D distribution is assigned. This function is then discretized and a 3D distribution is derived from that as in discrete stereology. This approach eliminates some problems
Building Water Models: A Different Approach
2015-01-01
Simplified classical water models are currently an indispensable component in practical atomistic simulations. Yet, despite several decades of intense research, these models are still far from perfect. Presented here is an alternative approach to constructing widely used point charge water models. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than the symmetry. Instead, we optimize the distribution of point charges to best describe the “electrostatics” of the water molecule. The resulting “optimal” 3-charge, 4-point rigid water model (OPC) reproduces a comprehensive set of bulk properties significantly more accurately than commonly used rigid models: average error relative to experiment is 0.76%. Close agreement with experiment holds over a wide range of temperatures. The improvements in the proposed model extend beyond bulk properties: compared to common rigid models, predicted hydration free energies of small molecules using OPC are uniformly closer to experiment, with root-mean-square error <1 kcal/mol. PMID:25400877
An Evolutionary Computation Approach to Examine Functional Brain Plasticity.
Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G
2016-01-01
One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, the functional relationship between a ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions seperated by finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
An Evolutionary Computation Approach to Examine Functional Brain Plasticity
Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.
2016-01-01
One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, the functional relationship between a ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions seperated by finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
An object-oriented approach to energy-economic modeling
Wise, M.A.; Fox, J.A.; Sands, R.D.
1993-12-01
In this paper, the authors discuss the experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, will be detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on the experience in building energy-economic models with procedure-oriented approaches and languages.
Quasielastic scattering with the relativistic Green’s function approach
Meucci, Andrea; Giusti, Carlotta
2015-05-15
A relativistic model for quasielastic (QE) lepton-nucleus scattering is presented. The effects of final-state interactions (FSI) between the ejected nucleon and the residual nucleus are described in the relativistic Green’s function (RGF) model where FSI are consistently described with exclusive scattering using a complex optical potential. The results of the model are compared with experimental results of electron and neutrino scattering.
Elements of a function analytic approach to probability.
Ghanem, Roger Georges; Red-Horse, John Robert
2008-02-01
We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.
Accuracy of functional surfaces on comparatively modeled protein structures
Zhao, Jieling; Dundas, Joe; Kachalo, Sema; Ouyang, Zheng; Liang, Jie
2012-01-01
Identification and characterization of protein functional surfaces are important for predicting protein function, understanding enzyme mechanism, and docking small compounds to proteins. As the rapid speed of accumulation of protein sequence information far exceeds that of structures, constructing accurate models of protein functional surfaces and identify their key elements become increasingly important. A promising approach is to build comparative models from sequences using known structural templates such as those obtained from structural genome projects. Here we assess how well this approach works in modeling binding surfaces. By systematically building three-dimensional comparative models of proteins using Modeller, we determine how well functional surfaces can be accurately reproduced. We use an alpha shape based pocket algorithm to compute all pockets on the modeled structures, and conduct a large-scale computation of similarity measurements (pocket RMSD and fraction of functional atoms captured) for 26,590 modeled enzyme protein structures. Overall, we find that when the sequence fragment of the binding surfaces has more than 45% identity to that of the tempalte protein, the modeled surfaces have on average an RMSD of 0.5 Å, and contain 48% or more of the binding surface atoms, with nearly all of the important atoms in the signatures of binding pockets captured. PMID:21541664
A hybrid modeling approach for option pricing
NASA Astrophysics Data System (ADS)
Hajizadeh, Ehsan; Seifi, Abbas
2011-11-01
The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations. One of these limitations is the assumption that the underlying probability distribution is lognormal and this is so controversial. We propose a couple of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to option pricing model is volatility. In this paper, we use three popular GARCH type model for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options for S&P 500 index. We compare the results with those of Black-Scholes model and show that both neural network and neuro-fuzzy network models outperform Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options, neural network model performs better and for both in-the-money and an out-of-the money option, neuro-fuzzy model provides better results.
Generating functional approach to Bose-Einstein correlations
Suzuki, N.; Biyajima, M.; Andreev, I.V.
1997-11-01
Bose-Einstein correlations are considered in the presence of M independent chaotic sources and a coherent source. Our approach is an extension of the formulation in the quantum optics given by Glauber and Lachs. The generating functional (GF) of Bose-Einstein correlation (BEC) functions is derived, and higher order BEC functions are obtained from the GF. A diagrammatic representation for cumulants is made. The number M is explicitly contained in our formulation, which is different from that given by Cramer {ital et al.} The possibility of estimating the number M from the analysis of BEC functions and cumulants is pointed out. Moreover, source size dependence of multiplicity distributions is shown in a simplified case. {copyright} {ital 1997} {ital The American Physical Society}
New approaches to enhance active steering system functionalities: preliminary results
NASA Astrophysics Data System (ADS)
Serarslan, Benan
2014-09-01
An important development of the steering systems in general is active steering systems like active front steering and steer-by-wire systems. In this paper the current functional possibilities in application of active steering systems are explored. A new approach and additional functionalities are presented that can be implemented to the active steering systems without additional hardware such as new sensors and electronic control units. Commercial active steering systems are controlling the steering angle depending on the driving situation only. This paper introduce methods for enhancing active steering system functionalities depending not only on the driving situation but also vehicle parameters like vehicle mass, tyre and road condition. In this regard, adaptation of the steering ratio as a function of above mentioned vehicle parameters is presented with examples. With some selected vehicle parameter changes, the reduction of the undesired influences on vehicle dynamics of these parameter changes has been demonstrated theoretically with simulations and with real-time driving measurements.
Computational approaches for rational design of proteins with novel functionalities
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643
A subgrid based approach for morphodynamic modelling
NASA Astrophysics Data System (ADS)
Volp, N. D.; van Prooijen, B. C.; Pietrzak, J. D.; Stelling, G. S.
2016-07-01
To improve the accuracy and the efficiency of morphodynamic simulations, we present a subgrid based approach for a morphodynamic model. This approach is well suited for areas characterized by sub-critical flow, like in estuaries, coastal areas and in low land rivers. This new method uses a different grid resolution to compute the hydrodynamics and the morphodynamics. The hydrodynamic computations are carried out with a subgrid based, two-dimensional, depth-averaged model. This model uses a coarse computational grid in combination with a subgrid. The subgrid contains high resolution bathymetry and roughness information to compute volumes, friction and advection. The morphodynamic computations are carried out entirely on a high resolution grid, the bed grid. It is key to find a link between the information defined on the different grids in order to guaranty the feedback between the hydrodynamics and the morphodynamics. This link is made by using a new physics-based interpolation method. The method interpolates water levels and velocities from the coarse grid to the high resolution bed grid. The morphodynamic solution improves significantly when using the subgrid based method compared to a full coarse grid approach. The Exner equation is discretised with an upwind method based on the direction of the bed celerity. This ensures a stable solution for the Exner equation. By means of three examples, it is shown that the subgrid based approach offers a significant improvement at a minimal computational cost.
A Bayesian Shrinkage Approach for AMMI Models.
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, and resulted in models being selected that were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and cross validation based on leave-one-out. This characteristic allowed more parsimonious models to be chosen and more GEI pattern retained on the first two components. The resulting model chosen by posterior distribution of singular value was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible interval for AMMI biplot plus the choice of AMMI model based on direct posterior
A Bayesian Shrinkage Approach for AMMI Models
de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, and resulted in models being selected that were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and cross validation based on leave-one-out. This characteristic allowed more parsimonious models to be chosen and more GEI pattern retained on the first two components. The resulting model chosen by posterior distribution of singular value was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible interval for AMMI biplot plus the choice of AMMI model based on direct posterior
Green-function approach for scattering quantum walks
Andrade, F. M.; Luz, M. G. E. da
2011-10-15
In this work a Green-function approach for scattering quantum walks is developed. The exact formula has the form of a sum over paths and always can be cast into a closed analytic expression for arbitrary topologies and position-dependent quantum amplitudes. By introducing the step and path operators, it is shown how to extract any information about the system from the Green function. The method's relevant features are demonstrated by discussing in detail an example, a general diamond-shaped graph.
Bayesian non-parametrics and the probabilistic approach to modelling
Ghahramani, Zoubin
2013-01-01
Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609
Fuzzy set approach to quality function deployment: An investigation
NASA Technical Reports Server (NTRS)
Masud, Abu S. M.
1992-01-01
The final report of the 1992 NASA/ASEE Summer Faculty Fellowship at the Space Exploration Initiative Office (SEIO) in Langley Research Center is presented. Quality Function Deployment (QFD) is a process, focused on facilitating the integration of the customer's voice in the design and development of a product or service. Various input, in the form of judgements and evaluations, are required during the QFD analyses. All the input variables in these analyses are treated as numeric variables. The purpose of the research was to investigate how QFD analyses can be performed when some or all of the input variables are treated as linguistic variables with values expressed as fuzzy numbers. The reason for this consideration is that human judgement, perception, and cognition are often ambiguous and are better represented as fuzzy numbers. Two approaches for using fuzzy sets in QFD have been proposed. In both cases, all the input variables are considered as linguistic variables with values indicated as linguistic expressions. These expressions are then converted to fuzzy numbers. The difference between the two approaches is due to how the QFD computations are performed with these fuzzy numbers. In Approach 1, the fuzzy numbers are first converted to their equivalent crisp scores and then the QFD computations are performed using these crisp scores. As a result, the output of this approach are crisp numbers, similar to those in traditional QFD. In Approach 2, all the QFD computations are performed with the fuzzy numbers and the output are fuzzy numbers also. Both the approaches have been explained with the help of illustrative examples of QFD application. Approach 2 has also been applied in a QFD application exercise in SEIO, involving a 'mini moon rover' design. The mini moon rover is a proposed tele-operated vehicle that will traverse and perform various tasks, including autonomous operations, on the moon surface. The output of the moon rover application exercise is a
Neurocomputing approaches to modelling of drying process dynamics
Kaminski, W.; Strumillo, P.; Tomczak, E.
1998-07-01
The application of artificial neural networks to mathematical modeling of drying kinetics, degradation kinetics and smoothing of experimental data is discussed in the paper. A theoretical foundation of drying process description by means of artificial neural networks is presented. Two network types are proposed for drying process modelling, namely the multilayer perceptron network and the radial basis functions network. These were validated experimentally for fresh green peals and diced potatoes which represent diverse food products. Network training procedures based on experimental data are explained. Additionally, the proposed neural network modelling approach is tested on drying experiments of silica gel saturated with ascorbic acid solution.
Quantum cluster approach to the spinful Haldane-Hubbard model
NASA Astrophysics Data System (ADS)
Wu, Jingxiang; Faye, Jean Paul Latyr; Sénéchal, David; Maciejko, Joseph
2016-02-01
We study the spinful fermionic Haldane-Hubbard model at half-filling using a combination of quantum cluster methods: cluster perturbation theory, the variational cluster approximation, and cluster dynamical mean-field theory. We explore possible zero-temperature phases of the model as a function of onsite repulsive interaction strength and next-nearest-neighbor hopping amplitude and phase. Our approach allows us to access the regime of intermediate interaction strength, where charge fluctuations are significant and effective spin model descriptions may not be justified. Our approach also improves upon mean-field solutions of the Haldane-Hubbard model by retaining local quantum fluctuations and treating them nonperturbatively. We find a correlated topological Chern insulator for weak interactions and a topologically trivial Néel antiferromagnetic insulator for strong interactions. For intermediate interactions, we find that topologically nontrivial Néel antiferromagnetic insulating phases and/or a topologically nontrivial nonmagnetic insulating phase may be stabilized.
A Multi-Level Model of Moral Functioning Revisited
ERIC Educational Resources Information Center
Reed, Don Collins
2009-01-01
The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…
Systems Engineering Interfaces: A Model Based Approach
NASA Technical Reports Server (NTRS)
Fosse, Elyse; Delp, Christopher
2013-01-01
Currently: Ops Rev developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation de-couples interfaces and instances of interaction Future: A Mission MOSE implements the approach and uses the model based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.
Algebraic operator approach to gas kinetic models
NASA Astrophysics Data System (ADS)
Il'ichov, L. V.
1997-02-01
Some general properties of the linear Boltzmann kinetic equation are used to present it in the form ∂ tϕ = - Â†Âϕ with the operators ÂandÂ† possessing some nontrivial algebraic properties. When applied to the Keilson-Storer kinetic model, this method gives an example of quantum ( q-deformed) Lie algebra. This approach provides also a natural generalization of the “kangaroo model”.
Exact Approach to Inflationary Universe Models
NASA Astrophysics Data System (ADS)
del Campo, Sergio
In this chapter we introduce a study of inflationary universe models that are characterized by a single scalar inflation field . The study of these models is based on two dynamical equations: one corresponding to the Klein-Gordon equation for the inflaton field and the other to a generalized Friedmann equation. After describing the kinematics and dynamics of the models under the Hamilton-Jacobi scheme, we determine in some detail scalar density perturbations and relic gravitational waves. We also introduce the study of inflation under the hierarchy of the slow-roll parameters together with the flow equations. We apply this approach to the modified Friedmann equation that we call the Friedmann-Chern-Simons equation, characterized by F(H) = H^2- α H4, and the brane-world inflationary models expressed by the modified Friedmann equation.
Muñoz-Martínez, Amanda M; Coletti, Juan Pablo
2015-01-01
Abstract Functional Analytic Psychotherapy (FAP) is a therapeutic approach developed in
Gauge-invariant Green function dynamics: A unified approach
Swiecicki, Sylvia D. Sipe, J.E.
2013-11-15
We present a gauge-invariant description of Green function dynamics introduced by means of a generalized Peirels phase involving an arbitrary differentiable path in space–time. Two other approaches to formulating a gauge-invariant description of systems, the Green function treatment of Levanda and Fleurov [M. Levanda, V. Fleurov, J. Phys.: Condens. Matter 6 (1994) 7889] and the usual multipolar expansion for an atom, are shown to arise as special cases of our formalism. We argue that the consideration of paths in the generalized Peirels phase that do not lead to introduction of an effective gauge-invariant Hamiltonian with polarization and magnetization fields may prove useful for the treatment of the response of materials with short electron correlation lengths. -- Highlights: •Peirels phase for an arbitrary path in space–time established. •Gauge-invariant Green functions and the Power–Zienau–Wooley transformation connected. •Limitations on possible polarization and magnetization fields established.
Controlled Chemistry Approach to the Oxo-Functionalization of Graphene.
Eigler, Siegfried
2016-05-17
Graphene is the best-studied 2D material available. However, its production is still challenging and the quality depends on the preparation procedure. Now, more than a decade after the outstanding experiments conducted on graphene, the most successful wet-chemical approach to graphene and functionalized graphene is based on the oxidation of graphite. Graphene oxide has been known for more than a century; however, the structure bears variable large amounts of lattice defects that render the development of a controlled chemistry impossible. The controlled oxo-functionalization of graphene avoids the formation of defects within the σ-framework of carbon atoms, making the synthesis of specific molecular architectures possible. The scope of this review is to introduce the field of oxo-functionalizing graphene. In particular, the differences between GO and oxo-functionalized graphene are described in detail. Moreover analytical methods that allow determining lattice defects and functional groups are introduced followed by summarizing the current state of controlled oxo-functionalization of graphene. PMID:26990805
Development of a structured approach for decomposition of complex systems on a functional basis
NASA Astrophysics Data System (ADS)
Yildirim, Unal; Felician Campean, I.
2014-07-01
The purpose of this paper is to present the System State Flow Diagram (SSFD) as a structured and coherent methodology to decompose a complex system on a solution- independent functional basis. The paper starts by reviewing common function modelling frameworks in literature and discusses practical requirements of the SSFD in the context of the current literature and current approaches in industry. The proposed methodology is illustrated through the analysis of a case study: design analysis of a generic Bread Toasting System (BTS).
Approaches to modelling hydrology and ecosystem interactions
NASA Astrophysics Data System (ADS)
Silberstein, Richard P.
2014-05-01
As the pressures of industry, agriculture and mining on groundwater resources increase there is a burgeoning un-met need to be able to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models quantitative assessments of impacts from water use intensification on water dependent ecosystems under changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water related stresses that may develop around mining and well operations, climate stresses, such as rainfall and temperature, biological stresses, such as diseases and invasive species, and competition such as encroachment from other competing land uses. Indirect water impacts could be, for example, a change in groundwater conditions has an impact on stream flow regime, and hence aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology with a view to developing a conceptual framework linking a biophysically defensable model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.
NARX prediction of some rare chaotic flows: Recurrent fuzzy functions approach
NASA Astrophysics Data System (ADS)
Goudarzi, Sobhan; Jafari, Sajad; Moradi, Mohammad Hassan; Sprott, J. C.
2016-02-01
The nonlinear and dynamic accommodating capability of time domain models makes them a useful representation of chaotic time series for analysis, modeling and prediction. This paper is devoted to the modeling and prediction of chaotic time series with hidden attractors using a nonlinear autoregressive model with exogenous inputs (NARX) based on a novel recurrent fuzzy functions (RFFs) approach. Case studies of recently introduced chaotic systems with hidden attractors plus classical chaotic systems demonstrate that the proposed modeling methodology exhibits better prediction performance from different viewpoints (short term and long term) compared to some other existing methods.
Modeling of human artery tissue with probabilistic approach.
Xiong, Linfei; Chui, Chee-Kong; Fu, Yabo; Teo, Chee-Leong; Li, Yao
2015-04-01
Accurate modeling of biological soft tissue properties is vital for realistic medical simulation. Mechanical response of biological soft tissue always exhibits a strong variability due to the complex microstructure and different loading conditions. The inhomogeneity in human artery tissue is modeled with a computational probabilistic approach by assuming that the instantaneous stress at a specific strain varies according to normal distribution. Material parameters of the artery tissue which are modeled with a combined logarithmic and polynomial energy equation are represented by a statistical function with normal distribution. Mean and standard deviation of the material parameters are determined using genetic algorithm (GA) and inverse mean-value first-order second-moment (IMVFOSM) method, respectively. This nondeterministic approach was verified using computer simulation based on the Monte-Carlo (MC) method. Cumulative distribution function (CDF) of the MC simulation corresponds well with that of the experimental stress-strain data and the probabilistic approach is further validated using data from other studies. By taking into account the inhomogeneous mechanical properties of human biological tissue, the proposed method is suitable for realistic virtual simulation as well as an accurate computational approach for medical device validation. PMID:25748681
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Finite Element Model Calibration Approach for Area I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Density functional calculations on model tyrosyl radicals.
Himo, F; Gräslund, A; Eriksson, L A
1997-01-01
A gradient-corrected density functional theory approach (PWP86) has been applied, together with large basis sets (IGLO-III), to investigate the structure and hyperfine properties of model tyrosyl free radicals. In nature, these radicals are observed in, e.g., the charge transfer pathways in photosystem II (PSII) and in ribonucleotide reductases (RNRs). By comparing spin density distributions and proton hyperfine couplings with experimental data, it is confirmed that the tyrosyl radicals present in the proteins are neutral. It is shown that hydrogen bonding to the phenoxyl oxygen atom, when present, causes a reduction in spin density on O and a corresponding increase on C4. Calculated proton hyperfine coupling constants for the beta-protons show that the alpha-carbon is rotated 75-80 degrees out of the plane of the ring in PSII and Salmonella typhimurium RNR, but only 20-30 degrees in, e.g., Escherichia coli, mouse, herpes simplex, and bacteriophage T4-induced RNRs. Furthermore, based on the present calculations, we have revised the empirical parameters used in the experimental determination of the oxygen spin density in the tyrosyl radical in E. coli RNR and of the ring carbon spin densities, from measured hyperfine coupling constants. Images FIGURE 1 FIGURE 5 PMID:9083661
Datamining approaches for modeling tumor control probability
Naqa, Issam El; Deasy, Joseph O.; Mu, Yi; Huang, Ellen; Hope, Andrew J.; Lindsay, Patricia E.; Apte, Aditya; Alaly, James; Bradley, Jeffrey D.
2016-01-01
Background Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data before when applied prospectively. Material and methods Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Results Using a dataset of 56 patients with primary NCSLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs = 0.68 on leave-one-out testing compared to logistic regression (rs = 0.4), Poisson-based TCP (rs = 0.33), and cell kill equivalent uniform dose model (rs = 0.17). Conclusions The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications. PMID:20192878
Thomas, Holly N; Thurston, Rebecca C
2016-05-01
A satisfying sex life is an important component of overall well-being, but sexual dysfunction is common, especially in midlife women. The aim of this review is (a) to define sexual function and dysfunction, (b) to present theoretical models of female sexual response, (c) to examine longitudinal studies of how sexual function changes during midlife, and (d) to review treatment options. Four types of female sexual dysfunction are currently recognized: Female Orgasmic Disorder, Female Sexual Interest/Arousal Disorder, Genito-Pelvic Pain/Penetration Disorder, and Substance/Medication-Induced Sexual Dysfunction. However, optimal sexual function transcends the simple absence of dysfunction. A biopsychosocial approach that simultaneously considers physical, psychological, sociocultural, and interpersonal factors is necessary to guide research and clinical care regarding women's sexual function. Most longitudinal studies reveal an association between advancing menopause status and worsening sexual function. Psychosocial variables, such as availability of a partner, relationship quality, and psychological functioning, also play an integral role. Future directions for research should include deepening our understanding of how sexual function changes with aging and developing safe and effective approaches to optimizing women's sexual function with aging. Overall, holistic, biopsychosocial approaches to women's sexual function are necessary to fully understand and treat this key component of midlife women's well-being. PMID:27013288
Functionalized Congener Approach to the Design of Ligands for G Protein–Coupled Receptors (GPCRs)
Jacobson, Kenneth A.
2009-01-01
Functionalized congeners, in which a chemically functionalized chain is incorporated at an insensitive site on a pharmacophore, have been designed from the agonist and antagonist ligands of various G protein–coupled receptors (GPCRs). These chain extensions enable a conjugation strategy for detecting and characterizing GPCR structure and function and pharmacological modulation. The focus in many studies of functionalized congeners has been on two families of GPCRs: those responding to extracellular purines and pyrimidines—i.e., adenosine receptors (ARs) and P2Y nucleotide receptors. Functionalized congeners of small-molecule as ligands for other GPCRs and non-G protein coupled receptors have also been designed. For example, among biogenic amine neurotransmitter receptors, muscarinic acetylcholine receptor antagonists and adrenergic receptor ligands have been studied with a functionalized congener approach. Adenosine A1, A2A, and A3 receptor functionalized congeners have yielded macromolecular conjugates, irreversibly binding AR ligands for receptor inactivation and crosslinking, radioactive probes that use prosthetic groups, immobilized ligands for affinity chromatography, and dual-acting ligands that function as binary drugs. Poly(amidoamine) dendrimers have served as nanocarriers for covalently conjugated AR functionalized congeners. Rational methods of ligand design derived from molecular modeling and templates have been included in these studies. Thus, the design of novel ligands, both small molecules and macromolecular conjugates, for studying the chemical and biological properties of GPCRs have been developed with this approach, has provided researchers with a strategy that is more versatile than the classical medicinal chemical approaches. PMID:19405524
NASA Astrophysics Data System (ADS)
Khajehei, S.; Madadgar, S.; Moradkhani, H.
2014-12-01
The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in meteorological forcing by using the statistical methods based on the conditional probability density functions (pdf). However, one of the requirements for current methods is to assume the Gaussian distribution for the marginal distribution of the observed and modeled meteorology. Here we propose a Bayesian approach based on Copula functions to develop the conditional distribution of precipitation forecast needed in deriving a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach in capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distribution of univariate marginal distributions, which are capable to model the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly forecast of CPC with 0.25x0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with Ensemble Pre-Processor approach as a common procedure used by National Weather Service River forecast centers in reproducing observed climatology during a ten-year verification period (2000-2010).
Bioactive Functions of Milk Proteins: a Comparative Genomics Approach.
Sharp, Julie A; Modepalli, Vengama; Enjapoori, Ashwanth Kumar; Bisana, Swathi; Abud, Helen E; Lefevre, Christophe; Nicholas, Kevin R
2014-12-01
The composition of milk includes factors required to provide appropriate nutrition for the growth of the neonate. However, it is now clear that milk has many functions and comprises bioactive molecules that play a central role in regulating developmental processes in the young while providing a protective function for both the suckled young and the mammary gland during the lactation cycle. Identifying these bioactives and their physiological function in eutherians can be difficult and requires extensive screening of milk components that may function to improve well-being and options for prevention and treatment of disease. New animal models with unique reproductive strategies are now becoming increasingly relevant to search for these factors. PMID:26115887
Meson wave function from holographic models
Vega, Alfredo; Schmidt, Ivan; Branz, Tanja; Gutsche, Thomas; Lyubovitskij, Valery E.
2009-09-01
We consider the light-front wave function for the valence quark state of mesons using the AdS/CFT correspondence, as has been suggested by Brodsky and Teramond. Two kinds of wave functions, obtained in different holographic Soft-Wall models, are discussed.
A functional approach to movement analysis and error identification in sports and physical education
Hossner, Ernst-Joachim; Schiebl, Frank; Göhner, Ulrich
2015-01-01
In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is not anymore the “ideal” movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognizing constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioral shaping, to cognitive models of modular effect-related motor control as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognizing functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors. PMID:26441717
Hossner, Ernst-Joachim; Schiebl, Frank; Göhner, Ulrich
2015-01-01
In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is not anymore the "ideal" movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognizing constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioral shaping, to cognitive models of modular effect-related motor control as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognizing functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors. PMID:26441717
Das, Sayoni; Lee, David; Sillitoe, Ian; Dawson, Natalie L.; Lees, Jonathan G.; Orengo, Christine A.
2015-01-01
Motivation: Computational approaches that can predict protein functions are essential to bridge the widening function annotation gap especially since <1.0% of all proteins in UniProtKB have been experimentally characterized. We present a domain-based method for protein function classification and prediction of functional sites that exploits functional sub-classification of CATH superfamilies. The superfamilies are sub-classified into functional families (FunFams) using a hierarchical clustering algorithm supervised by a new classification method, FunFHMMer. Results: FunFHMMer generates more functionally coherent groupings of protein sequences than other domain-based protein classifications. This has been validated using known functional information. The conserved positions predicted by the FunFams are also found to be enriched in known functional residues. Moreover, the functional annotations provided by the FunFams are found to be more precise than other domain-based resources. FunFHMMer currently identifies 110 439 FunFams in 2735 superfamilies which can be used to functionally annotate > 16 million domain sequences. Availability and implementation: All FunFam annotation data are made available through the CATH webpages (http://www.cathdb.info). The FunFHMMer webserver (http://www.cathdb.info/search/by_funfhmmer) allows users to submit query sequences for assignment to a CATH FunFam. Contact: sayoni.das.12@ucl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26139634
Garcia-Aldea, David; Alvarellos, J. E.
2008-02-15
We propose a kinetic energy density functional scheme with nonlocal terms based on the von Weizsaecker functional, instead of the more traditional approach where the nonlocal terms have the structure of the Thomas-Fermi functional. The proposed functionals recover the exact kinetic energy and reproduce the linear response function of homogeneous electron systems. In order to assess their quality, we have tested the total kinetic energies as well as the kinetic energy density for atoms. The results show that these nonlocal functionals give as good results as the most sophisticated functionals in the literature. The proposed scheme for constructing the functionals means a step ahead in the field of fully nonlocal kinetic energy functionals, because they are capable of giving better local behavior than the semilocal functionals, yielding at the same time accurate results for total kinetic energies. Moreover, the functionals enjoy the possibility of being evaluated as a single integral in momentum space if an adequate reference density is defined, and then quasilinear scaling for the computational cost can be achieved.
A Facile Approach to Functionalize Cell Membrane-Coated Nanoparticles
Zhou, Hao; Fan, Zhiyuan; Lemons, Pelin K.; Cheng, Hao
2016-01-01
Convenient strategies to provide cell membrane-coated nanoparticles (CM-NPs) with multi-functionalities beyond the natural function of cell membranes would dramatically expand the application of this emerging class of nanomaterials. We have developed a facile approach to functionalize CM-NPs by chemically modifying live cell membranes prior to CM-NP fabrication using a bifunctional linker, succinimidyl-[(N-maleimidopropionamido)-polyethyleneglycol] ester (NHS-PEG-Maleimide). This method is particularly suitable to conjugate large bioactive molecules such as proteins on cell membranes as it establishes a strong anchorage and enable the control of linker length, a critical parameter for maximizing the function of anchored proteins. As a proof of concept, we show the conjugation of human recombinant hyaluronidase, PH20 (rHuPH20) on red blood cell (RBC) membranes and demonstrate that long linker (MW: 3400) is superior to short linker (MW: 425) for maintaining enzyme activity, while minimizing the changes to cell membranes. When the modified membranes were fabricated into RBC membrane-coated nanoparticles (RBCM-NPs), the conjugated rHuPH20 can assist NP diffusion more efficiently than free rHuPH20 in matrix-mimicking gels and the pericellular hyaluronic acid matrix of PC3 prostate cancer cells. After quenching the unreacted chemical groups with polyethylene glycol, we demonstrated that the rHuPH20 modification does not reduce the ultra-long blood circulation time of RBCM-NPs. Therefore, this surface engineering approach provides a platform to functionlize CM-NPs without sacrificing the natural function of cell membranes. PMID:27217834
A function-based approach to cockpit procedure aids
NASA Technical Reports Server (NTRS)
Phatak, Anil V.; Jain, Parveen; Palmer, Everett
1990-01-01
The objective of this research is to develop and test a cockpit procedural aid that can compose and present procedures that are appropriate for the given flight situation. The procedure would indicate the status of the aircraft engineering systems, and the environmental conditions. Prescribed procedures already exist for normal as well as for a number of non-normal and emergency situations, and can be presented to the crew using an interactive cockpit display. However, no procedures are prescribed or recommended for a host of plausible flight situations involving multiple malfunctions compounded by adverse environmental conditions. Under these circumstances, the cockpit procedural aid must review the prescribed procedures for the individual malfunction (when available), evaluate the alternatives or options, and present one or more composite procedures (prioritized or unprioritized) in response to the given situation. A top-down function-based conceptual approach towards composing and presenting cockpit procedures is being investigated. This approach is based upon the thought process that an operating crew must go through while attempting to meet the flight objectives given the current flight situation. In order to accomplish the flight objectives, certain critical functions must be maintained during each phase of the flight, using the appropriate procedures or success paths. The viability of these procedures depends upon the availability of required resources. If resources available are not sufficient to meet the requirements, alternative procedures (success paths) using the available resources must be constructed to maintain the critical functions and the corresponding objectives. If no success path exists that can satisfy the critical functions/objectives, then the next level of critical functions/objectives must be selected and the process repeated. Information is given in viewgraph form.
Nuclear Functions of Nucleolin through Global Proteomics and Interactomic Approaches.
Salvetti, Anna; Couté, Yohann; Epstein, Alberto; Arata, Loredana; Kraut, Alexandra; Navratil, Vincent; Bouvet, Philippe; Greco, Anna
2016-05-01
Nucleolin (NCL) is a major component of the cell nucleolus, which has the ability to rapidly shuttle to several other cells' compartments. NCL plays important roles in a variety of essential functions, among which are ribosome biogenesis, gene expression, and cell growth. However, the precise mechanisms underlying NCL functions are still unclear. Our study aimed to provide new information on NCL functions via the identification of its nuclear interacting partners. Using an interactomics approach, we identified 140 proteins co-purified with NCL, among which 100 of them were specifically found to be associated with NCL after RNase digestion. The functional classification of these proteins confirmed the prominent role of NCL in ribosome biogenesis and additionally revealed the possible involvement of nuclear NCL in several pre-mRNA processing pathways through its interaction with RNA helicases and proteins participating in pre-mRNA splicing, transport, or stability. NCL knockdown experiments revealed that NCL regulates the localization of EXOSC10 and the amount of ZC3HAV1, two components of the RNA exosome, further suggesting its involvement in the control of mRNA stability. Altogether, this study describes the first nuclear interactome of human NCL and provides the basis for further understanding the mechanisms underlying the essential functions of this nucleolar protein. PMID:27049334
Merging Digital Surface Models Implementing Bayesian Approaches
NASA Astrophysics Data System (ADS)
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult to obtain many measurements or it would be very costly, thus the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and improving some characteristics such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
Hierarchical Approaches for Systems Modeling in Cardiac Development
Gould, Russell A.; Aboulmouna, Lina M.; Varner, Jeffrey D.; Butcher, Jonathan T.
2013-01-01
Ordered cardiac morphogenesis and function is essential for all vertebrate life. The heart begins as a simple contractile tube, but quickly grows and morphs into a multi-chambered pumping organ, complete with valves, while maintaining regulation of blood flow and nutrient distribution. Though not identical, cardiac morphogenesis shares many molecular and morphological processes across vertebrate species. Quantitative data across multiple time and length scales have been gathered through decades of reductionist single variable analyses. These range from detailed molecular signaling pathways at the cellular levels to cardiac function at the tissue/organ levels. However, none of these components act in true isolation from others, and each, in turn, exhibits short- and long-range effects in both time and space. With the absence of a gene, entire signaling cascades and genetic profiles may be shifted, resulting in complex feedback mechanisms. Also taking into account local microenvironmental changes throughout development, it is apparent that a systems level approach is an essential resource to accelerate information generation concerning the functional relationships across multiple length scales (molecular data vs. physiological function) and structural development. In this review, we discuss relevant in vivo and in vitro experimental approaches, compare different computational frameworks for systems modeling, and the latest information about systems modeling of cardiac development. Lastly, we conclude with some important future directions for cardiac systems modeling. PMID:23463736
Energy function-based approaches to graph coloring.
Di Blas, A; Jagota, A; Hughey, R
2002-01-01
We describe an approach to optimization based on a multiple-restart quasi-Hopfield network where the only problem-specific knowledge is embedded in the energy function that the algorithm tries to minimize. We apply this method to three different variants of the graph coloring problem: the minimum coloring problem, the spanning subgraph k-coloring problem, and the induced subgraph k-coloring problem. Though Hopfield networks have been applied in the past to the minimum coloring problem, our encoding is more natural and compact than almost all previous ones. In particular, we use k-state neurons while almost all previous approaches use binary neurons. This reduces the number of connections in the network from (Nk)(2) to N(2) asymptotically and also circumvents a problem in earlier approaches, that of multiple colors being assigned to a single vertex. Experimental results show that our approach compares favorably with other algorithms, even nonneural ones specifically developed for the graph coloring problem. PMID:18244411
An ecological approach to language development: an alternative functionalism.
Dent, C H
1990-11-01
I argue for a new functionalist approach to language development, an ecological approach. A realist orientation is used that locates the causes of language development neither in the child nor in the language environment but in the functioning of perceptual systems that detect language-world relationships and use them to guide attention and action. The theory requires no concept of innateness, thus avoiding problems inherent in either the innate ideas or the genes-as-causal-programs explanations of the source of structure in language. An ecological explanation of language is discussed in relation to concepts and language, language as representation, problems in early word learning, metaphor, and syntactic development. Finally, problems incurred in using the idea of innateness are summarized: History prior to the chosen beginning point is ignored, data on organism-environment mutuality are not collected, and the explanation claims no effect of learning, which cannot be tested empirically. PMID:2286298
Sensorimotor integration for functional recovery and the Bobath approach.
Levin, Mindy F; Panturin, Elia
2011-04-01
Bobath therapy is used to treat patients with neurological disorders. Bobath practitioners use hands-on approaches to elicit and reestablish typical movement patterns through therapist-controlled sensorimotor experiences within the context of task accomplishment. One aspect of Bobath practice, the recovery of sensorimotor function, is reviewed within the framework of current motor control theories. We focus on the role of sensory information in movement production, the relationship between posture and movement and concepts related to motor recovery and compensation with respect to this therapeutic approach. We suggest that a major barrier to the evaluation of the therapeutic effectiveness of the Bobath concept is the lack of a unified framework for both experimental identification and treatment of neurological motor deficits. More conclusive analysis of therapeutic effectiveness requires the development of specific outcomes that measure movement quality. PMID:21628730
Data Mining Approaches for Modeling Complex Electronic Circuit Design Activities
Kwon, Yongjin; Omitaomu, Olufemi A; Wang, Gi-Nam
2008-01-01
A printed circuit board (PCB) is an essential part of modern electronic circuits. It is made of a flat panel of insulating materials with patterned copper foils that act as electric pathways for various components such as ICs, diodes, capacitors, resistors, and coils. The size of PCBs has been shrinking over the years, while the number of components mounted on these boards has increased considerably. This trend makes the design and fabrication of PCBs ever more difficult. At the beginning of design cycles, it is important to estimate the time to complete the steps required accurately, based on many factors such as the required parts, approximate board size and shape, and a rough sketch of schematics. Current approach uses multiple linear regression (MLR) technique for time and cost estimations. However, the need for accurate predictive models continues to grow as the technology becomes more advanced. In this paper, we analyze a large volume of historical PCB design data, extract some important variables, and develop predictive models based on the extracted variables using a data mining approach. The data mining approach uses an adaptive support vector regression (ASVR) technique; the benchmark model used is the MLR technique currently being used in the industry. The strengths of SVR for this data include its ability to represent data in high-dimensional space through kernel functions. The computational results show that a data mining approach is a better prediction technique for this data. Our approach reduces computation time and enhances the practical applications of the SVR technique.
Arrigoni, Enrico; Knap, Michael; Linden, Wolfgang von der
2011-07-01
Among the various numerical techniques to study the physics of strongly correlated quantum many-body systems, the self-energy functional approach (SFA) has become increasingly important. In its previous form, however, SFA is not applicable to Bose-Einstein condensation or superfluidity. In this paper, we show how to overcome this shortcoming. To this end, we identify an appropriate quantity, which we term D, that represents the correlation correction of the condensate order parameter, as it does the self-energy for Green's function. An appropriate functional is derived, which is stationary at the exact physical realization of D and of the self-energy. Its derivation is based on a functional-integral representation of the grand potential followed by an appropriate sequence of Legendre transformations. The approach is not perturbative and, therefore, applicable to a wide range of models with local interactions. We show that the variational cluster approach based on the extended self-energy functional is equivalent to the ''pseudoparticle'' approach proposed in Phys. Rev. B 83, 134507 (2011). We present results for the superfluid density in the two-dimensional Bose-Hubbard model, which shows a remarkable agreement with those of quantum-Monte-Carlo calculations.
A multiscale approach for modeling crystalline solids
NASA Astrophysics Data System (ADS)
Cuitiño, Alberto M.; Stainier, Laurent; Wang, Guofeng; Strachan, Alejandro; Çağin, Tahir; Goddard, William A.; Ortiz, Michael
2001-05-01
In this paper we present a modeling approach to bridge the atomistic with macroscopic scales in crystalline materials. The methodology combines identification and modeling of the controlling unit processes at microscopic level with the direct atomistic determination of fundamental material properties. These properties are computed using a many body Force Field derived from ab initio quantum-mechanical calculations. This approach is exercised to describe the mechanical response of high-purity Tantalum single crystals, including the effect of temperature and strain-rate on the hardening rate. The resulting atomistically informed model is found to capture salient features of the behavior of these crystals such as: the dependence of the initial yield point on temperature and strain rate; the presence of a marked stage I of easy glide, specially at low temperatures and high strain rates; the sharp onset of stage II hardening and its tendency to shift towards lower strains, and eventually disappear, as the temperature increases or the strain rate decreases; the parabolic stage II hardening at low strain rates or high temperatures; the stage II softening at high strain rates or low temperatures; the trend towards saturation at high strains; the temperature and strain-rate dependence of the saturation stress; and the orientation dependence of the hardening rate.
Modeling Negotiation by a Paticipatory Approach
NASA Astrophysics Data System (ADS)
Torii, Daisuke; Ishida, Toru; Bousquet, François
In a participatory approach by social scientists, role playing games (RPG) are effectively used to understand real thinking and behavior of stakeholders, but RPG is not sufficient to handle a dynamic process like negotiation. In this study, a participatory simulation where user-controlled avatars and autonomous agents coexist is introduced to the participatory approach for modeling negotiation. To establish a modeling methodology of negotiation, we have tackled the following two issues. First, for enabling domain experts to concentrate interaction design for participatory simulation, we have adopted the architecture in which an interaction layer controls agents and have defined three types of interaction descriptions (interaction protocol, interaction scenario and avatar control scenario) to be described. Second, for enabling domain experts and stakeholders to capitalize on participatory simulation, we have established a four-step process for acquiring negotiation model: 1) surveys and interviews to stakeholders, 2) RPG, 3) interaction design, and 4) participatory simulation. Finally, we discussed our methodology through a case study of agricultural economics in the northeast Thailand.
Nonequilibrium Green's Function approach to time-resolved photoabsorption
NASA Astrophysics Data System (ADS)
Stefanucci, Gianluca; Perfetto, Enrico; Uimonen, Anna-Maija; van Leeuwen, Robert
We propose a nonequilibrium Green's function (NEGF) approach to calculate the time-resolved absorption spectrum of nanoscale systems. We can deal with arbitrary shape, intensity, duration and relative delay of the pump and probe fields and include ionization processes as well as hybridization effects due to surfaces. We present numerical simulations of atomic systems using different approximate self-energies and show that electron correlations are pivotal to reproduce important qualitative features. E.P. and G.S. acknowledge funding by MIUR FIRB Grant No. RBFR12SW0J. R.v.L. thanks the Academy of Finland for support.
The fruits of a functional approach for psychological science.
Stewart, Ian
2016-02-01
The current paper introduces relational frame theory (RFT) as a functional contextual approach to complex human behaviour and examines how this theory has contributed to our understanding of several key phenomena in psychological science. I will first briefly outline the philosophical foundation of RFT and then examine its conceptual basis and core concepts. Thereafter, I provide an overview of the empirical findings and applications that RFT has stimulated in a number of key domains such as language development, linguistic generativity, rule-following, analogical reasoning, intelligence, theory of mind, psychopathology and implicit cognition. PMID:26103949
Defining Genome Maintenance Pathways using Functional Genomic Approaches
Bansbach, Carol E.; Cortez, David
2011-01-01
Genome maintenance activities including DNA repair, cell division cycle control, and checkpoint signaling pathways preserve genome integrity and prevent disease. Defects in these pathways cause birth defects, neurodegeneration, premature aging, and cancer. Recent technical advances in functional genomic approaches such as expression profiling, proteomics, and RNA interference (RNAi) technologies have rapidly expanded our knowledge of the proteins that work in these pathways. In this review, we examine the use of these high-throughput methodologies in higher eukaryotic organisms for the interrogation of genome maintenance activities. PMID:21787120
Gauge-invariant Green function dynamics: A unified approach
NASA Astrophysics Data System (ADS)
Swiecicki, Sylvia D.; Sipe, J. E.
2013-11-01
We present a gauge-invariant description of Green function dynamics introduced by means of a generalized Peirels phase involving an arbitrary differentiable path in space-time. Two other approaches to formulating a gauge-invariant description of systems, the Green function treatment of Levanda and Fleurov [M. Levanda, V. Fleurov, J. Phys.: Condens. Matter 6 (1994) 7889] and the usual multipolar expansion for an atom, are shown to arise as special cases of our formalism. We argue that the consideration of paths in the generalized Peirels phase that do not lead to introduction of an effective gauge-invariant Hamiltonian with polarization and magnetization fields may prove useful for the treatment of the response of materials with short electron correlation lengths.
Novel metal resistance genes from microorganisms: a functional metagenomic approach.
González-Pastor, José E; Mirete, Salvador
2010-01-01
Most of the known metal resistance mechanisms are based on studies of cultured microorganisms, and the abundant uncultured fraction could be an important source of genes responsible for uncharacterized resistance mechanisms. A functional metagenomic approach was selected to recover metal resistance genes from the rhizosphere microbial community of an acid-mine drainage (AMD)-adapted plant, Erica andevalensis, from Rio Tinto, Spain. A total of 13 nickel resistant clones were isolated and analyzed, encoding hypothetical or conserved hypothetical proteins of uncertain functions, or well-characterized proteins, but not previously reported to be related to nickel resistance. The resistance clones were classified into two groups according to their nickel accumulation properties: those preventing or those favoring metal accumulation. Two clones encoding putative ABC transporter components and a serine O-acetyltransferase were found as representatives of each group, respectively. PMID:20830571
Current Approaches on Viral Infection: Proteomics and Functional Validations
Zheng, Jie; Tan, Boon Huan; Sugrue, Richard; Tang, Kai
2012-01-01
Viruses could manipulate cellular machinery to ensure their continuous survival and thus become parasites of living organisms. Delineation of sophisticated host responses upon virus infection is a challenging task. It lies in identifying the repertoire of host factors actively involved in the viral infectious cycle and characterizing host responses qualitatively and quantitatively during viral pathogenesis. Mass spectrometry based proteomics could be used to efficiently study pathogen-host interactions and virus-hijacked cellular signaling pathways. Moreover, direct host and viral responses upon infection could be further investigated by activity-based functional validation studies. These approaches involve drug inhibition of secretory pathway, immunofluorescence staining, dominant negative mutant of protein target, real-time PCR, small interfering siRNA-mediated knockdown, and molecular cloning studies. In this way, functional validation could gain novel insights into the high-content proteomic dataset in an unbiased and comprehensive way. PMID:23162545
Langevin approach to noise modelling of bipolar microwave transistors
NASA Astrophysics Data System (ADS)
Patti, F.; Miceli, V.; Spagnolo, B.
2000-04-01
We present a new approach to study the complete stochastic properties of fluctuations of the output current of microwave transistors. We obtain the π-hybrid model of bipolar microwave transistors with the noise internal sources starting from experimental on-wafer measurements of the scattering and noise parameters. We derive the stochastic differential equations of the Giacoletto model for different loads and source admittances. We give the analytical temporal behavior of the second moment of the output current, assuming particular given correlation functions between the internal noise sources.
On extended thermonuclear functions through pathway model
NASA Astrophysics Data System (ADS)
Kumar, Dilip
when α → 1. The beauty of the result is that these different families of three different functional forms are covered through the pathway parameter α. In a physical set up if f (x) in (3) is the stable or limiting form, the Maxwell-Boltzmann approach to thermonuclear functions, then f (x) in (1) and (2) will contain a large variety of unstable or chaotic situations which will all tend to (3) in the limit. Thus we get a clear idea of all the stable and unstable situations around the Maxwell-Boltzmann approach. Thus the current theory is given a mathematical extension and physical interpretations can be found to situations in (1) and (2). Incidently Tsallis statistics is a special case of (1) for γ = 0, a = 1, δ = 1, η = 1. The Beck-Cohen superstatistics, discussed in current statistical mechanics literature is a special case of (2) for a = 1, η = 1, α > 1. The main purpose of the present paper is to investigate in some more detail, mathematically, the extended thermonuclear functions for Maxwell-Boltzmann statistics and in the cut-off case. The extended thermonuclear functions will be evaluated in closed form for all convenient values of the parameter by means of residue calculus. A comparison of the standard thermonuclear functions with the extended thermonuclear functions is also done. The results and derivations in this paper are new and these will be of interest to physicists, mathematicians, probabilists, and statisticians.
Vertebrate Membrane Proteins: Structure, Function, and Insights from Biophysical Approaches
MÜLLER, DANIEL J.; WU, NAN; PALCZEWSKI, KRZYSZTOF
2008-01-01
Membrane proteins are key targets for pharmacological intervention because they are vital for cellular function. Here, we analyze recent progress made in the understanding of the structure and function of membrane proteins with a focus on rhodopsin and development of atomic force microscopy techniques to study biological membranes. Membrane proteins are compartmentalized to carry out extra- and intracellular processes. Biological membranes are densely populated with membrane proteins that occupy approximately 50% of their volume. In most cases membranes contain lipid rafts, protein patches, or paracrystalline formations that lack the higher-order symmetry that would allow them to be characterized by diffraction methods. Despite many technical difficulties, several crystal structures of membrane proteins that illustrate their internal structural organization have been determined. Moreover, high-resolution atomic force microscopy, near-field scanning optical microscopy, and other lower resolution techniques have been used to investigate these structures. Single-molecule force spectroscopy tracks interactions that stabilize membrane proteins and those that switch their functional state; this spectroscopy can be applied to locate a ligand-binding site. Recent development of this technique also reveals the energy landscape of a membrane protein, defining its folding, reaction pathways, and kinetics. Future development and application of novel approaches during the coming years should provide even greater insights to the understanding of biological membrane organization and function. PMID:18321962
Green's function approach for quantum graphs: An overview
NASA Astrophysics Data System (ADS)
Andrade, Fabiano M.; Schmidt, A. G. M.; Vicentini, E.; Cheng, B. K.; da Luz, M. G. E.
2016-08-01
Here we review the many aspects and distinct phenomena associated to quantum dynamics on general graph structures. For so, we discuss such class of systems under the energy domain Green's function (G) framework. This approach is particularly interesting because G can be written as a sum over classical-like paths, where local quantum effects are taken into account through the scattering matrix elements (basically, transmission and reflection amplitudes) defined on each one of the graph vertices. Hence, the exact G has the functional form of a generalized semiclassical formula, which through different calculation techniques (addressed in detail here) always can be cast into a closed analytic expression. It allows to solve exactly arbitrary large (although finite) graphs in a recursive and fast way. Using the Green's function method, we survey many properties of open and closed quantum graphs as scattering solutions for the former and eigenspectrum and eigenstates for the latter, also considering quasi-bound states. Concrete examples, like cube, binary trees and Sierpiński-like topologies are presented. Along the work, possible distinct applications using the Green's function methods for quantum graphs are outlined.
Forward and reverse transfer function model synthesis
NASA Technical Reports Server (NTRS)
Houghton, J. R.
1985-01-01
A process for synthesizing a mathematical model for a linear mechanical system using the forward and reverse Fourier transform functions is described. The differential equation for a system model is given. The Bode conversion of the differential equation, and the frequency and time-domain optimization matching of the model to the forward and reverse transform functions using the geometric simplex method of Nelder and Mead (1965) are examined. The effect of the window function on the linear mechanical system is analyzed. The model is applied to two examples; in one the signal damps down before the end of the time window and in the second the signal has significant energy at the end of the time window.
Modeling for fairness: A Rawlsian approach.
Diekmann, Sven; Zwart, Sjoerd D
2014-06-01
In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as most other stakeholder approaches, it is also an agreement to their justification and that this justification is consistent with each stakeholders' beliefs. For supporting fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that for reaching an overlapping design consensus communication about properties of and values related to a model is required. PMID:25051870
Takahashi, Kou; Kong, Qiongman; Lin, Yuchen; Stouffer, Nathan; Schulte, Delanie A; Lai, Liching; Liu, Qibing; Chang, Ling-Chu; Dominguez, Sky; Xing, Xuechao; Cuny, Gregory D; Hodgetts, Kevin J; Glicksman, Marcie A; Lin, Chien-Liang Glenn
2015-03-01
Glutamatergic systems play a critical role in cognitive functions and are known to be defective in Alzheimer's disease (AD) patients. Previous literature has indicated that glial glutamate transporter EAAT2 plays an essential role in cognitive functions and that loss of EAAT2 protein is a common phenomenon observed in AD patients and animal models. In the current study, we investigated whether restored EAAT2 protein and function could benefit cognitive functions and pathology in APPSw,Ind mice, an animal model of AD. A transgenic mouse approach via crossing EAAT2 transgenic mice with APPSw,Ind. mice and a pharmacological approach using a novel EAAT2 translational activator, LDN/OSU-0212320, were conducted. Findings from both approaches demonstrated that restored EAAT2 protein function significantly improved cognitive functions, restored synaptic integrity, and reduced amyloid plaques. Importantly, the observed benefits were sustained one month after compound treatment cessation, suggesting that EAAT2 is a potential disease modifier with therapeutic potential for AD. PMID:25711212
Combinatorial Approach to Modeling Quantum Systems
NASA Astrophysics Data System (ADS)
Kornyak, Vladimir V.
2016-02-01
Using the fact that any linear representation of a group can be embedded into permutations, we propose a constructive description of quantum behavior that provides, in particular, a natural explanation of the appearance of complex numbers and unitarity in the formalism of the quantum mechanics. In our approach, the quantum behavior can be explained by the fundamental impossibility to trace the identity of the indistinguishable objects in their evolution. Any observation only provides information about the invariant relations between such objects. The trajectory of a quantum system is a sequence of unitary evolutions interspersed with observations—non-unitary projections. We suggest a scheme to construct combinatorial models of quantum evolution. The principle of selection of the most likely trajectories in such models via the large numbers approximation leads in the continuum limit to the principle of least action with the appropriate Lagrangians and deterministic evolution equations
Nuclear level density: Shell-model approach
NASA Astrophysics Data System (ADS)
Sen'kov, Roman; Zelevinsky, Vladimir
2016-06-01
Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.
Modelling approaches for bio-manufacturing operations.
Chhatre, Sunil
2013-01-01
Fast and cost-effective methods are needed to reduce the time and money needed for drug commercialisation and to determine the risks involved in adopting specific manufacturing strategies. Simulations offer one such approach for exploring design spaces before significant process development is carried out and can be used from the very earliest development stages through to scale-up and optimisation of operating conditions and resource deployment patterns both before and after plant start-up. The advantages this brings in terms of financial savings can be considerable, but to achieve these requires a full appreciation of the complexities of processes and how best to represent them mathematically within the context of in silico software. This chapter provides a summary of some of the work that has been carried out in the areas of mathematical modelling and discrete event simulations for production, recovery and purification operations when designing bio-pharmaceutical processes, looking at both financial and technical modelling. PMID:23183689
A Functional Approach to Deconvolve Dynamic Neuroimaging Data
Jiang, Ci-Ren; Aston, John A. D.; Wang, Jane-Ling
2016-01-01
Positron emission tomography (PET) is an imaging technique which can be used to investigate chemical changes in human biological processes such as cancer development or neurochemical reactions. Most dynamic PET scans are currently analyzed based on the assumption that linear first-order kinetics can be used to adequately describe the system under observation. However, there has recently been strong evidence that this is not the case. To provide an analysis of PET data which is free from this compartmental assumption, we propose a nonparametric deconvolution and analysis model for dynamic PET data based on functional principal component analysis. This yields flexibility in the possible deconvolved functions while still performing well when a linear compartmental model setup is the true data generating mechanism. As the deconvolution needs to be performed on only a relative small number of basis functions rather than voxel by voxel in the entire three-dimensional volume, the methodology is both robust to typical brain imaging noise levels while also being computationally efficient. The new methodology is investigated through simulations in both one-dimensional functions and 2D images and also applied to a neuroimaging study whose goal is the quantification of opioid receptor concentration in the brain. PMID:27226673
Understanding human functioning using graphical models
2010-01-01
Background Functioning and disability are universal human experiences. However, our current understanding of functioning from a comprehensive perspective is limited. The development of the International Classification of Functioning, Disability and Health (ICF) on the one hand and recent developments in graphical modeling on the other hand might be combined and open the door to a more comprehensive understanding of human functioning. The objective of our paper therefore is to explore how graphical models can be used in the study of ICF data for a range of applications. Methods We show the applicability of graphical models on ICF data for different tasks: Visualization of the dependence structure of the data set, dimension reduction and comparison of subpopulations. Moreover, we further developed and applied recent findings in causal inference using graphical models to estimate bounds on intervention effects in an observational study with many variables and without knowing the underlying causal structure. Results In each field, graphical models could be applied giving results of high face-validity. In particular, graphical models could be used for visualization of functioning in patients with spinal cord injury. The resulting graph consisted of several connected components which can be used for dimension reduction. Moreover, we found that the differences in the dependence structures between subpopulations were relevant and could be systematically analyzed using graphical models. Finally, when estimating bounds on causal effects of ICF categories on general health perceptions among patients with chronic health conditions, we found that the five ICF categories that showed the strongest effect were plausible. Conclusions Graphical Models are a flexible tool and lend themselves for a wide range of applications. In particular, studies involving ICF data seem to be suited for analysis using graphical models. PMID:20149230
Fast Geometric Consensus Approach for Protein Model Quality Assessment
Adamczak, Rafal; Pillardy, Jaroslaw; Vallat, Brinda K.
2011-01-01
Abstract Model quality assessment (MQA) is an integral part of protein structure prediction methods that typically generate multiple candidate models. The challenge lies in ranking and selecting the best models using a variety of physical, knowledge-based, and geometric consensus (GC)-based scoring functions. In particular, 3D-Jury and related GC methods assume that well-predicted (sub-)structures are more likely to occur frequently in a population of candidate models, compared to incorrectly folded fragments. While this approach is very successful in the context of diversified sets of models, identifying similar substructures is computationally expensive since all pairs of models need to be superimposed using MaxSub or related heuristics for structure-to-structure alignment. Here, we consider a fast alternative, in which structural similarity is assessed using 1D profiles, e.g., consisting of relative solvent accessibilities and secondary structures of equivalent amino acid residues in the respective models. We show that the new approach, dubbed 1D-Jury, allows to implicitly compare and rank N models in O(N) time, as opposed to quadratic complexity of 3D-Jury and related clustering-based methods. In addition, 1D-Jury avoids computationally expensive 3D superposition of pairs of models. At the same time, structural similarity scores based on 1D profiles are shown to correlate strongly with those obtained using MaxSub. In terms of the ability to select the best models as top candidates 1D-Jury performs on par with other GC methods. Other potential applications of the new approach, including fast clustering of large numbers of intermediate structures generated by folding simulations, are discussed as well. PMID:21244273
Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach.
Li, Jun; Zhao, Patrick X
2016-01-01
Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/. PMID:27446133
Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach
Li, Jun; Zhao, Patrick X.
2016-01-01
Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/. PMID:27446133
Modeling autism: a systems biology approach
2012-01-01
Autism is the fastest growing developmental disorder in the world today. The prevalence of autism in the US has risen from 1 in 2500 in 1970 to 1 in 88 children today. People with autism present with repetitive movements and with social and communication impairments. These impairments can range from mild to profound. The estimated total lifetime societal cost of caring for one individual with autism is $3.2 million US dollars. With the rapid growth in this disorder and the great expense of caring for those with autism, it is imperative for both individuals and society that techniques be developed to model and understand autism. There is increasing evidence that those individuals diagnosed with autism present with highly diverse set of abnormalities affecting multiple systems of the body. To this date, little to no work has been done using a whole body systems biology approach to model the characteristics of this disorder. Identification and modelling of these systems might lead to new and improved treatment protocols, better diagnosis and treatment of the affected systems, which might lead to improved quality of life by themselves, and, in addition, might also help the core symptoms of autism due to the potential interconnections between the brain and nervous system with all these other systems being modeled. This paper first reviews research which shows that autism impacts many systems in the body, including the metabolic, mitochondrial, immunological, gastrointestinal and the neurological. These systems interact in complex and highly interdependent ways. Many of these disturbances have effects in most of the systems of the body. In particular, clinical evidence exists for increased oxidative stress, inflammation, and immune and mitochondrial dysfunction which can affect almost every cell in the body. Three promising research areas are discussed, hierarchical, subgroup analysis and modeling over time. This paper reviews some of the systems disturbed in autism and
Green's function approach to edge states in transition metal dichalcogenides
NASA Astrophysics Data System (ADS)
Farmanbar, Mojtaba; Amlaki, Taher; Brocks, Geert
2016-05-01
The semiconducting two-dimensional transition metal dichalcogenides MX 2 show an abundance of one-dimensional metallic edges and grain boundaries. Standard techniques for calculating edge states typically model nanoribbons, and require the use of supercells. In this paper, we formulate a Green's function technique for calculating edge states of (semi-)infinite two-dimensional systems with a single well-defined edge or grain boundary. We express Green's functions in terms of Bloch matrices, constructed from the solutions of a quadratic eigenvalue equation. The technique can be applied to any localized basis representation of the Hamiltonian. Here, we use it to calculate edge states of MX 2 monolayers by means of tight-binding models. Aside from the basic zigzag and armchair edges, we study edges with a more general orientation, structurally modifed edges, and grain boundaries. A simple three-band model captures an important part of the edge electronic structures. An 11-band model comprising all valence orbitals of the M and X atoms is required to obtain all edge states with energies in the MX 2 band gap. Here, states of odd symmetry with respect to a mirror plane through the layer of M atoms have a dangling-bond character, and tend to pin the Fermi level.
Functional Analysis of Jasmonates in Rice through Mutant Approaches
Dhakarey, Rohit; Kodackattumannil Peethambaran, Preshobha; Riemann, Michael
2016-01-01
Jasmonic acid, one of the major plant hormones, is, unlike other hormones, a lipid-derived compound that is synthesized from the fatty acid linolenic acid. It has been studied intensively in many plant species including Arabidopsis thaliana, in which most of the enzymes participating in its biosynthesis were characterized. In the past 15 years, mutants and transgenic plants affected in the jasmonate pathway became available in rice and facilitate studies on the functions of this hormone in an important crop. Those functions are partially conserved compared to other plant species, and include roles in fertility, response to mechanical wounding and defense against herbivores. However, new and surprising functions have also been uncovered by mutant approaches, such as a close link between light perception and the jasmonate pathway. This was not only useful to show a phenomenon that is unique to rice but also helped to establish this role in plant species where such links are less obvious. This review aims to provide an overview of currently available rice mutants and transgenic plants in the jasmonate pathway and highlights some selected roles of jasmonate in this species, such as photomorphogenesis, and abiotic and biotic stress. PMID:27135235
A Wigner Monte Carlo approach to density functional theory
NASA Astrophysics Data System (ADS)
Sellier, J. M.; Dimov, I.
2014-08-01
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn-Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems, a Lithium atom, a Boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (despite no restriction is imposed in the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn-Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.
Functional Analysis of Jasmonates in Rice through Mutant Approaches.
Dhakarey, Rohit; Kodackattumannil Peethambaran, Preshobha; Riemann, Michael
2016-01-01
Jasmonic acid, one of the major plant hormones, is, unlike other hormones, a lipid-derived compound that is synthesized from the fatty acid linolenic acid. It has been studied intensively in many plant species including Arabidopsis thaliana, in which most of the enzymes participating in its biosynthesis were characterized. In the past 15 years, mutants and transgenic plants affected in the jasmonate pathway became available in rice and facilitate studies on the functions of this hormone in an important crop. Those functions are partially conserved compared to other plant species, and include roles in fertility, response to mechanical wounding and defense against herbivores. However, new and surprising functions have also been uncovered by mutant approaches, such as a close link between light perception and the jasmonate pathway. This was not only useful to show a phenomenon that is unique to rice but also helped to establish this role in plant species where such links are less obvious. This review aims to provide an overview of currently available rice mutants and transgenic plants in the jasmonate pathway and highlights some selected roles of jasmonate in this species, such as photomorphogenesis, and abiotic and biotic stress. PMID:27135235
A Reparametrization Approach for Dynamic Space-Time Models
Lee, Hyeyoung; Ghosh, Sujit K.
2009-01-01
Researchers in diverse areas such as environmental and health sciences are increasingly working with data collected across space and time. The space-time processes that are generally used in practice are often complicated in the sense that the auto-dependence structure across space and time is non-trivial, often non-separable and non-stationary in space and time. Moreover, the dimension of such data sets across both space and time can be very large leading to computational difficulties due to numerical instabilities. Hence, space-time modeling is a challenging task and in particular parameter estimation based on complex models can be problematic due to the curse of dimensionality. We propose a novel reparametrization approach to fit dynamic space-time models which allows the use of a very general form for the spatial covariance function. Our modeling contribution is to present an unconstrained reparametrization method for a covariance function within dynamic space-time models. A major benefit of the proposed unconstrained reparametrization method is that we are able to implement the modeling of a very high dimensional covariance matrix that automatically maintains the positive definiteness constraint. We demonstrate the applicability of our proposed reparametrized dynamic space-time models for a large data set of total nitrate concentrations. PMID:21593998
Stochastic model updating utilizing Bayesian approach and Gaussian process model
NASA Astrophysics Data System (ADS)
Wan, Hua-Ping; Ren, Wei-Xin
2016-03-01
Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from responses variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problem solved with optimization usually brings about the issues of gradient computation, ill-conditionedness, and non-uniqueness. Moreover, the uncertainty present in response makes the inverse problem more complicated. In this study, Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of Bayesian approach for IUQ problem is that it solves IUQ problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), Bayesian approach is still computationally expensive since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee the convergence. Herein we reduce computational cost in two aspects. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, the advanced MCMC method using delayed rejection adaptive Metropolis (DRAM) algorithm that incorporates local adaptive strategy with global adaptive strategy is employed for Bayesian inference. In addition, we propose the use of the powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
Permafrost, climate, and change: predictive modelling approach.
NASA Astrophysics Data System (ADS)
Anisimov, O.
2003-04-01
Predicted by GCMs enhanced warming of the Arctic will lead to discernible impacts on permafrost and northern environment. Mathematical models of different complexity forced by scenarios of climate change may be used to predict such changes. Permafrost models that are currently in use may be divided into four groups: index-based models (e.g. frost index model, N-factor model); models of intermediate complexity based on equilibrium simplified solution of the Stephan problem ("Koudriavtcev's" model and its modifications), and full-scale comprehensive dynamical models. New approach of stochastic modelling came into existence recently and has good prospects for the future. Important task is to compare the ability of the models that are different in complexity, concept, and input data requirements to capture the major impacts of changing climate on permafrost. A progressive increase in the depth of seasonal thawing (often referred to as the active-layer thickness, ALT) could be a relatively short-term reaction to climatic warming. At regional and local scales, it may produce substantial effects on vegetation, soil hydrology and runoff, as the water storage capacity of near-surface permafrost will be changed. Growing public concerns are associated with the impacts that warming of permafrost may have on engineered infrastructure built upon it. At the global scale, increase of ALT could facilitate further climatic change if more greenhouse gases are released when the upper layer of the permafrost thaws. Since dynamic permafrost models require complete set of forcing data that is not readily available on the circumpolar scale, they could be used most effectively in regional studies, while models of intermediate complexity are currently best tools for the circumpolar assessments. Set of five transient scenarios of climate change for the period 1980 - 2100 has been constructed using outputs from GFDL, NCAR, CCC, HadCM, and ECHAM-4 models. These GCMs were selected in the course
A comprehensive approach to age-dependent dosimetric modeling
Leggett, R.W.; Cristy, M.; Eckerman, K.F.
1986-01-01
In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates or risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.
Toward a Multiscale Approach for Geodynamo Models
NASA Astrophysics Data System (ADS)
Marcotte, F.; Dormy, E.
2014-12-01
The generation of the Earth's magnetic field by dynamo action in the liquid iron core is modeled by a large set of coupled, non-linear partial differential equations. Numerical models presently involve direct discretization of the geodynamo equations and allow to produce axial dipolar magnetic fields that are qualitatively comparable to the Earth's one, but whose dynamics remain considerably remote from the geophysical regime. Indeed, due to the extreme values of the dimensionless numbers characterizing the Earth's core dynamics, the relevant regime remains far beyond the reach of direct numerical simulation - so far that one cannot simply rely on the increase in computational power. Simplification of the governing equations is not straightforward. In particular, the importance of return flow from the thin Ekman layers located at the inner core and core-mantle boundaries into the main flow prevents one from purely suppressing the viscous dissipation term in Navier-Stokes equation even in the limiting case where inertia is neglected. Therefore more advanced models are needed, which require prior mathematical treatment of the equations of magnetohydrodynamics. The one-dimensional structure of most viscous and magnetic layers demonstrates the possibility of huge computational savings by means of multiscale techniques. In our approach, asymptotic matching is applied on simplified problems such as the Proudman-Stewartson flow to solve for the viscous shear layers while keeping the mainstream resolved on a coarse grid.
Hubbard operator density functional theory for Fermionic lattice models
NASA Astrophysics Data System (ADS)
Cheng, Zhengqian; Marianetti, Chris
We formulate an effective action as a functional of Hubbard operator densities whose stationary point delivers all local static information of the interacting lattice model. Using the variational principle, we get a self-consistent equation for Hubbard operator densities. The computational cost of our approach is set by diagonalizing the local Fock space. We apply our method to the one and two band Hubbard model (including crystal field and on-site exchange) in infinite dimensions where the exact solution is known. Excellent agreement is obtained for the one-band model. In the two-band model, good agreement is obtained in the metallic region of the phase diagram in addition to the metal-insulator transition. While our approach does not address frequency dependent observables, it has a negligible computational cost as compared to dynamical mean field theory and could be highly applicable in the context total energies of strongly correlated materials and molecules.
Chromatin organization in pluripotent cells: emerging approaches to study and disrupt function
Lopes Novo, Clara
2016-01-01
Translating the vast amounts of genomic and epigenomic information accumulated on the linear genome into three-dimensional models of nuclear organization is a current major challenge. In response to this challenge, recent technological innovations based on chromosome conformation capture methods in combination with increasingly powerful functional approaches have revealed exciting insights into key aspects of genome regulation. These findings have led to an emerging model where the genome is folded and compartmentalized into highly conserved topological domains that are further divided into functional subdomains containing physical loops that bring cis-regulatory elements to close proximity. Targeted functional experiments, largely based on designable DNA-binding proteins, have begun to define the major architectural proteins required to establish and maintain appropriate genome regulation. Here, we focus on the accessible and well-characterized system of pluripotent cells to review the functional role of chromatin organization in regulating pluripotency, differentiation and reprogramming. PMID:26206085
A new tropospheric mapping function based on ECMWF models
NASA Astrophysics Data System (ADS)
Biancale, Richard; Dupuy, Stephanie; Soudarin, Laurent
The tropospheric propagation delay of Earth-satellite tracking data (from electromagnetic or optical signals) is generally corrected in two steps: 1) computing the zenithal dry and wet delays at the station, 2) applying a mapping function to pull them down at the elevation needed. Considering that zenithal delays can be well computed from ground pressure, temperature and humidity data through hydrostatic theory, or can be integrated from ECMWF multiple layer models (for instance), or at least can be adjusted in orbit processing, we turned our attention more specifically to the validity of the mapping function. Starting on one hand from a few maps of the ECMWF meteorological model of pressure, temperature and humidity available each 6h in 91 isobaric layers we reconstructed first the dry and wet tropospheric delays at each grid point for several azimuth and elevation angles. On the other hand we computed the same delays from a Marini-type mapping function based on the integrated zenithal delays computed themselves from the same ECMWF models. An adequacy was searched between both approaches which led us to adjust all coefficients of the dry and wet mapping functions. We propose here to describe our approach and to present the dry and wet mapping functions obtained with some tests with real data.
Semiparametric Stochastic Modeling of the Rate Function in Longitudinal Studies
Zhu, Bin; Taylor, Jeremy M.G.; Song, Peter X.-K.
2011-01-01
In longitudinal biomedical studies, there is often interest in the rate functions, which describe the functional rates of change of biomarker profiles. This paper proposes a semiparametric approach to model these functions as the realizations of stochastic processes defined by stochastic differential equations. These processes are dependent on the covariates of interest and vary around a specified parametric function. An efficient Markov chain Monte Carlo algorithm is developed for inference. The proposed method is compared with several existing methods in terms of goodness-of-fit and more importantly the ability to forecast future functional data in a simulation study. The proposed methodology is applied to prostate-specific antigen profiles for illustration. Supplementary materials for this paper are available online. PMID:22423170
Electron Systems Out of Equilibrium: Nonequilibrium Green's Function Approach
NASA Astrophysics Data System (ADS)
Špička, Václav Velický, Bedřich Kalvová, Anděla
2015-10-01
This review deals with the state of the art and perspectives of description of non-equilibrium many body systems using the non-equilibrium Green's function (NGF) method. The basic aim is to describe time evolution of the many-body system from its initial state over its transient dynamics to its long time asymptotic evolution. First, we discuss basic aims of transport theories to motivate the introduction of the NGF techniques. Second, this article summarizes the present view on construction of the electron transport equations formulated within the NGF approach to non-equilibrium. We discuss incorporation of complex initial conditions to the NGF formalism, and the NGF reconstruction theorem, which serves as a tool to derive simplified kinetic equations. Three stages of evolution of the non-equilibrium, the first described by the full NGF description, the second by a Non-Markovian Generalized Master Equation and the third by a Markovian Master Equation will be related to each other.
Promoting return of function in multiple sclerosis: An integrated approach
Gacias, Mar; Casaccia, Patrizia
2013-01-01
Multiple sclerosis is a disease characterized by inflammatory demyelination, axonal degeneration and progressive brain atrophy. Most of the currently available disease modifying agents proved to be very effective in managing the relapse rate, however progressive neuronal damage continues to occur and leads to progressive accumulation of irreversible disability. For this reason, any therapeutic strategy aimed at restoration of function must take into account not only immunomodulation, but also axonal protection and new myelin formation. We further highlight the importance of an holistic approach, which considers the variability of therapeutic responsiveness as the result of the interplay between genetic differences and the epigenome, which is in turn affected by gender, age and differences in life style including diet, exercise, smoking and social interaction. PMID:24363985
Promoting return of function in multiple sclerosis: An integrated approach.
Gacias, Mar; Casaccia, Patrizia
2013-10-01
Multiple sclerosis is a disease characterized by inflammatory demyelination, axonal degeneration and progressive brain atrophy. Most of the currently available disease modifying agents proved to be very effective in managing the relapse rate, however progressive neuronal damage continues to occur and leads to progressive accumulation of irreversible disability. For this reason, any therapeutic strategy aimed at restoration of function must take into account not only immunomodulation, but also axonal protection and new myelin formation. We further highlight the importance of an holistic approach, which considers the variability of therapeutic responsiveness as the result of the interplay between genetic differences and the epigenome, which is in turn affected by gender, age and differences in life style including diet, exercise, smoking and social interaction. PMID:24363985
Electron systems out of equilibrium: Nonequilibrium Green's function approach
NASA Astrophysics Data System (ADS)
Špička, Václav; Velický, Bedřich; Kalvová, Anděla
2014-07-01
This review deals with the state of the art and perspectives of description of nonequilibrium many-body systems using the nonequilibrium Green's function (NGF) method. The basic aim is to describe time evolution of the many-body system from its initial state over its transient dynamics to its long time asymptotic evolution. First, we discuss basic aims of transport theories to motivate the introduction of the NGF techniques. Second, this article summarizes the present view on construction of the electron transport equations formulated within the NGF approach to nonequilibrium. We discuss incorporation of complex initial conditions to the NGF formalism, and the NGF reconstruction theorem, which serves as a tool to derive simplified kinetic equations. Three stages of evolution of the nonequilibrium, the first described by the full NGF description, the second by a non-Markovian generalized master equation and the third by a Markovian master equation will be related to each other.
Approach to combined-function magnets via symplectic slicing
NASA Astrophysics Data System (ADS)
Titze, M.
2016-05-01
In this article we describe how to obtain symplectic "slice" maps for combined-function magnets, by using a method of generating functions. A feature of this method is that one can use an unexpanded and unsplit Hamiltonian. From such a slice map we obtain a first-order map which is symplectic at the closed orbit. We also obtain a symplectic kick map. Both results were implemented into the widely used program MAD-X to regain, in particular, the twiss parameters for the sliced model of the Proton Synchrotron at CERN. In addition, we obtain recursion equations for symplectic maps of general time-dependent Hamiltonians, which might be useful even beyond the scope of accelerator physics.
Crossover from BCS to Bose superconductivity: A functional integral approach
Randeria, M.; Sa de Melo, C.A.R.; Engelbrecht, J.R.
1993-04-01
We use a functional integral formulation to study the crossover from cooperative Cooper pairing to the formation and condensation of tightly bound pairs in a 3D continuum model of fermions with attractive interactions. The inadequacy of a saddle point approximation with increasing coupling is pointed out, and the importance of temporal (quantum) fluctuations for normal state properties at intermediate and strong coupling is emphasized. In addition to recovering the Nozieres-Schmitt-Pink interpolation scheme for T{sub c}, and the Leggett variational results for T = 0, we also present results for evolution of the time-dependent Ginzburg-Landau equation and collective mode spectrum as a function of the coupling.
Optimization approaches to nonlinear model predictive control
Biegler, L.T. . Dept. of Chemical Engineering); Rawlings, J.B. . Dept. of Chemical Engineering)
1991-01-01
With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
An integrated approach to reservoir modeling
Donaldson, K. )
1993-08-01
The purpose of this research is to evaluate the usefulness of the following procedural and analytical methods in investigating the heterogeneity of the oil reserve for the Mississipian Big Injun Sandstone of the Granny Creek field, Clay and Roane counties, West Virginia: (1) relational database, (2) two-dimensional cross sections, (3) true three-dimensional modeling, (4) geohistory analysis, (5) a rule-based expert system, and (6) geographical information systems. The large data set could not be effectively integrated and interpreted without this approach. A relational database was designed to fully integrate three- and four-dimensional data. The database provides an effective means for maintaining and manipulating the data. A two-dimensional cross section program was designed to correlate stratigraphy, depositional environments, porosity, permeability, and petrographic data. This flexible design allows for additional four-dimensional data. Dynamic Graphics[sup [trademark
Measuring Psychometric Functions with the Diffusion Model
Ratcliff, Roger
2014-01-01
The diffusion decision model (Ratcliff, 1978) was used to examine discrimination for a range of perceptual tasks: numerosity discrimination, number discrimination, brightness discrimination, motion discrimination, speed discrimination, and length discrimination. The model produces a measure of the quality of the information that drives decision processes, a measure termed “drift rate” in the model. As drift rate varies across experimental conditions that differ in difficulty, a psychometric function that plots drift rate against difficulty can be constructed. Psychometric functions for the tasks in this article usually plot accuracy against difficulty, but for some levels of difficulty, accuracy can be at ceiling. The diffusion model extends the range of difficulty that can be evaluated because drift rates depend on response times (RTs) as well as accuracy and when RTs decrease across conditions that are all at ceiling in accuracy, then drift rates will distinguish among the conditions. Signal detection theory assumes that the variable driving performance is the z-transform of the accuracy value and somewhat surprisingly, this closely matches drift rate extracted from the diffusion model when accuracy is not at ceiling, but not sometimes when accuracy is high. Even though the functions are similar in the middle of the range, the interpretations of the variability in the models (e.g., perceptual variability, decision process variability) are incompatible. PMID:24446719
A Mixed Approach for Modeling Blood Flow in Brain Microcirculation
NASA Astrophysics Data System (ADS)
Lorthois, Sylvie; Peyrounette, Myriam; Davit, Yohan; Quintard, Michel; Groupe d'Etude sur les Milieux Poreux Team
2015-11-01
Consistent with its distribution and exchange functions, the vascular system of the human brain cortex is a superposition of two components. At small-scale, a homogeneous and space-filling mesh-like capillary network. At large scale, quasi-fractal branched veins and arteries. From a modeling perspective, this is the superposition of: (a) a continuum model resulting from the homogenization of slow transport in the small-scale capillary network; and (b) a discrete network approach describing fast transport in the arteries and veins, which cannot be homogenized because of their fractal nature. This problematic is analogous to fast conducting wells embedded in a reservoir rock in petroleum engineering. An efficient method to reduce the computational cost is to use relatively large grid blocks for the continuum model. This makes it difficult to accurately couple both components. We solve this issue by adapting the ``well model'' concept used in petroleum engineering to brain specific 3D situations. We obtain a unique linear system describing the discrete network, the continuum and the well model. Results are presented for realistic arterial and venous geometries. The mixed approach is compared with full network models including various idealized capillary networks of known permeability. ERC BrainMicroFlow GA615102.
Atom and Bond Fukui Functions and Matrices: A Hirshfeld-I Atoms-in-Molecule Approach.
Oña, Ofelia B; De Clercq, Olivier; Alcoba, Diego R; Torre, Alicia; Lain, Luis; Van Neck, Dimitri; Bultinck, Patrick
2016-09-19
The Fukui function is often used in its atom-condensed form by isolating it from the molecular Fukui function using a chosen weight function for the atom in the molecule. Recently, Fukui functions and matrices for both atoms and bonds separately were introduced for semiempirical and ab initio levels of theory using Hückel and Mulliken atoms-in-molecule models. In this work, a double partitioning method of the Fukui matrix is proposed within the Hirshfeld-I atoms-in-molecule framework. Diagonalizing the resulting atomic and bond matrices gives eigenvalues and eigenvectors (Fukui orbitals) describing the reactivity of atoms and bonds. The Fukui function is the diagonal element of the Fukui matrix and may be resolved in atom and bond contributions. The extra information contained in the atom and bond resolution of the Fukui matrices and functions is highlighted. The effect of the choice of weight function arising from the Hirshfeld-I approach to obtain atom- and bond-condensed Fukui functions is studied. A comparison of the results with those generated by using the Mulliken atoms-in-molecule approach shows low correlation between the two partitioning schemes. PMID:27381271
A Green's function approach to PIV Pressure estimates
NASA Astrophysics Data System (ADS)
Goushcha, Oleg; Ganatos, Peter; Elvin, Niell; Andreopoulos, Yiannis
2014-11-01
Spatial resolution of PIV data limits the ability to calculate the pressure along a solid boundary of a body immersed in a fluid and hence to accurately estimate the force exerted. Current methodologies solve numerically Navier-Stokes equations to calculate the pressure field from velocity data. An analytical approach has the potential of more accurate estimation of pressure in comparison to existing methods. A methodology has been developed to calculate the pressure distribution on the body in the flow by analytically solving the pressure Poisson Equation using a Green's function approach. The pressure is then extrapolated to the solid boundary resulting in an accurate pressure distribution and total net force on the boundary. This technique has been applied to the case of a flexible cantilever beam vibrating after interacting with a traveling vortex in an experimental setup to harvest energy from an air-flow. Time-resolved PIV has been used to acquire a two-dimensional velocity field which has been used to obtain a time-dependent pressure distribution acting on the surface of the beam and resultant forces. The analytical solution is compared to the force measured directly by a force sensor placed at the base of the beam as well as the power harvested. Sponsored by NSF Grant: CBET #1033117.
Functional epigenetic approach identifies frequently methylated genes in Ewing sarcoma.
Alholle, Abdullah; Brini, Anna T; Gharanei, Seley; Vaiyapuri, Sumathi; Arrigoni, Elena; Dallol, Ashraf; Gentle, Dean; Kishida, Takeshi; Hiruma, Toru; Avigad, Smadar; Grimer, Robert; Maher, Eamonn R; Latif, Farida
2013-11-01
Using a candidate gene approach we recently identified frequent methylation of the RASSF2 gene associated with poor overall survival in Ewing sarcoma (ES). To identify effective biomarkers in ES on a genome-wide scale, we used a functionally proven epigenetic approach, in which gene expression was induced in ES cell lines by treatment with a demethylating agent followed by hybridization onto high density gene expression microarrays. After following a strict selection criterion, 34 genes were selected for expression and methylation analysis in ES cell lines and primary ES. Eight genes (CTHRC1, DNAJA4, ECHDC2, NEFH, NPTX2, PHF11, RARRES2, TSGA14) showed methylation frequencies of>20% in ES tumors (range 24-71%), these genes were expressed in human bone marrow derived mesenchymal stem cells (hBMSC) and hypermethylation was associated with transcriptional silencing. Methylation of NPTX2 or PHF11 was associated with poorer prognosis in ES. In addition, six of the above genes also showed methylation frequency of>20% (range 36-50%) in osteosarcomas. Identification of these genes may provide insights into bone cancer tumorigenesis and development of epigenetic biomarkers for prognosis and detection of these rare tumor types. PMID:24005033
Kinetic Density Functional Theory: A Microscopic Approach to Fluid Mechanics
NASA Astrophysics Data System (ADS)
Umberto Marini Bettolo, Marconi; Simone, Melchionna
2014-10-01
In the present paper we give a brief summary of some recent theoretical advances in the treatment of inhomogeneous fluids and methods which have applications in the study of dynamical properties of liquids in situations of extreme confinement, such as nanopores, nanodevices, etc. The approach obtained by combining kinetic and density functional methods is microscopic, fully self-consistent and allows to determine both configurational and flow properties of dense fluids. The theory predicts the correct hydrodynamic behavior and provides a practical and numerical tool to determine how the transport properties are modified when the length scales of the confining channels are comparable with the size of the molecules. The applications range from the dynamics of simple fluids under confinement, to that of neutral binary mixtures and electrolytes where the theory in the limit of slow gradients reproduces the known phenomenological equations such as the Planck—Nernst—Poisson and the Smolochowski equations. The approach here illustrated allows for fast numerical solution of the evolution equations for the one-particle phase-space distributions by means of the weighted density lattice Boltzmann method and is particularly useful when one considers flows in complex geometries.
Predicting activity approach based on new atoms similarity kernel function.
Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella
2015-07-01
Drug design is a high cost and long term process. To reduce time and costs for drugs discoveries, new techniques are needed. Chemoinformatics field implements the informational techniques and computer science like machine learning and graph theory to discover the chemical compounds properties, such as toxicity or biological activity. This is done through analyzing their molecular structure (molecular graph). To overcome this problem there is an increasing need for algorithms to analyze and classify graph data to predict the activity of molecules. Kernels methods provide a powerful framework which combines machine learning with graph theory techniques. These kernels methods have led to impressive performance results in many several chemoinformatics problems like biological activity prediction. This paper presents a new approach based on kernel functions to solve activity prediction problem for chemical compounds. First we encode all atoms depending on their neighbors then we use these codes to find a relationship between those atoms each other. Then we use relation between different atoms to find similarity between chemical compounds. The proposed approach was compared with many other classification methods and the results show competitive accuracy with these methods. PMID:26117822
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms on fine tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time consuming aspect of the controller design. One single change in the membership functions could significantly alter the performance of the controller. This membership function definition can be accomplished by using a trial and error technique to alter the membership functions creating a highly tuned controller. This approach can be time consuming and requires a great deal of knowledge from human experts. In order to shorten development time, an iterative procedure for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity vector approach and station-keep maneuvers was developed. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
NASA Astrophysics Data System (ADS)
Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.
2016-04-01
In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in absence of an external longitudinal magnetic field. We employ the two-time Green function method by avoiding the Anderson-Callen decoupling of spin operators at the same sites which is of doubtful accuracy. Following the original Devlin procedure we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2) -vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critic anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated providing a global phase diagram and a quantum criticality very similar to the conventional ones.
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem of the calibration of hydrological models is the equifinality of different parameter sets derived from the calibration of models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contains additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim to identify structural model deficiencies, assess the internal process representation and to tackle equifinality. We developed a model dependent (MDA) approach calibrating the model runoff components against the FSD components, and a model independent (MIA) approach comparing the FSD of the model results and the FSD of calibration data. The results indicate, that the decomposition provides valuable information for the calibration. Particularly MDA highlights and discards a number of standard GLUE behavioural models underestimating the contribution of soil water to river discharge. Both, MDA and MIA yield to a reduction of the parameter ranges by a factor up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest by applying MDA and shows only minor reductions for MIA. Besides
12 CFR 217.153 - Internal models approach (IMA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... methodology, the Board-regulated institution must demonstrate that the model produces a conservative estimate... 12 Banks and Banking 2 2014-01-01 2014-01-01 false Internal models approach (IMA). 217.153 Section... Measurement Approaches Risk-Weighted Assets for Equity Exposures § 217.153 Internal models approach (IMA)....
Transversity distribution functions in the valon model
NASA Astrophysics Data System (ADS)
Alizadeh Yazdi, Z.; Taghavi-Shahri, F.; Arash, F.; Zomorrodian, M. E.
2014-05-01
We use the valon model to calculate the transversity distribution functions inside the nucleon. Transversity distributions indicate the probability to find partons with spin aligned (antialigned) to the transversely polarized nucleon. The results are in good agreement with all available experimental data and also global fits.
Functional genes of non-model arthropods
Technology Transfer Automated Retrieval System (TEKTRAN)
Technology and bioinformatics facilitate studies of non-model organisms, including pest insects and insect biological control agents. A cDNA library prepared from a laboratory-reared colony of Lygus lineolaris male nymphs identified sequences that appeared to have known functions or close homologues...
Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina
2016-01-01
In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model and comparison between models performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second one, on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies water for consumption for 450,000 population. Numerical results show that phytoplankton morpho-functional groups more closely represent each species growth requirements within the group. Each model performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for seasonal dynamics of the phytoplankton community and main biogeochemical variables for a one-year time horizon are presented and compared for both models, showing the functional group model enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon. PMID:26406877
New functional electrical stimulation approaches to standing and walking.
Mushahwar, Vivian K; Jacobs, Patrick L; Normann, Richard A; Triolo, Ronald J; Kleitman, Naomi
2007-09-01
Spinal cord injury (SCI) is a devastating neurological trauma that is prevalent predominantly in young individuals. Several interventions in the areas of neuroregeneration, pharmacology and rehabilitation engineering/neuroscience are currently under investigation for restoring function after SCI. In this paper, we focus on the use of neuroprosthetic devices for restoring standing and ambulation as well as improving general health and wellness after SCI. Four neuroprosthetic approaches are discussed along with their demonstrated advantages and their future needs for improved clinical applicability. We first introduce surface functional electrical stimulation (FES) devices for restoring ambulation and highlight the importance of these devices for facilitating exercise activities and systemic physiological activation. Implanted muscle-based FES devices for restoring standing and walking that are currently undergoing clinical trials are then presented. The use of implanted peripheral nerve intraneural arrays of multi-site microelectrodes for providing fine and graded control of force during sit-to-stand maneuvers is subsequently demonstrated. Finally, intraspinal microstimulation (ISMS) of the lumbosacral spinal cord for restoring standing and walking is introduced and its results to date are presented. We conclude with a general discussion of the common needs of the neuroprosthetic devices presented in this paper and the improvements that may be incorporated in the future to advance their clinical utility and user satisfaction. PMID:17873417
VFILM: a value function driven approach to information lifecycle management
NASA Astrophysics Data System (ADS)
Cleveland, Jeffrey; Loyall, Joseph P.; Webb, Jonathan; Hanna, James; Clark, Shane
2011-06-01
Information Management (IM) services need lifecycle management, i.e., determining how long persistent information is retained locally and when it is moved to accommodate new information. This is important when bridging IM services from enterprise to tactical environments, which can have limited onboard storage and be in highly dynamic situations with varying information needs. In this paper, we describe an approach to Value Function based Information Lifecycle Management (VFILM) that balances the value of existing information to current and future missions with constraints on available storage. VFILM operates in parallel with IM services in dynamic situations where missions and their information needs, the types of information being managed, and the criticality of information to current missions and operations are changing. In contrast to current solutions that simply move the oldest or least frequently accessed information when space is needed, VFILM manages information lifecycle based on a combination of inputs including attributes of the information (its age, size, type, and other observable attributes), ongoing operations and missions, and the relationships between different pieces of information. VFILM has three primary innovative features: (1) a fuzzy logic function that calculates a ordering of information value based on multiple relative valued attributes; (2) mission/task awareness that considers current and upcoming missions in information valuation and storage requirements; and (3) information grouping that treats related information collectively. This paper describes the VFILM architecture, a VFILM prototype that works with Air Force Research Laboratory IM services, and the results of experiments showing VFILM's effectiveness and efficiency.
Fast approach to infrared image restoration based on shrinkage functions calibration
NASA Astrophysics Data System (ADS)
Zhang, Chengshuo; Shi, Zelin; Xu, Baoshu; Feng, Bin
2016-05-01
High-quality image restoration in real time is a challenge for infrared imaging systems. We present a fast approach to infrared image restoration based on shrinkage functions calibration. Rather than directly modeling the prior of sharp images to obtain the shrinkage functions, we calibrate them for restoration directly by using the acquirable sharp and blurred image pairs from the same infrared imaging system. The calibration method is employed to minimize the sum of squared errors between sharp images and restored images from the blurred images. Our restoration algorithm is noniterative and its shrinkage functions are stored in the look-up tables, so an architecture solution of pipeline structure can work in real time. We demonstrate the effectiveness of our approach by testing its quantitative performance from simulation experiments and its qualitative performance from a developed wavefront coding infrared imaging system.
Enhancements to the SSME transfer function modeling code
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.
1995-01-01
This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes which enhance the code functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction of ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low pass filtered prior to the modeling process in an
Modeling uncertainty in reservoir loss functions using fuzzy sets
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Simonovic, Slobodan P.
1999-09-01
Imprecision involved in the definition of reservoir loss functions is addressed using fuzzy set theory concepts. A reservoir operation problem is solved using the concepts of fuzzy mathematical programming. Membership functions from fuzzy set theory are used to represent the decision maker's preferences in the definition of shape of loss curves. These functions are assumed to be known and are used to model the uncertainties. Linear and nonlinear optimization models are developed under fuzzy environment. A new approach is presented that involves development of compromise reservoir operating policies based on the rules from the traditional optimization models and their fuzzy equivalents while considering the preferences of the decision maker. The imprecision associated with the definition of penalty and storage zones and uncertainty in the penalty coefficients are the main issues addressed through this study. The models developed are applied to the Green Reservoir, Kentucky. Simulations are performed to evaluate the operating rules generated by the models considering the uncertainties in the loss functions. Results indicate that the reservoir operating policies are sensitive to change in the shapes of loss functions.
A Functional Model of [Fe]-Hydrogenase.
Xu, Tao; Yin, Chih-Juo Madeline; Wodrich, Matthew D; Mazza, Simona; Schultz, Katherine M; Scopelliti, Rosario; Hu, Xile
2016-03-16
[Fe]-Hydrogenase catalyzes the hydrogenation of a biological substrate via the heterolytic splitting of molecular hydrogen. While many synthetic models of [Fe]-hydrogenase have been prepared, none yet are capable of activating H2 on their own. Here, we report the first Fe-based functional mimic of the active site of [Fe]-hydrogenase, which was developed based on a mechanistic understanding. The activity of this iron model complex is enabled by its unique ligand environment, consisting of biomimetic pyridinylacyl and carbonyl ligands, as well as a bioinspired diphosphine ligand with a pendant amine moiety. The model complex activates H2 and mediates hydrogenation of an aldehyde. PMID:26926708
Pattern Formation and Functionality in Swarm Models
NASA Astrophysics Data System (ADS)
Rauch, Erik; Millonas, Mark; Chialvo, Dante
1996-03-01
We explore a simplified class of models we call swarms, which are inspired by the collective behavior of social insects. We perform a mean-field type stability analysis and numerical simulations of the model. Several interesting types of functional behavior appear in the vicinity of a second order phase transition, including the formation of stable lines of traffic flow, memory consolidation, and bootstrapping. In addition to providing an understanding of certain classes of biological behavior, these models bear a generic resemblence to a number of pattern formation processes in the physical sciences.
Making metals transparent: a circuit model approach.
Molero, Carlos; Medina, Francisco; Rodríguez-Berral, Rauĺ; Mesa, Francisco
2016-05-16
Solid metal films are well known to be opaque to electromagnetic waves over a wide frequency range, from low frequency to optics. High values of the conductivity at relatively low frequencies or negative values of the permittivity at the optical regime provide the macroscopic explanation for such opacity. In the microwave range, even extremely thin metal layers (much smaller than the skin depth at the operation frequency) reflect most of the impinging electromagnetic energy, thus precluding significant transmission. However, a drastic resonant narrow-band enhancement of the transparency has recently been reported. The quasi-transparent window is opened by placing the metal film between two symmetrically arranged and closely spaced copper strip gratings. This letter proposes an analytical circuit model that yields a simple explanation to this unexpected phenomenon. The proposed approach avoids the use of lengthy numerical calculations and suggests how the transmissivity can be controlled and enhanced by manipulating the values of the electrical parameters of the associated circuit model. PMID:27409851
Maximum entropy models of ecosystem functioning
Bertram, Jason
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Maximum entropy models of ecosystem functioning
NASA Astrophysics Data System (ADS)
Bertram, Jason
2014-12-01
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Funnel function approach to determine uncertainty: Some advances
NASA Astrophysics Data System (ADS)
Routh, P. S.
2006-12-01
Given a finite number of noisy data it is difficult (perhaps impossible) to obtain unique average of the model value in any region of the model (Backus & Gilbert, 1970; Oldenburg, 1983). This difficulty motivated Backus and Gilbert to construct the averaging kernels that is in some sense close to delta function. Averaging kernels describe how the true model is averaged over the entire domain to generate the model value in the region of interest. An unique average value is difficult to obtain theoretically. However we can compute the bounds on the average value and this allows us to obtain a measure of uncertainty. This idea was proposed by Oldenburg (1983). As the region of interest increases the uncertainty decreases associated with the average value giving a funnel like shape. Mathematically this is equivalent to solving minimization and maximization problem of average value (Oldenburg, 1983). In this work I developed a nonlinear interior point method to solve this min-max problem and construct the bounds. The bounds determined in this manner honors all types of available information: (a) geophysical data with errors (b) deterministic or statistical prior information and (c ) complementary information from other data sets at different scales (such as hydrology or other geophysical data) if they are formulated in a joint inversion framework.
Using computational models to relate structural and functional brain connectivity
Hlinka, Jaroslav; Coombes, Stephen
2012-01-01
Modern imaging methods allow a non-invasive assessment of both structural and functional brain connectivity. This has lead to the identification of disease-related alterations affecting functional connectivity. The mechanism of how such alterations in functional connectivity arise in a structured network of interacting neural populations is as yet poorly understood. Here we use a modeling approach to explore the way in which this can arise and to highlight the important role that local population dynamics can have in shaping emergent spatial functional connectivity patterns. The local dynamics for a neural population is taken to be of the Wilson–Cowan type, whilst the structural connectivity patterns used, describing long-range anatomical connections, cover both realistic scenarios (from the CoComac database) and idealized ones that allow for more detailed theoretical study. We have calculated graph–theoretic measures of functional network topology from numerical simulations of model networks. The effect of the form of local dynamics on the observed network state is quantified by examining the correlation between structural and functional connectivity. We document a profound and systematic dependence of the simulated functional connectivity patterns on the parameters controlling the dynamics. Importantly, we show that a weakly coupled oscillator theory explaining these correlations and their variation across parameter space can be developed. This theoretical development provides a novel way to characterize the mechanisms for the breakdown of functional connectivity in diseases through changes in local dynamics. PMID:22805059
Density Functional Theory Models for Radiation Damage
NASA Astrophysics Data System (ADS)
Dudarev, S. L.
2013-07-01
Density functional theory models developed over the past decade provide unique information about the structure of nanoscale defects produced by irradiation and about the nature of short-range interaction between radiation defects, clustering of defects, and their migration pathways. These ab initio models, involving no experimental input parameters, appear to be as quantitatively accurate and informative as the most advanced experimental techniques developed for the observation of radiation damage phenomena. Density functional theory models have effectively created a new paradigm for the scientific investigation and assessment of radiation damage effects, offering new insight into the origin of temperature- and dose-dependent response of materials to irradiation, a problem of pivotal significance for applications.
Towards a New Approach to Dual Resonance Model Phenomenology
NASA Astrophysics Data System (ADS)
Torres, Ethan
2014-09-01
We have taken steps toward finding a dual-resonance (DR) model appropriate for phenomenological fits that can be built from an DR operator formalism which is attractive for its projective group gauge symmetries and factorization properties. This is done by attempting to generalize an approach [Szczepaniak, Adam, and Pennington, M.R., Application of the Veneziano Model in Charmonium Dalitz Plot Analysis, arXiv:1403.5782] of isolating DR poles by making all but one of the residues of on infinite sum of modified beta functions vanish. This leaves a closed-form amplitude that has a finite set of adjustable parameters and with only one ad hoc modification necessary for maintaining Regge asymptotic behavior. We have generalized this approach to double and single Regge limits of the DR five-point function with a pending application to pγ* -->K+K- p . Generalizations for (N - 3) -tuple Regge limits for N-point amplitudes can be gleaned from this work but a more rigorous treatment has been considered. Preliminary results suggest that these amplitudes may take the form of an expectation value of an infinite sum of an alternating product of vertex operators and Gervais-Neveu propagators. We have taken steps toward finding a dual-resonance (DR) model appropriate for phenomenological fits that can be built from an DR operator formalism which is attractive for its projective group gauge symmetries and factorization properties. This is done by attempting to generalize an approach [Szczepaniak, Adam, and Pennington, M.R., Application of the Veneziano Model in Charmonium Dalitz Plot Analysis, arXiv:1403.5782] of isolating DR poles by making all but one of the residues of on infinite sum of modified beta functions vanish. This leaves a closed-form amplitude that has a finite set of adjustable parameters and with only one ad hoc modification necessary for maintaining Regge asymptotic behavior. We have generalized this approach to double and single Regge limits of the DR five
A phase-space approach for propagating field-field correlation functions
NASA Astrophysics Data System (ADS)
Gradoni, Gabriele; Creagh, Stephen C.; Tanner, Gregor; Smartt, Christopher; Thomas, David W. P.
2015-09-01
We show that radiation from complex and inherently random but correlated wave sources can be modelled efficiently by using an approach based on the Wigner distribution function. Our method exploits the connection between correlation functions and the Wigner function and admits in its simplest approximation a direct representation in terms of the evolution of ray densities in phase space. We show that next leading order corrections to the ray-tracing approximation lead to Airy-function type phase space propagators. By exploiting the exact Wigner function propagator, inherently wave-like effects such as evanescent decay or radiation from more heterogeneous sources as well as diffraction and reflection can be included and analysed. We discuss in particular the role of evanescent waves in the near-field of non-paraxial sources and give explicit expressions for the growth rate of the correlation length as a function of the distance from the source. The approximations are validated using full-wave simulations of model sources. In particular, results for the reflection of partially coherent sources from flat mirrors are given where the influence of Airy function corrections can be demonstrated. We focus here on electromagnetic sources at microwave frequencies and modelling efforts in the context of electromagnetic compatibility.
Defining the neurocircuitry of borderline personality disorder: functional neuroimaging approaches.
Brendel, Gary R; Stern, Emily; Silbersweig, David A
2005-01-01
Functional neuroimaging recently has been used to localize brain dysfunction in borderline personality disorder (BPD). Initial studies have examined baseline activity or emotional reactivity, and our group has investigated what we consider to be a crucial interaction between negative emotion and behavioral (dys)control. This research is beginning to identify abnormal frontolimbic circuitry likely underlying core clinical features of this condition. We review the evidence for dysfunction in specific frontolimbic regions, leading to a mechanistic model of symptom formation in BPD. In addition, we offer an integration of these neuroimaging findings with developmental perspectives on the emergence of borderline psychopathology, focusing on the ways in which early psychosocial experience may interact with developing brain systems. We also consider possible mechanisms of psychotherapeutic change at the neural systems level in BPD. Finally, we propose that future neuroimaging studies of BPD should integrate multiple levels of observation (structural, functional, neurochemical, genetic, and clinical) in a model-driven fashion to further understand the dynamic relationship between biological and psychological factors in the development and treatment of this difficult condition. PMID:16613437
Gene function hypotheses for the Campylobacter jejuni glycome generated by a logic-based approach.
Sternberg, Michael J E; Tamaddoni-Nezhad, Alireza; Lesk, Victor I; Kay, Emily; Hitchen, Paul G; Cootes, Adrian; van Alphen, Lieke B; Lamoureux, Marc P; Jarrell, Harold C; Rawlings, Christopher J; Soo, Evelyn C; Szymanski, Christine M; Dell, Anne; Wren, Brendan W; Muggleton, Stephen H
2013-01-01
Increasingly, experimental data on biological systems are obtained from several sources and computational approaches are required to integrate this information and derive models for the function of the system. Here, we demonstrate the power of a logic-based machine learning approach to propose hypotheses for gene function integrating information from two diverse experimental approaches. Specifically, we use inductive logic programming that automatically proposes hypotheses explaining the empirical data with respect to logically encoded background knowledge. We study the capsular polysaccharide biosynthetic pathway of the major human gastrointestinal pathogen Campylobacter jejuni. We consider several key steps in the formation of capsular polysaccharide consisting of 15 genes of which 8 have assigned function, and we explore the extent to which functions can be hypothesised for the remaining 7. Two sources of experimental data provide the information for learning-the results of knockout experiments on the genes involved in capsule formation and the absence/presence of capsule genes in a multitude of strains of different serotypes. The machine learning uses the pathway structure as background knowledge. We propose assignments of specific genes to five previously unassigned reaction steps. For four of these steps, there was an unambiguous optimal assignment of gene to reaction, and to the fifth, there were three candidate genes. Several of these assignments were consistent with additional experimental results. We therefore show that the logic-based methodology provides a robust strategy to integrate results from different experimental approaches and propose hypotheses for the behaviour of a biological system. PMID:23103756
A Network Approach to Rare Disease Modeling
NASA Astrophysics Data System (ADS)
Ghiassian, Susan; Rabello, Sabrina; Sharma, Amitabh; Wiest, Olaf; Barabasi, Albert-Laszlo
2011-03-01
Network approaches have been widely used to better understand different areas of natural and social sciences. Network Science had a particularly great impact on the study of biological systems. In this project, using biological networks, candidate drugs as a potential treatment of rare diseases were identified. Developing new drugs for more than 2000 rare diseases (as defined by ORPHANET) is too expensive and beyond expectation. Disease proteins do not function in isolation but in cooperation with other interacting proteins. Research on FDA approved drugs have shown that most of the drugs do not target the disease protein but a protein which is 2 or 3 steps away from the disease protein in the Protein-Protein Interaction (PPI) network. We identified the already known drug targets in the disease gene's PPI subnetwork (up to the 3rd neighborhood) and among them those in the same sub cellular compartment and higher coexpression coefficient with the disease gene are expected to be stronger candidates. Out of 2177 rare diseases, 1092 were found not to have any drug target. Using the above method, we have found the strongest candidates among the rest in order to further experimental validations.
Decision making in bipolar disorder: a cognitive modeling approach.
Yechiam, Eldad; Hayden, Elizabeth P; Bodkins, Misty; O'Donnell, Brian F; Hetrick, William P
2008-11-30
A formal modeling approach was used to characterize decision-making processes in bipolar disorder. Decision making was examined in 28 bipolar patients (14 acute and 14 remitted) and 25 controls using the Iowa Gambling Task (Bechara et al., 1994), a decision-making task used for assessing cognitive impulsivity. To disentangle motivational and cognitive aspects of decision-making processes, we applied a formal cognitive model to the performance on the Iowa Gambling Task. The model has three parameters: The relative impact of rewards and punishments on evaluations, the impact of recent and past payoffs, and the degree of choice consistency. The results indicated that acute bipolar patients were characterized by low choice consistency, or a tendency to make erratic choices. Low choice consistency improved the prediction of acute bipolar disorder beyond that provided by cognitive functioning and self-report measures of personality and temperament. PMID:18848361
Transfer function approach based on simulation results for the determination of pod curves
NASA Astrophysics Data System (ADS)
Demeyer, S.; Jenson, F.; Dominguez, N.; Iakovleva, E.
2012-05-01
POD curves estimations are based on statistical studies of empirical data which are obtained thru costly and time consuming experimental campaigns. Currently, cost reduction of POD trials is a major issue. A proposed solution is to replace some of the experimental data required to determine the POD with model based results. Following this idea, the concept of Model Assisted POD (MAPOD) has been introduced first in the US in 2004 through the constitution of the MAPOD working group. One approach to Model Assisted POD is based on a transfer function which uses empirical data and models to transfer POD measured for one specific application to another related application. The objective of this paper is to show how numerical simulations could help to determine such transfer functions. A practical implementation of the approach to a high frequency eddy current inspection for fatigue cracks is presented. Empirical data is available for the titanium alloy plates. A model based transfer function is used to assess a POD curve for the inspection of aluminum components.
New approaches for modelling cancer mechanisms in the mouse.
Maddison, Kathryn; Clarke, Alan R
2005-01-01
Mouse models of human cancer are vital to our understanding of the neoplastic process, and to advances in both basic and clinical research. Indeed, models of many of the major human tumours are now available and are subject to constant revision to more faithfully recapitulate human disease. Despite these advances, it is important to recognize that limitations do exist to the current range of models. The principal approach to modelling has relied upon the use of constitutive gene knockouts, which can often result in embryonic lethality, can potentially be affected by developmental compensation, and which do not mimic the sporadic development of a tumour expanding from a single cell in an otherwise normal environment. Furthermore, simple knockouts are usually designed to lead to loss of protein function, whereas a subset of cancer-causing mutations clearly results in gain of function. These drawbacks are well recognized and this review describes some of the approaches used to address these issues. Key amongst these is the development of conditional alleles that precisely mimic the mutations found in vivo, and which can be spatially and tissue-specifically controlled using 'smart' systems such as the tetracycline system and Cre-Lox technology. Examples of genes being manipulated in this way include Ki-Ras, Myc, and p53. These new developments in modelling mean that any mutant allele can potentially be turned on or off, or over- or under-expressed, in any tissue at any stage of the life-cycle of the mouse. This will no doubt lead to ever more accurate and powerful mouse models to dissect the genetic pathways that lead to cancer. PMID:15641017
The Generating Function Approach for Peptide Identification in Spectral Networks
Guthals, Adrian; Boucher, Christina
2015-01-01
Abstract Tandem mass (MS/MS) spectrometry has become the method of choice for protein identification and has launched a quest for the identification of every translated protein and peptide. However, computational developments have lagged behind the pace of modern data acquisition protocols and have become a major bottleneck in proteomics analysis of complex samples. As it stands today, attempts to identify MS/MS spectra against large databases (e.g., the human microbiome or 6-frame translation of the human genome) face a search space that is 10–100 times larger than the human proteome, where it becomes increasingly challenging to separate between true and false peptide matches. As a result, the sensitivity of current state-of-the-art database search methods drops by nearly 38% to such low identification rates that almost 90% of all MS/MS spectra are left as unidentified. We address this problem by extending the generating function approach to rigorously compute the joint spectral probability of multiple spectra being matched to peptides with overlapping sequences, thus enabling the confident assignment of higher significance to overlapping peptide–spectrum matches (PSMs). We find that these joint spectral probabilities can be several orders of magnitude more significant than individual PSMs, even in the ideal case when perfect separation between signal and noise peaks could be achieved per individual MS/MS spectrum. After benchmarking this approach on a typical lysate MS/MS dataset, we show that the proposed intersecting spectral probabilities for spectra from overlapping peptides improve peptide identification by 30–62%. PMID:25423621
The generating function approach for Peptide identification in spectral networks.
Guthals, Adrian; Boucher, Christina; Bandeira, Nuno
2015-05-01
Tandem mass (MS/MS) spectrometry has become the method of choice for protein identification and has launched a quest for the identification of every translated protein and peptide. However, computational developments have lagged behind the pace of modern data acquisition protocols and have become a major bottleneck in proteomics analysis of complex samples. As it stands today, attempts to identify MS/MS spectra against large databases (e.g., the human microbiome or 6-frame translation of the human genome) face a search space that is 10-100 times larger than the human proteome, where it becomes increasingly challenging to separate between true and false peptide matches. As a result, the sensitivity of current state-of-the-art database search methods drops by nearly 38% to such low identification rates that almost 90% of all MS/MS spectra are left as unidentified. We address this problem by extending the generating function approach to rigorously compute the joint spectral probability of multiple spectra being matched to peptides with overlapping sequences, thus enabling the confident assignment of higher significance to overlapping peptide-spectrum matches (PSMs). We find that these joint spectral probabilities can be several orders of magnitude more significant than individual PSMs, even in the ideal case when perfect separation between signal and noise peaks could be achieved per individual MS/MS spectrum. After benchmarking this approach on a typical lysate MS/MS dataset, we show that the proposed intersecting spectral probabilities for spectra from overlapping peptides improve peptide identification by 30-62%. PMID:25423621
Executive function and food approach behavior in middle childhood
Groppe, Karoline; Elsner, Birgit
2014-01-01
Executive function (EF) has long been considered to be a unitary, domain-general cognitive ability. However, recent research suggests differentiating “hot” affective and “cool” cognitive aspects of EF. Yet, findings regarding this two-factor construct are still inconsistent. In particular, the development of this factor structure remains unclear and data on school-aged children is lacking. Furthermore, studies linking EF and overweight or obesity suggest that EF contributes to the regulation of eating behavior. So far, however, the links between EF and eating behavior have rarely been investigated in children and non-clinical populations. First, we examined whether EF can be divided into hot and cool factors or whether they actually correspond to a unitary construct in middle childhood. Second, we examined how hot and cool EF are associated with different eating styles that put children at risk of becoming overweight during development. Hot and cool EF were assessed experimentally in a non-clinical population of 1657 elementary-school children (aged 6–11 years). The “food approach” behavior was rated mainly via parent questionnaires. Findings indicate that hot EF is distinguishable from cool EF. However, only cool EF seems to represent a coherent functional entity, whereas hot EF does not seem to be a homogenous construct. This was true for a younger and an older subgroup of children. Furthermore, different EF components were correlated with eating styles, such as responsiveness to food, desire to drink, and restrained eating in girls but not in boys. This shows that lower levels of EF are not only seen in clinical populations of obese patients but are already associated with food approach styles in a normal population of elementary school-aged girls. Although the direction of effect still has to be clarified, results point to the possibility that EF constitutes a risk factor for eating styles contributing to the development of overweight in the long
Predicting plants -modeling traits as a function of environment
NASA Astrophysics Data System (ADS)
Franklin, Oskar
2016-04-01
A central problem in understanding and modeling vegetation dynamics is how to represent the variation in plant properties and function across different environments. Addressing this problem there is a strong trend towards trait-based approaches, where vegetation properties are functions of the distributions of functional traits rather than of species. Recently there has been enormous progress in in quantifying trait variability and its drivers and effects (Van Bodegom et al. 2012; Adier et al. 2014; Kunstler et al. 2015) based on wide ranging datasets on a small number of easily measured traits, such as specific leaf area (SLA), wood density and maximum plant height. However, plant function depends on many other traits and while the commonly measured trait data are valuable, they are not sufficient for driving predictive and mechanistic models of vegetation dynamics -especially under novel climate or management conditions. For this purpose we need a model to predict functional traits, also those not easily measured, and how they depend on the plants' environment. Here I present such a mechanistic model based on fitness concepts and focused on traits related to water and light limitation of trees, including: wood density, drought response, allocation to defense, and leaf traits. The model is able to predict observed patterns of variability in these traits in relation to growth and mortality, and their responses to a gradient of water limitation. The results demonstrate that it is possible to mechanistically predict plant traits as a function of the environment based on an eco-physiological model of plant fitness. References Adier, P.B., Salguero-Gómez, R., Compagnoni, A., Hsu, J.S., Ray-Mukherjee, J., Mbeau-Ache, C. et al. (2014). Functional traits explain variation in plant lifehistory strategies. Proc. Natl. Acad. Sci. U. S. A., 111, 740-745. Kunstler, G., Falster, D., Coomes, D.A., Hui, F., Kooyman, R.M., Laughlin, D.C. et al. (2015). Plant functional traits
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
Agents: An approach for dynamic process modelling
NASA Astrophysics Data System (ADS)
Grohmann, Axel; Kopetzky, Roland; Lurk, Alexander
1999-03-01
With the growing amount of distributed and heterogeneous information and services, conventional information systems have come to their limits. This gave rise to the development of a Multi-Agent System (the "Logical Client") which can be used in complex information systems as well as in other advanced software systems. Computer agents are proactive, reactive and social. They form a community of independent software components that can communicate and co-operate in order to accomplish complex tasks. Thus the agent-oriented paradigm provides a new and powerful approach to programming distributed systems. The communication framework developed is based on standards like CORBA, KQML and KIF. It provides an embedded rule based system to find adequate reactions to incoming messages. The macro-architecture of the Logical Client consists of independent agents and uses artificial intelligence to cope with complex patterns of communication and actions. A set of system agents is also provided, including the Strategy Service as a core component for modelling processes at runtime, the Computer Supported Cooperative Work (CSCW) Component for supporting remote co-operation between human users and the Repository for managing and hiding the file based data flow in heterogeneous networks. This architecture seems to be capable of managing complexity in information systems. It is also being implemented in a complex simulation system that monitors and simulates the environmental radioactivity in the country Baden-Württemberg.
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage being that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital based code.
Cheng, Longlong; Zhang, Guangju; Wan, Baikun; Hao, Linlin; Qi, Hongzhi; Ming, Dong
2009-01-01
Functional electrical stimulation (FES) has been widely used in the area of neural engineering. It utilizes electrical current to activate nerves innervating extremities affected by paralysis. An effective combination of a traditional PID controller and a neural network, being capable of nonlinear expression and adaptive learning property, supply a more reliable approach to construct FES controller that help the paraplegia complete the action they want. A FES system tuned by Radial Basis Function (RBF) Neural Network-based Proportional-Integral-Derivative (PID) model was designed to control the knee joint according to the desired trajectory through stimulation of lower limbs muscles in this paper. Experiment result shows that the FES system with RBF Neural Network-based PID model get a better performance when tracking the preset trajectory of knee angle comparing with the system adjusted by Ziegler- Nichols tuning PID model. PMID:19964991
Probabilistic downscaling approaches: Application to wind cumulative distribution functions
NASA Astrophysics Data System (ADS)
Michelangeli, P.-A.; Vrac, M.; Loukos, H.
2009-06-01
A statistical method is developed to generate local cumulative distribution functions (CDFs) of surface climate variables from large-scale fields. Contrary to most downscaling methods producing continuous time series, our “probabilistic downscaling methods” (PDMs), named “CDF-transform”, is designed to deal with and provide local-scale CDFs through a transformation applied to large-scale CDFs. First, our PDM is compared to a reference method (Quantile-matching), and validated on a historical time period by downscaling CDFs of wind intensity anomalies over France, for reanalyses and simulations from a general circulation model (GCM). Then, CDF-transform is applied to GCM output fields to project changes in wind intensity anomalies for the 21st century under A2 scenario. Results show a decrease in wind anomalies for most weather stations, ranging from less than 1% (in the South) to nearly 9% (in the North), with a maximum in the Brittany region.
Ocean acoustic signal processing: A model-based approach
Candy, J.V. ); Sullivan, E.J. )
1992-12-01
A model-based approach is proposed to solve the ocean acoustic signal processing problem that is based on a state-space representation of the normal-mode propagation model. It is shown that this representation can be utilized to spatially propagate both modal (depth) and range functions given the basic parameters (wave numbers, etc.) developed from the solution of the associated boundary value problem. This model is then generalized to the stochastic case where an approximate Gauss--Markov model evolves. The Gauss--Markov representation, in principle, allows the inclusion of stochastic phenomena such as noise and modeling errors in a consistent manner. Based on this framework, investigations are made of model-based solutions to the signal enhancement, detection and related parameter estimation problems. In particular, a modal/pressure field processor is designed that allows {ital in} {ital situ} recursive estimation of the sound velocity profile. Finally, it is shown that the associated residual or so-called innovation sequence that ensues from the recursive nature of this formulation can be employed to monitor the model's fit to the data and also form the basis of a sequential detector.
A Functional Genomic Approach to Chlorinated Ethenes Bioremediation
NASA Astrophysics Data System (ADS)
Lee, P. K.; Brodie, E. L.; MacBeth, T. W.; Deeb, R. A.; Sorenson, K. S.; Andersen, G. L.; Alvarez-Cohen, L.
2007-12-01
With the recent advances in genomic sciences, a knowledge-based approach can now be taken to optimize the bioremediation of trichloroethene (TCE). During the bioremediation of a heterogeneous subsurface, it is vital to identify and quantify the functionally important microorganisms present, characterize the microbial community and measure their physiological activity. In our field experiments, quantitative PCR (qPCR) was coupled with reverse-transcription (RT) to analyze both copy numbers and transcripts expressed by the 16S rRNA gene and three reductive dehalogenase (RDase) genes as biomarkers of Dehalococcoides spp. in the groundwater of a TCE-DNAPL site at Ft. Lewis (WA) that was serially subjected to biostimulation and bioaugmentation. Genes in the Dehalococcoides genus were targeted as they are the only known organisms that can completely dechlorinate TCE to the innocuous product ethene. Biomarker quantification revealed an overall increase of more than three orders of magnitude in the total Dehalococcoides population and quantification of the more liable and stringently regulated mRNAs confirmed that Dehalococcoides spp. were active. Parallel with our field experiments, laboratory studies were conducted to explore the physiology of Dehalococcoides isolates in order to develop relevant biomarkers that are indicative of the metabolic state of cells. Recently, we verified the function of the nitrogenase operon in Dehalococcoides sp. strain 195 and nitrogenase-encoding genes are ideal biomarker targets to assess cellular nitrogen requirement. To characterize the microbial community, we applied a high-density phylogenetic microarray (16S PhyloChip) that simultaneous monitors over 8,700 unique taxa to track the bacterial and archaeal populations through different phases of treatment. As a measure of species richness, 1,300 to 1,520 taxa were detected in groundwater samples extracted during different stages of treatment as well as in the bioaugmentation culture. We
A secured e-tendering modeling using misuse case approach
NASA Astrophysics Data System (ADS)
Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman
2016-08-01
Major risk factors relating to electronic transactions may lead to destructive impacts on trust and transparency in the process of tendering. Currently, electronic tendering (e-tendering) systems still remain uncertain in issues relating to legal and security compliance and most importantly it has an unclear security framework. Particularly, the available systems are lacking in addressing integrity, confidentiality, authentication, and non-repudiation in e-tendering requirements. Thus, one of the challenges in developing an e-tendering system is to ensure the system requirements include the function for secured and trusted environment. Therefore, this paper aims to model a secured e-tendering system using misuse case approach. The modeling process begins with identifying the e-tendering process, which is based on the Australian Standard Code of Tendering (AS 4120-1994). It is followed by identifying security threats and their countermeasure. Then, the e-tendering was modelled using misuse case approach. The model can contribute to e-tendering developers and also to other researchers or experts in the e-tendering domain.
A functional proteomics approach to the comprehension of sarcoidosis.
Landi, C; Bargagli, E; Carleo, A; Bianchi, L; Gagliardi, A; Cillis, G; Perari, M G; Refini, R M; Prasse, A; Bini, L; Rottoli, P
2015-10-14
Pulmonary sarcoidosis (Sar) is an idiopathic disease histologically typified by non-caseating epitheliod cell sarcoid granulomas. A cohort of 37 Sar patients with chronic persistent pulmonary disease was described in this study. BAL protein profiles from 9 of these Sar patients were compared with those from 8 smoker (SC) and 10 no-smoker controls (NSC) by proteomic approach. Principal Component Analysis was performed to clusterize the samples in the corresponding conditions highlighting a differential pattern profiles primarily in Sar than SC. Spot identification reveals thirty-four unique proteins involved in lipid, mineral, and vitamin Dmetabolism, and immuneregulation of macrophage function. Enrichment analysis has been elaborated by MetaCore, revealing 14-3-3ε, α1-antitrypsin, GSTP1, and ApoA1 as "central hubs". Process Network as well as Pathway Maps underline proteins involved in immune response and inflammation induced by complement system, innate inflammatory response and IL-6signalling. Disease Biomarker Network highlights Tuberculosis and COPD as pathologies that share biomarkers with sarcoidosis. In conclusion, Sar protein expression profile seems more similar to that of NSC than SC, conversely to other ILDs. Moreover, Disease Biomarker Network revealed several common features between Sar and TB, exhorting to orientate the future proteomics investigations also in comparative BALF analysis of Sar and TB. PMID:26342673
New Approaches for the Study of Orexin Function
Yamanaka, A; Tsunematsu, T
2010-01-01
Orexin is a neuropeptide produced by a specific subset of neurones located in the lateral hypothalamic area. Mice lacking either prepro-orexin or orexin receptor 2, as well as those in which orexin-producing neurones (orexin neurones) are deleted, share a common phenotype: altered sleep–wake regulation and the sudden onset of muscle atonia. These symptoms are similar to the human sleep disorder narcolepsy. In this review, we describe recent advances in the study of orexin function with a particular emphasis on microscopic techniques that better characterise the neuronal networks involving orexin neurones, as well as recent optogenetic approaches that allow for the activation or inhibition of specific neurones by expressing different light-activated proteins. In particular, the use of orexin/halorhodopsin and orexin/channelrhodopsin-2 transgenic mice has demonstrated an important role for orexin neurones in regulating the sleep–wake cycle and state of arousal in vivo. Further refinement of these in vitro and in vivo techniques will allow for a more detailed understanding of the interaction of orexin with other neurotransmitter pathways in the brain. PMID:20456607
A simplified approach to calculate atomic partition functions in plasmas
D'Ammando, Giuliano; Colonna, Gianpiero
2013-03-15
A simplified method to calculate the electronic partition functions and the corresponding thermodynamic properties of atomic species is presented and applied to C(I) up to C(VI) ions. The method consists in reducing the complex structure of an atom to three lumped levels. The ground level of the lumped model describes the ground term of the real atom, while the second lumped level represents the low lying states and the last one groups all the other atomic levels. It is also shown that for the purpose of thermodynamic function calculation, the energy and the statistical weight of the upper lumped level, describing high-lying excited atomic states, can be satisfactorily approximated by an analytic hydrogenlike formula. The results of the simplified method are in good agreement with those obtained by direct summation over a complete set (i.e., including all possible terms and configurations below a given cutoff energy) of atomic energy levels. The method can be generalized to include more lumped levels in order to improve the accuracy.
Insight and executive functioning in schizophrenia: a multidimensional approach.
Raffard, Stéphane; Bayard, Sophie; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine; Maggi, Maximilien; Barbotte, Emeline; Morris, Davina; Boulenger, Jean-Philippe
2009-05-30
Past research suggests that unawareness of illness in schizophrenia is associated with deficits in executive functions; however, the relationships between executive processes and the various dimensions of insight are still unclear. Recent models of executive functioning have proposed that four executive processes - inhibition, updating, shifting and dual task coordination - are moderately related yet separable. In this study, we proposed to investigate and clarify the relationships between insight dimensions and the aforementioned four executive components. A total of 60 patients were administered the Test for Attentional Performance and the Scale to Assess Unawareness of Mental Disorder. The effect of potential confounding variables such as medication, symptomatology, demography, psycho-affective state, and general processing speed were also examined in a preliminary statistical analysis. We found that both awareness of disorder and awareness of response to medication were significantly related to Updating. Awareness of the social consequences of the disease was significantly related to Updating, Divided Attention and Inhibition Processes. The analysis indicates that poor insight in schizophrenia may be partially related to executive dysfunction. Finally, our study emphasizes the possible role of neuropsychological intervention in improving patients' insight into illness. PMID:19395049
A Geometric Approach to Decouple Robotino Motions and its Functional Controllability
NASA Astrophysics Data System (ADS)
Straßberger, Daniel; Mercorelli, Paolo; Sergiyenko, Oleg
2015-11-01
This paper analyses a functional control of the Robotino. The proposed control strategy considers a functional decoupling control strategy realized using a geometric approach and the invertibility property of the DC-drives with which the Robotino is equipped. For a given control structure the functional controllability is proven for motion trajectories of class C3, continuous functions with third derivative also being continuous. Horizontal, Vertical and Angular motions are considered and the decoupling between these motions is obtained. Control simulation results using real data of the Robotino are shown. The used control which enables to produce the presented results is a standard Linear Model Predictive Control (LMPC), even though for sake of brevity the standard algorithm is not shown.
Mathematical Models of Cardiac Pacemaking Function
NASA Astrophysics Data System (ADS)
Li, Pan; Lines, Glenn T.; Maleckar, Mary M.; Tveito, Aslak
2013-10-01
Over the past half century, there has been intense and fruitful interaction between experimental and computational investigations of cardiac function. This interaction has, for example, led to deep understanding of cardiac excitation-contraction coupling; how it works, as well as how it fails. However, many lines of inquiry remain unresolved, among them the initiation of each heartbeat. The sinoatrial node, a cluster of specialized pacemaking cells in the right atrium of the heart, spontaneously generates an electro-chemical wave that spreads through the atria and through the cardiac conduction system to the ventricles, initiating the contraction of cardiac muscle essential for pumping blood to the body. Despite the fundamental importance of this primary pacemaker, this process is still not fully understood, and ionic mechanisms underlying cardiac pacemaking function are currently under heated debate. Several mathematical models of sinoatrial node cell membrane electrophysiology have been constructed as based on different experimental data sets and hypotheses. As could be expected, these differing models offer diverse predictions about cardiac pacemaking activities. This paper aims to present the current state of debate over the origins of the pacemaking function of the sinoatrial node. Here, we will specifically review the state-of-the-art of cardiac pacemaker modeling, with a special emphasis on current discrepancies, limitations, and future challenges.
Model Adequacy and the Macroevolution of Angiosperm Functional Traits.
Pennell, Matthew W; FitzJohn, Richard G; Cornwell, William K; Harmon, Luke J
2015-08-01
Making meaningful inferences from phylogenetic comparative data requires a meaningful model of trait evolution. It is thus important to determine whether the model is appropriate for the data and the question being addressed. One way to assess this is to ask whether the model provides a good statistical explanation for the variation in the data. To date, researchers have focused primarily on the explanatory power of a model relative to alternative models. Methods have been developed to assess the adequacy, or absolute explanatory power, of phylogenetic trait models, but these have been restricted to specific models or questions. Here we present a general statistical framework for assessing the adequacy of phylogenetic trait models. We use our approach to evaluate the statistical performance of commonly used trait models on 337 comparative data sets covering three key angiosperm functional traits. In general, the models we tested often provided poor statistical explanations for the evolution of these traits. This was true for many different groups and at many different scales. Whether such statistical inadequacy will qualitatively alter inferences drawn from comparative data sets will depend on the context. Regardless, assessing model adequacy can provide interesting biological insights-how and why a model fails to describe variation in a data set give us clues about what evolutionary processes may have driven trait evolution across time. PMID:26655160
Ryll, A; Bucher, J; Bonin, A; Bongard, S; Gonçalves, E; Saez-Rodriguez, J; Niklas, J; Klamt, S
2014-10-01
Systems biology has to increasingly cope with large- and multi-scale biological systems. Many successful in silico representations and simulations of various cellular modules proved mathematical modelling to be an important tool in gaining a solid understanding of biological phenomena. However, models spanning different functional layers (e.g. metabolism, signalling and gene regulation) are still scarce. Consequently, model integration methods capable of fusing different types of biological networks and various model formalisms become a key methodology to increase the scope of cellular processes covered by mathematical models. Here we propose a new integration approach to couple logical models of signalling or/and gene-regulatory networks with kinetic models of metabolic processes. The procedure ends up with an integrated dynamic model of both layers relying on differential equations. The feasibility of the approach is shown in an illustrative case study integrating a kinetic model of central metabolic pathways in hepatocytes with a Boolean logical network depicting the hormonally induced signal transduction and gene regulation events involved. In silico simulations demonstrate the integrated model to qualitatively describe the physiological switch-like behaviour of hepatocytes in response to nutritionally regulated changes in extracellular glucagon and insulin levels. A simulated failure mode scenario addressing insulin resistance furthermore illustrates the pharmacological potential of a model covering interactions between signalling, gene regulation and metabolism. PMID:25063553
Functional derivatives for multi-scale modeling
NASA Astrophysics Data System (ADS)
Reeve, Samuel; Strachan, Alejandro
2015-03-01
As we look beyond petascale computing and towards the exascale, effectively utilizing computational resources by using multi-fidelity and multi-scale materials simulations becomes increasingly important. Determining when and where to run high-fidelity simulations in order to have the most effect on a given quantity of interest (QoI) is a difficult problem. This work utilizes functional uncertainty quantification (UQ) for this task. While most UQ focuses on uncertainty in output from uncertainty in input parameters, we focus on uncertainty from the function itself (e.g. from using a specific functional form for an interatomic potential or constitutive law). In the case of a multi-scale simulation with a given constitutive law, calculating the functional derivative of the QoI with respect to that constitutive law can determine where a fine-scale model evaluation will maximize the increase in accuracy of the predicted QoI. Additionally, for a given computational budget the optimal set of coarse and fine-scale simulations can be determined. Numerical calculation of the functional derivative has been developed and methods of including this work within existing multi-fidelity and multi-scale orchestrators are explored.
Lightning Modelling: From 3D to Circuit Approach
NASA Astrophysics Data System (ADS)
Moussa, H.; Abdi, M.; Issac, F.; Prost, D.
2012-05-01
The topic of this study is electromagnetic environment and electromagnetic interferences (EMI) effects, specifically the modelling of lightning indirect effects [1] on aircraft electrical systems present on deported and highly exposed equipments, such as nose landing gear (NLG) and nacelle, through a circuit approach. The main goal of the presented work, funded by a French national project: PREFACE, is to propose a simple equivalent electrical circuit to represent a geometrical structure, taking into account mutual, self inductances, and resistances, which play a fundamental role in the lightning current distribution. Then this model is intended to be coupled to a functional one, describing a power train chain composed of: a converter, a shielded power harness and a motor or a set of resistors used as a load for the converter. The novelty here, is to provide a pre-sizing qualitative approach allowing playing on integration in pre-design phases. This tool intends to offer a user-friendly way for replying rapidly to calls for tender, taking into account the lightning constraints. Two cases are analysed: first, a NLG that is composed of tubular pieces that can be easily approximated by equivalent cylindrical straight conductors. Therefore, passive R, L, M elements of the structure can be extracted through analytical engineer formulas such as those implemented in the partial element equivalent circuit (PEEC) [2] technique. Second, the same approach is intended to be applied on an electrical de-icing nacelle sub-system.
Lithium battery aging model based on Dakin's degradation approach
NASA Astrophysics Data System (ADS)
Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel
2016-09-01
This paper proposes and validates a calendar and power cycling aging model for two different lithium battery technologies. The model development is based on previous SIMCAL and SIMSTOCK project data. In these previous projects, the effect of the battery state of charge, temperature and current magnitude on aging was studied on a large panel of different battery chemistries. In this work, data are analyzed using Dakin's degradation approach. In fact, the logarithms of battery capacity fade and the increase in resistance evolves linearly over aging. The slopes identified from straight lines correspond to battery aging rates. Thus, a battery aging rate expression function of aging factors was deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its expansion using Taylor series was consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of the current magnitude and temperature on aging was simulated. Interestingly, the aging rate highly increases with decreasing and increasing temperature for the ranges of -5 °C-25 °C and 25 °C-60 °C, respectively.
The Pleiades mass function: Models versus observations
NASA Astrophysics Data System (ADS)
Moraux, E.; Kroupa, P.; Bouvier, J.
2004-10-01
Two stellar-dynamical models of binary-rich embedded proto-Orion-Nebula-type clusters that evolve to Pleiades-like clusters are studied with an emphasis on comparing the stellar mass function with observational constraints. By the age of the Pleiades (about 100 Myr) both models show a similar degree of mass segregation which also agrees with observational constraints. This thus indicates that the Pleiades is well relaxed and that it is suffering from severe amnesia. It is found that the initial mass function (IMF) must have been indistinguishable from the standard or Galactic-field IMF for stars with mass m ≲ 2 M⊙, provided the Pleiades precursor had a central density of about 104.8 stars/pc3. A denser model with 105.8 stars/pc3 also leads to reasonable agreement with observational constraints, but owing to the shorter relaxation time of the embedded cluster it evolves through energy equipartition to a mass-segregated condition just prior to residual-gas expulsion. This model consequently preferentially loses low-mass stars and brown dwarfs (BDs), but the effect is not very pronounced. The empirical data indicate that the Pleiades IMF may have been steeper than the Salpeter for stars with m⪆ 2 M⊙.
ERIC Educational Resources Information Center
Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.
2011-01-01
Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…
Experimental and modeling approaches for food waste composting: a review.
Li, Zhentong; Lu, Hongwei; Ren, Lixia; He, Li
2013-10-01
Composting has been used as a method to dispose food waste (FW) and recycle organic matter to improve soil structure and fertility. Considering the significance of composting in FW treatment, many researchers have paid their attention on how to improve FW composting efficiency, reduce operating cost, and mitigate the associated environmental damage. This review focuses on the overall studies of FW composting, not only various parameters significantly affecting the processes and final results, but also a number of simulation approaches that are greatly instrumental in well understanding the process mechanism and/or results prediction. Implications of many key ingredients on FW composting performance are also discussed. Perspects of effective laboratory experiments and computer-based simulation are finally investigated, demonstrating many demanding areas for enhanced research efforts, which include the screening of multi-functional additives, volatile organiccompound emission control, necessity of modeling and post-modeling analysis, and usefulness of developing more conjunctive AI-based process control techniques. PMID:23876506
Pathprinting: An integrative approach to understand the functional basis of disease
2013-01-01
New strategies to combat complex human disease require systems approaches to biology that integrate experiments from cell lines, primary tissues and model organisms. We have developed Pathprint, a functional approach that compares gene expression profiles in a set of pathways, networks and transcriptionally regulated targets. It can be applied universally to gene expression profiles across species. Integration of large-scale profiling methods and curation of the public repository overcomes platform, species and batch effects to yield a standard measure of functional distance between experiments. We show that pathprints combine mouse and human blood developmental lineage, and can be used to identify new prognostic indicators in acute myeloid leukemia. The code and resources are available at http://compbio.sph.harvard.edu/hidelab/pathprint PMID:23890051
Vlah, Zvonimir; Seljak, Uroš; Baldauf, Tobias; McDonald, Patrick; Okumura, Teppei E-mail: seljak@physik.uzh.ch E-mail: teppei@ewha.ac.kr
2012-11-01
We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density weighted velocity moments correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so called finger of god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and line of sight, focusing on the lowest order powers of μ and multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.
Dimensionality of electronic excitations in organic semiconductors: A dielectric function approach
NASA Astrophysics Data System (ADS)
Campoy-Quiles, Mariano; Nelson, Jenny; Bradley, Donal D. C.; Etchegoin, Pablo G.
2007-12-01
We present a detailed investigation on the effective dimensionality (associated with the degree of delocalization) of electronic excitations in thin organic films using the dielectric function as obtained from ellipsometry. To this end, we study first the best analytical representation of the optical dielectric function of these materials and compare different approaches found in the literature: (i) the harmonic oscillator approximation, (ii) the standard critical-point model (SCP), (iii) the model dielectric function (MDF), and (iv) the Forouhi-Bloomer model. We use these models to analyze variable angle spectroscopic ellipsometry raw data for a thin poly(9,9-dioctylfluorene) (PFO) film deposited on quartz (taken as an archetypal sample). The superiority of the SCP model for PFO films and a wide range of other spin-coated conjugated polymers (and guest-molecules in polymers) is demonstrated. Moreover, we show how the SCP model can be used to gain physical information on the microscopic structure. As an example, we show that the delocalization of excitons decreases for nonconjugated polymers, such as polymethylmethacrylate and polyimide, while the conjugation length and exciton delocalization are, respectively, enhanced in cases where a planar conformation (e.g., β phase of PFO) or a high degree of crystallinity [e.g., poly(3-hexylthiophene)] is achieved. As an additional example, we employ the SCP excitonic model to investigate the temperature dependence of the dielectric function of crystalline and glassy PFO films. We propose that the SCP excitonic model should be adopted as the standard choice to model the optical properties of polymer thin films from ellipsometry data.
Optimizing experimental design for comparing models of brain function.
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-11-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
Local Bathymetry Estimation Using Variational Inverse Modeling: A Nested Approach
NASA Astrophysics Data System (ADS)
Almeida, T. G.; Walker, D. T.; Farquharson, G.
2014-12-01
Estimation of subreach river bathymetry from remotely-sensed surface velocity data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs). A nested approach is adopted to focus on obtaining an accurate estimate of bathymetry over a small region of interest within a larger complex hydrodynamic system. This approach reduces computational cost significantly. We begin by constructing a minimization problem with a cost function defined by the error between observed and estimated surface velocities, and then apply the SWEs as a constraint on the velocity field. An adjoint SWE model is developed through the use of Lagrange multipliers, converting the unconstrained minimization problem into a constrained one. The adjoint model solution is used to calculate the gradient of the cost function with respect to bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best-fit to the observational data. In this application of the algorithm, the 2D depth-averaged flow is computed within a nested framework using Delft3D-FLOW as the forward computational model. First, an outer simulation is generated using discharge rate and other measurements from USGS and NOAA, assuming a uniform bottom-friction coefficient. Then a nested, higher resolution inner model is constructed using open boundary condition data interpolated from the outer model (see figure). Riemann boundary conditions with specified tangential velocities are utilized to ensure a near seamless transition between outer and inner model results. The initial guess bathymetry matches the outer model bathymetry, and the iterative assimilation procedure is used to adjust the bathymetry only for the inner model. The observation data was collected during the ONR Rivet II field exercise for the mouth of the Columbia River near Hammond, OR. A dual beam squinted along-track-interferometric, synthetic
A zonal model of cortical functions.
Green, H S; Triffet, T
1989-01-01
A model of cortical functions is developed with the object of simulating the observed behavior of individual neurons organized in unit circuits and functional systems of the cerebellum, the cerebrum and the hippocampal formation. The neuronal model is capable of representing refractory and potentiated states, as well as the firing and lowest resting states. The unit circuits of each system consist of all common types of cells with known synaptic connections. In the cerebral system these unit circuits are interconnected to form columns as well as zones. A new discrete neural network equation, which takes account of interactions with the extracellular field, is proposed to simulate electrical activity in these circuits. A coherent theory of cortical activity and functions is derived that accounts for many of the observed phenomena, including those associated with the development of long-term potentiation and sequential memory. Three appendices are devoted to the theory of extracellular interactions, the derivation of non-linear network equations, and a computer program to simulate learning in the cortex. PMID:2779262
de Almeida, Patrícia Maria Duarte
2006-02-01
Considering the body structures and systems loss of function, after a Spinal Cord Injury, with is respective activities limitations and social participation restriction, the rehabilitation process goals are to achieve the maximal functional independence and quality of life allowed by the clinical lesion. For this is necessary a rehabilitation period with a rehabilitation team, including the physiotherapist whose interventions will depend on factors such degree of completeness or incompleteness and patient clinical stage. Physiotherapy approach includes several procedures and techniques related with a traditional model or with the recent perspective of neuronal regeneration. Following a traditional model, the interventions in complete A and incomplete B lesions, is based on compensatory method of functional rehabilitation using the non affected muscles. In the incomplete C and D lesions, motor re-education below the lesion, using key points to facilitate normal and selective patterns of movement is preferable. In other way if the neuronal regeneration is possible with respective function improve; the physiotherapy approach goals are to maintain muscular trofism and improve the recruitment of motor units using intensive techniques. In both, there is no scientific evidence to support the procedures, exists a lack of investigation and most of the research are methodologically poor. PMID:25976285
12 CFR 3.153 - Internal models approach (IMA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... Federal savings association's model uses a scenario methodology, the national bank or Federal savings... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Internal models approach (IMA). 3.153 Section 3... Assets for Equity Exposures § 3.153 Internal models approach (IMA). (a) General. A national bank...
Wavelet spectrum analysis approach to model validation of dynamic systems
NASA Astrophysics Data System (ADS)
Jiang, Xiaomo; Mahadevan, Sankaran
2011-02-01
Feature-based validation techniques for dynamic system models could be unreliable for nonlinear, stochastic, and transient dynamic behavior, where the time series is usually non-stationary. This paper presents a wavelet spectral analysis approach to validate a computational model for a dynamic system. Continuous wavelet transform is performed on the time series data for both model prediction and experimental observation using a Morlet wavelet function. The wavelet cross-spectrum is calculated for the two sets of data to construct a time-frequency phase difference map. The Box-plot, an exploratory data analysis technique, is applied to interpret the phase difference for validation purposes. In addition, wavelet time-frequency coherence is calculated using the locally and globally smoothed wavelet power spectra of the two data sets. Significance tests are performed to quantitatively verify whether the wavelet time-varying coherence is significant at a specific time and frequency point, considering uncertainties in both predicted and observed time series data. The proposed wavelet spectrum analysis approach is illustrated with a dynamics validation challenge problem developed at the Sandia National Laboratories. A comparison study is conducted to demonstrate the advantages of the proposed methodologies over classical frequency-independent cross-correlation analysis and time-independent cross-coherence analysis for the validation of dynamic systems.
Functionalized anatomical models for EM-neuron Interaction modeling.
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2016-06-21
The understanding of interactions between electromagnetic (EM) fields and nerves are crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understanding a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions. PMID:27224508
Functionalized anatomical models for EM-neuron Interaction modeling
NASA Astrophysics Data System (ADS)
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2016-06-01
The understanding of interactions between electromagnetic (EM) fields and nerves are crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understanding a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.
Mathematical Modelling Approach in Mathematics Education
ERIC Educational Resources Information Center
Arseven, Ayla
2015-01-01
The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "Modeling" topic is especially important for examinations such as PISA which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…
Annotation and retrieval system of CAD models based on functional semantics
NASA Astrophysics Data System (ADS)
Wang, Zhansong; Tian, Ling; Duan, Wenrui
2014-11-01
CAD model retrieval based on functional semantics is more significant than content-based 3D model retrieval during the mechanical conceptual design phase. However, relevant research is still not fully discussed. Therefore, a functional semantic-based CAD model annotation and retrieval method is proposed to support mechanical conceptual design and design reuse, inspire designer creativity through existing CAD models, shorten design cycle, and reduce costs. Firstly, the CAD model functional semantic ontology is constructed to formally represent the functional semantics of CAD models and describe the mechanical conceptual design space comprehensively and consistently. Secondly, an approach to represent CAD models as attributed adjacency graphs(AAG) is proposed. In this method, the geometry and topology data are extracted from STEP models. On the basis of AAG, the functional semantics of CAD models are annotated semi-automatically by matching CAD models that contain the partial features of which functional semantics have been annotated manually, thereby constructing CAD Model Repository that supports model retrieval based on functional semantics. Thirdly, a CAD model retrieval algorithm that supports multi-function extended retrieval is proposed to explore more potential creative design knowledge in the semantic level. Finally, a prototype system, called Functional Semantic-based CAD Model Annotation and Retrieval System(FSMARS), is implemented. A case demonstrates that FSMARS can successfully botain multiple potential CAD models that conform to the desired function. The proposed research addresses actual needs and presents a new way to acquire CAD models in the mechanical conceptual design phase.
Functional genomics of Plasmodium falciparum using metabolic modelling and analysis
Oppenheim, Rebecca D.; Soldati-Favre, Dominique; Hatzimanikatis, Vassily
2013-01-01
Plasmodium falciparum is an obligate intracellular parasite and the leading cause of severe malaria responsible for tremendous morbidity and mortality particularly in sub-Saharan Africa. Successful completion of the P. falciparum genome sequencing project in 2002 provided a comprehensive foundation for functional genomic studies on this pathogen in the following decade. Over this period, a large spectrum of experimental approaches has been deployed to improve and expand the scope of functionally annotated genes. Meanwhile, rapidly evolving methods of systems biology have also begun to contribute to a more global understanding of various aspects of the biology and pathogenesis of malaria. Herein we provide an overview on metabolic modelling, which has the capability to integrate information from functional genomics studies in P. falciparum and guide future malaria research efforts towards the identification of novel candidate drug targets. PMID:23793264
Rival approaches to mathematical modelling in immunology
NASA Astrophysics Data System (ADS)
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
Piecewise Linear Membership Function Generator-Divider Approach
NASA Technical Reports Server (NTRS)
Hart, Ron; Martinez, Gene; Yuan, Bo; Zrilic, Djuro; Ramirez, Jaime
1997-01-01
In this paper a simple, inexpensive, membership function circuit for fuzzy controllers is presented. The proposed circuit may be used to generate a general trapezoidal membership function. The slope and horizontal shift are fully programmable parameters.
Measuring neuronal branching patterns using model-based approach.
Luczak, Artur
2010-01-01
Neurons have complex branching systems which allow them to communicate with thousands of other neurons. Thus understanding neuronal geometry is clearly important for determining connectivity within the network and how this shapes neuronal function. One of the difficulties in uncovering relationships between neuronal shape and its function is the problem of quantifying complex neuronal geometry. Even by using multiple measures such as: dendritic length, distribution of segments, direction of branches, etc, a description of three dimensional neuronal embedding remains incomplete. To help alleviate this problem, here we propose a new measure, a shape diffusiveness index (SDI), to quantify spatial relations between branches at the local and global scale. It was shown that growth of neuronal trees can be modeled by using diffusion limited aggregation (DLA) process. By measuring "how easy" it is to reproduce the analyzed shape by using the DLA algorithm it can be measured how "diffusive" is that shape. Intuitively, "diffusiveness" measures how tree-like is a given shape. For example shapes like an oak tree will have high values of SDI. This measure is capturing an important feature of dendritic tree geometry, which is difficult to assess with other measures. This approach also presents a paradigm shift from well-defined deterministic measures to model-based measures, which estimate how well a model with specific properties can account for features of analyzed shape. PMID:21079752
Catch bonds: physical models and biological functions.
Zhu, Cheng; McEver, Rodger P
2005-09-01
Force can shorten the lifetimes of receptor-ligand bonds by accelerating their dissociation. Perhaps paradoxical at first glance, bond lifetimes can also be prolonged by force. This counterintuitive behavior was named catch bonds, which is in contrast to the ordinary slip bonds that describe the intuitive behavior of lifetimes being shortened by force. Fifteen years after their theoretical proposal, catch bonds have finally been observed. In this article we review recently published data that have demonstrated catch bonds in the selectin system and suggested catch bonds in other systems, the theoretical models for their explanations, and their function as a mechanism for flow-enhanced adhesion. PMID:16708472
Mathematical Modelling: A New Approach to Teaching Applied Mathematics.
ERIC Educational Resources Information Center
Burghes, D. N.; Borrie, M. S.
1979-01-01
Describes the advantages of mathematical modeling approach in teaching applied mathematics and gives many suggestions for suitable material which illustrates the links between real problems and mathematics. (GA)
Functional GI disorders: from animal models to drug development
Mayer, E A; Bradesi, S; Chang, L; Spiegel, B M R; Bueller, J A; Naliboff, B D
2014-01-01
Despite considerable efforts by academic researchers and by the pharmaceutical industry, the development of novel pharmacological treatments for irritable bowel syndrome (IBS) and other functional gastrointestinal (GI) disorders has been slow and disappointing. The traditional approach to identifying and evaluating novel drugs for these symptom-based syndromes has relied on a fairly standard algorithm using animal models, experimental medicine models and clinical trials. In the current article, the empirical basis for this process is reviewed, focusing on the utility of the assessment of visceral hypersensitivity and GI transit, in both animals and humans, as well as the predictive validity of preclinical and clinical models of IBS for identifying successful treatments for IBS symptoms and IBS-related quality of life impairment. A review of published evidence suggests that abdominal pain, defecation-related symptoms (urgency, straining) and psychological factors all contribute to overall symptom severity and to health-related quality of life. Correlations between readouts obtained in preclinical and clinical models and respective symptoms are small, and the ability to predict drug effectiveness for specific as well as for global IBS symptoms is limited. One possible drug development algorithm is proposed which focuses on pharmacological imaging approaches in both preclinical and clinical models, with decreased emphasis on evaluating compounds in symptom-related animal models, and more rapid screening of promising candidate compounds in man. PMID:17965064
Linear functional minimization for inverse modeling
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Wohlberg, B. E.; Vesselinov, V. V.; Tartakovsky, D. M.
2015-06-01
We present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
Katanin, A. A.
2015-06-15
We consider formulations of the functional renormalization-group (fRG) flow for correlated electronic systems with the dynamical mean-field theory as a starting point. We classify the corresponding renormalization-group schemes into those neglecting one-particle irreducible six-point vertices (with respect to the local Green’s functions) and neglecting one-particle reducible six-point vertices. The former class is represented by the recently introduced DMF{sup 2}RG approach [31], but also by the scale-dependent generalization of the one-particle irreducible representation (with respect to local Green’s functions, 1PI-LGF) of the generating functional [20]. The second class is represented by the fRG flow within the dual fermion approach [16, 32]. We compare formulations of the fRG approach in each of these cases and suggest their further application to study 2D systems within the Hubbard model.
Yu, Jue; Zhuang, Jian; Yu, Dehong
2015-01-01
This paper concerns a state feedback integral control using a Lyapunov function approach for a rotary direct drive servo valve (RDDV) while considering parameter uncertainties. Modeling of this RDDV servovalve reveals that its mechanical performance is deeply influenced by friction torques and flow torques; however, these torques are uncertain and mutable due to the nature of fluid flow. To eliminate load resistance and to achieve satisfactory position responses, this paper develops a state feedback control that integrates an integral action and a Lyapunov function. The integral action is introduced to address the nonzero steady-state error; in particular, the Lyapunov function is employed to improve control robustness by adjusting the varying parameters within their value ranges. This new controller also has the advantages of simple structure and ease of implementation. Simulation and experimental results demonstrate that the proposed controller can achieve higher control accuracy and stronger robustness. PMID:25234140
Electronic states on a fractal: Exact Green's-function renormalization approach
NASA Astrophysics Data System (ADS)
Andrade, R. F. S.; Schellnhuber, H. J.
1991-12-01
A nontrivial tight-binding model for electron dynamics on the fractal Koch curve is investigated within the framework of the Green's-function formalism. The key result is the construction of a multiple exact renormalization group that allows one to derive all the rather unusual properties of the model. This group is generated by four nonequivalent decimation operations, which define distinct transformation rules for the 48 relevant parameters to be renormalized. The calculation of the density of states confirms the crucial results that were obtained recently using transfer-matrix methods: local self-affinity, dense gap structure, and singular electronic levels with infinite degeneracy. This demonstrates that the Green's-function approach is not inferior to other techniques even in topologically one-dimensional situations.
A Unified Approach to Model-Based Planning and Execution
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)
2000-01-01
Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implication of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.
A featureless approach to 3D polyhedral building modeling from aerial images.
Hammoudi, Karim; Dornaika, Fadi
2011-01-01
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image rawbrightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images
Hammoudi, Karim; Dornaika, Fadi
2011-01-01
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image rawbrightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
Zebrafish models for the functional genomics of neurogenetic disorders.
Kabashi, Edor; Brustein, Edna; Champagne, Nathalie; Drapeau, Pierre
2011-03-01
In this review, we consider recent work using zebrafish to validate and study the functional consequences of mutations of human genes implicated in a broad range of degenerative and developmental disorders of the brain and spinal cord. Also we present technical considerations for those wishing to study their own genes of interest by taking advantage of this easily manipulated and clinically relevant model organism. Zebrafish permit mutational analyses of genetic function (gain or loss of function) and the rapid validation of human variants as pathological mutations. In particular, neural degeneration can be characterized at genetic, cellular, functional, and behavioral levels. Zebrafish have been used to knock down or express mutations in zebrafish homologs of human genes and to directly express human genes bearing mutations related to neurodegenerative disorders such as spinal muscular atrophy, ataxia, hereditary spastic paraplegia, amyotrophic lateral sclerosis (ALS), epilepsy, Huntington's disease, Parkinson's disease, fronto-temporal dementia, and Alzheimer's disease. More recently, we have been using zebrafish to validate mutations of synaptic genes discovered by large-scale genomic approaches in developmental disorders such as autism, schizophrenia, and non-syndromic mental retardation. Advances in zebrafish genetics such as multigenic analyses and chemical genetics now offer a unique potential for disease research. Thus, zebrafish hold much promise for advancing the functional genomics of human diseases, the understanding of the genetics and cell biology of degenerative and developmental disorders, and the discovery of therapeutics. This article is part of a Special Issue entitled Zebrafish Models of Neurological Diseases. PMID:20887784
Modeling the three-point correlation function
Marin, Felipe; Wechsler, Risa; Frieman, Joshua A.; Nichol, Robert; /Portsmouth U., ICG
2007-04-01
We present new theoretical predictions for the galaxy three-point correlation function (3PCF) using high-resolution dissipationless cosmological simulations of a flat {Lambda}CDM Universe which resolve galaxy-size halos and subhalos. We create realistic mock galaxy catalogs by assigning luminosities and colors to dark matter halos and subhalos, and we measure the reduced 3PCF as a function of luminosity and color in both real and redshift space. As galaxy luminosity and color are varied, we find small differences in the amplitude and shape dependence of the reduced 3PCF, at a level qualitatively consistent with recent measurements from the SDSS and 2dFGRS. We confirm that discrepancies between previous 3PCF measurements can be explained in part by differences in binning choices. We explore the degree to which a simple local bias model can fit the simulated 3PCF. The agreement between the model predictions and galaxy 3PCF measurements lends further credence to the straightforward association of galaxies with CDM halos and subhalos.
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-01-01
This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges. PMID:27534708
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach.
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-01-01
This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges. PMID:27534708
A modeling approach for compounds affecting body composition.
Gennemark, Peter; Jansson-Löfmark, Rasmus; Hyberg, Gina; Wigstrand, Maria; Kakol-Palm, Dorota; Håkansson, Pernilla; Hovdal, Daniel; Brodin, Peter; Fritsch-Fredin, Maria; Antonsson, Madeleine; Ploj, Karolina; Gabrielsson, Johan
2013-12-01
Body composition and body mass are pivotal clinical endpoints in studies of welfare diseases. We present a combined effort of established and new mathematical models based on rigorous monitoring of energy intake (EI) and body mass in mice. Specifically, we parameterize a mechanistic turnover model based on the law of energy conservation coupled to a drug mechanism model. Key model variables are fat-free mass (FFM) and fat mass (FM), governed by EI and energy expenditure (EE). An empirical Forbes curve relating FFM to FM was derived experimentally for female C57BL/6 mice. The Forbes curve differs from a previously reported curve for male C57BL/6 mice, and we thoroughly analyse how the choice of Forbes curve impacts model predictions. The drug mechanism function acts on EI or EE, or both. Drug mechanism parameters (two to three parameters) and system parameters (up to six free parameters) could be estimated with good precision (coefficients of variation typically <20 % and not greater than 40 % in our analyses). Model simulations were done to predict the EE and FM change at different drug provocations in mice. In addition, we simulated body mass and FM changes at different drug provocations using a similar model for man. Surprisingly, model simulations indicate that an increase in EI (e.g. 10 %) was more efficient than an equal lowering of EI. Also, the relative change in body mass and FM is greater in man than in mouse at the same relative change in either EI or EE. We acknowledge that this assumes the same drug mechanism impact across the two species. A set of recommendations regarding the Forbes curve, vehicle control groups, dual action on EI and loss, and translational aspects are discussed. This quantitative approach significantly improves data interpretation, disease system understanding, safety assessment and translation across species. PMID:24158456
Engelmann spruce site index models: a comparison of model functions and parameterizations.
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce - Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
A novel bone scaffold design approach based on shape function and all-hexahedral mesh refinement.
Cai, Shengyong; Xi, Juntong; Chua, Chee Kai
2012-01-01
Tissue engineering is the application of interdisciplinary knowledge in the building and repairing of tissues. Generally, an engineered tissue is a combination of living cells and a support structure called a scaffold. The scaffold provides support for bone-producing cells and can be used to heal or replace a defective bone. In this chapter, a novel bone scaffold design approach based on shape function and an all-hexahedral mesh refinement method is presented. Based on the shape function in the finite element method, an all-hexahedral mesh is used to design a porous bone scaffold. First, the individual pore based on the subdivided individual element is modeled; then, the Boolean operation union among the pores is used to generate the whole pore model of TE bone scaffold; finally, the bone scaffold which contains various irregular pores can be modeled by the Boolean operation difference between the solid model and the whole pore model. From the SEM images, the pore size distribution in the native bone is not randomly distributed and there are gradients for pore size distribution. Therefore, a control approach for pore size distribution in the bone scaffold based on the hexahedral mesh refinement is also proposed in this chapter. A well-defined pore size distribution can be achieved based on the fact that a hexahedral element size distribution can be obtained through an all-hexahedral mesh refinement and the pore morphology and size are under the control of the hexahedral element. The designed bone scaffold can be converted to a universal 3D file format (such as STL or STEP) which could be used for rapid prototyping (RP). Finally, 3D printing (Spectrum Z510), a type of RP system, is adopted to fabricate these bone scaffolds. The successfully fabricated scaffolds validate the novel computer-aided design approach in this research. PMID:22692603
Refining Pathways: A Model Comparison Approach
Moffa, Giusi; Erdmann, Gerrit; Voloshanenko, Oksana; Hundsrucker, Christian; Sadeh, Mohammad J.; Boutros, Michael; Spang, Rainer
2016-01-01
Cellular signalling pathways consolidate multiple molecular interactions into working models of signal propagation, amplification, and modulation. They are described and visualized as networks. Adjusting network topologies to experimental data is a key goal of systems biology. While network reconstruction algorithms like nested effects models are well established tools of computational biology, their data requirements can be prohibitive for their practical use. In this paper we suggest focussing on well defined aspects of a pathway and develop the computational tools to do so. We adapt the framework of nested effect models to focus on a specific aspect of activated Wnt signalling in HCT116 colon cancer cells: Does the activation of Wnt target genes depend on the secretion of Wnt ligands or do mutations in the signalling molecule β-catenin make this activation independent from them? We framed this question into two competing classes of models: Models that depend on Wnt ligands secretion versus those that do not. The model classes translate into restrictions of the pathways in the network topology. Wnt dependent models are more flexible than Wnt independent models. Bayes factors are the standard Bayesian tool to compare different models fairly on the data evidence. In our analysis, the Bayes factors depend on the number of potential Wnt signalling target genes included in the models. Stability analysis with respect to this number showed that the data strongly favours Wnt ligands dependent models for all realistic numbers of target genes. PMID:27248690
Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.
Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward
2015-04-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. PMID:25418584
The imprint of plants on ecosystem functioning: A data-driven approach
NASA Astrophysics Data System (ADS)
Musavi, Talie; Mahecha, Miguel D.; Migliavacca, Mirco; Reichstein, Markus; van de Weg, Martine Janet; van Bodegom, Peter M.; Bahn, Michael; Wirth, Christian; Reich, Peter B.; Schrodt, Franziska; Kattge, Jens
2015-12-01
Terrestrial ecosystems strongly determine the exchange of carbon, water and energy between the biosphere and atmosphere. These exchanges are influenced by environmental conditions (e.g., local meteorology, soils), but generally mediated by organisms. Often, mathematical descriptions of these processes are implemented in terrestrial biosphere models. Model implementations of this kind should be evaluated by empirical analyses of relationships between observed patterns of ecosystem functioning, vegetation structure, plant traits, and environmental conditions. However, the question of how to describe the imprint of plants on ecosystem functioning based on observations has not yet been systematically investigated. One approach might be to identify and quantify functional attributes or responsiveness of ecosystems (often very short-term in nature) that contribute to the long-term (i.e., annual but also seasonal or daily) metrics commonly in use. Here we define these patterns as "ecosystem functional properties", or EFPs. Such as the ecosystem capacity of carbon assimilation or the maximum light use efficiency of an ecosystem. While EFPs should be directly derivable from flux measurements at the ecosystem level, we posit that these inherently include the influence of specific plant traits and their local heterogeneity. We present different options of upscaling in situ measured plant traits to the ecosystem level (ecosystem vegetation properties - EVPs) and provide examples of empirical analyses on plants' imprint on ecosystem functioning by combining in situ measured plant traits and ecosystem flux measurements. Finally, we discuss how recent advances in remote sensing contribute to this framework.
Geomorphological Approach to Glacial and Snow Modeling applied to Hydrology
NASA Astrophysics Data System (ADS)
Gsell, P.; Le Moine, N.; Ribstein, P.
2012-12-01
Hydrological modeling of mountainous watershed has specific problems due to the effect of ice and snow cover in a certain range of altitude. The representation of the snow and ice storage dyanmics is a main issue for the understanding of mountainous hydrosystems mechanisms for future and past climate. That's also an operational concern for watersheds equipped with hydroelectric dams, whose dimensioning and electric capacity evaluation rely on a good understanding of ice-snow dynamics, in particular for a lapse of several years. The objective of the study is to get ahead, at a theoretical view, in a way in between classical representation used in hydrological models (infinity of ice stock) and 3D ice tongues modeling describing explicitly viscous glacier evolution at a river basin scale. Geomorphology will be used in this approach. Noticing that glaciers, at a catchment scale, take the drainage system as a geometrical framework, an axe of our study lies on the coupling of the probabilistic description of the river network with determinist glacier models using concepts that already have been used in hydrology modeling like Geomorphological Instantaneous Unitary Hydrogram. By analogy, a simplified glacier model (Shallow Ice Approximation or Minimal Glacier Models) will be put together as a transfer function to simulate large scale ablation and ice front dynamics. In our study, we analyze the distribution of upstream area for a dataset of 78 river basins in the Southern Rocky Mountains. In a certain range of scale and under a few assumptions, we use a statistic model for river networks description that we adapt by considering relief by linking hypsometry and morphology. The model developed P(A>a,z) allow us to identify any site of the river network from a DEM analysis via elevation z and upstream area a fields with the help of 2 parameters. The 3D consideration may be relevant for hydrologic implications as production function usually increases with relief. This model
A model-based multisensor data fusion knowledge management approach
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2014-06-01
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
Multidrug resistance ABC transporter structure predictions by homology modeling approaches.
Honorat, Mylène; Falson, Pierre; Terreux, Raphael; Di Pietro, Attilio; Dumontet, Charles; Payen, Léa
2011-03-01
Human multidrug resistance ABC transporters are ubiquitous membrane proteins responsible for the efflux of multiple, endogenous or exogenous, compounds out of the cells, and therefore they are involved in multi-drug resistance phenotype (MDR). They thus deeply impact the pharmacokinetic parameters and toxicity properties of drugs. A great pressure to develop inhibitors of these pumps is carried out, by either ligand-based drug design or (more ideally) structure-based drug design. In that goal, many biochemical studies have been carried out to characterize their transport functions, and many efforts have been spent to get high-resolution structures. Currently, beside the 3D-structures of bacterial ABC transporters Sav1866 and MsbA, only the mouse ABCB1 complete structure has been published at high-resolution, illustrating the tremendous difficulty in getting such information, taking into account that the human genome accounts for 48 ABC transporters encoding genes. Homology modeling is consequently a reasonable approach to overcome this obstacle. The present review describes, in the first part, the different approaches which have been published to set up human ABC pump 3D-homology models allowing the localization of binding sites for drug candidates, and the identification of critical residues therein. In a second part, the review proposes a more accurate strategy and practical keys to use such biological tools for initiating structure-based drug design. PMID:21470105
Traffic flow forecasting: Comparison of modeling approaches
Smith, B.L.; Demetsky, M.J.
1997-08-01
The capability to forecast traffic volume in an operational setting has been identified as a critical need for intelligent transportation systems (ITS). In particular, traffic volume forecasts will support proactive, dynamic traffic control. However, previous attempts to develop traffic volume forecasting models have met with limited success. This research effort focused on developing traffic volume forecasting models for two sites on Northern Virginia`s Capital Beltway. Four models were developed and tested for the freeway traffic flow forecasting problem, which is defined as estimating traffic flow 15 min into the future. They were the historical average, time-series, neural network, and nonparametric regression models. The nonparametric regression model significantly outperformed the other models. A Wilcoxon signed-rank test revealed that the nonparametric regression model experienced significantly lower errors than the other models. In addition, the nonparametric regression model was easy to implement, and proved to be portable, performing well at two distinct sites. Based on its success, research is ongoing to refine the nonparametric regression model and to extend it to produce multiple interval forecasts.
Development on electromagnetic impedance function modeling and its estimation
Sutarno, D.
2015-09-30
Today the Electromagnetic methods such as magnetotellurics (MT) and controlled sources audio MT (CSAMT) is used in a broad variety of applications. Its usefulness in poor seismic areas and its negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, there are obviously important and difficult problems remaining to be solved concerning our ability to collect process and interpret MT as well as CSAMT in complex 3D structural environments. This talk aim at reviewing and discussing the recent development on MT as well as CSAMT impedance functions modeling, and also some improvements on estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical method for computing the impedance functions of three dimensionally (3-D) earth resistivity models. On that reason, 3-D finite elements numerical modeling for the impedances is developed based on edge element method. Whereas, in the CSAMT case, the efforts were focused to accomplish the non-plane wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, researches were focused on improving quality of the estimates. On that objective, non-linear regression approach based on the robust M-estimators and the Hilbert transform operating on the causal transfer functions, were used to dealing with outliers (abnormal data) which are frequently superimposed on a normal ambient MT as well as CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three dimensional resistivity models. Whilst, the full solution based modeling that accommodate the non-plane wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition
Development on electromagnetic impedance function modeling and its estimation
NASA Astrophysics Data System (ADS)
Sutarno, D.
2015-09-01
Today the Electromagnetic methods such as magnetotellurics (MT) and controlled sources audio MT (CSAMT) is used in a broad variety of applications. Its usefulness in poor seismic areas and its negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, there are obviously important and difficult problems remaining to be solved concerning our ability to collect process and interpret MT as well as CSAMT in complex 3D structural environments. This talk aim at reviewing and discussing the recent development on MT as well as CSAMT impedance functions modeling, and also some improvements on estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical method for computing the impedance functions of three dimensionally (3-D) earth resistivity models. On that reason, 3-D finite elements numerical modeling for the impedances is developed based on edge element method. Whereas, in the CSAMT case, the efforts were focused to accomplish the non-plane wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, researches were focused on improving quality of the estimates. On that objective, non-linear regression approach based on the robust M-estimators and the Hilbert transform operating on the causal transfer functions, were used to dealing with outliers (abnormal data) which are frequently superimposed on a normal ambient MT as well as CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three dimensional resistivity models. Whilst, the full solution based modeling that accommodate the non-plane wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition
Early Intervention Team Approaches: The Transdisciplinary Model.
ERIC Educational Resources Information Center
Woodruff, Geneva; McGonigel, Mary J.
Part of a volume which explores current issues in service delivery to infants and toddlers (ages birth to 3) with handicapping conditions, this chapter defines the team concept as it relates to the field of early intervention and describes three approaches (multidisciplinary, interdisciplinary, and transdisciplinary) commonly used to organize…
Integrating Person- and Function-Centered Approaches in Career Development Theory and Research.
ERIC Educational Resources Information Center
Vondracek, Fred W.; Porfeli, Erik
2002-01-01
Compares life-span approaches (function and variable centered) and life-course approaches (person centered and holistic) in the study of career development. Advocates their integration in developmental theory and research. (Contains 69 references.) (SK)
An algebraic approach to the Hubbard model
NASA Astrophysics Data System (ADS)
de Leeuw, Marius; Regelskis, Vidas
2016-02-01
We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su (2 | 2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.
The TETRAD Approach to Model Respecification.
Ting, K F
1998-01-01
The TETRAD project revives the tetrad analysis developed almost a century ago. Vanishing tetrads are overidentifying restrictions implied by the structure of a model. As such, it is possible to examine a model empirically by these constraints. Scheines, Spirtes, Glymour, Meek, & Richardson (1998) advocate using vanishing tetrads as a tool for automatic model searches. Despite the search algorithm proving to be superior to those from LISREL and EQS in an earlier report, it is argued that TETRAD II, the search program, is still a datamining procedure. It is important that substantive justifications should be given before, not after, a model is selected. This is impossible with any type of automatic, procedure for specification search. Researchers should take an active role in formulating alternative ' models rather than looking for a quick fix. Finally, the tetrad test developed by Bollen and Ting (1993) is discussed with its application for testing competing models or their components formulated in I advance. PMID:26771758
Modeling population dynamics: A quantile approach.
Chavas, Jean-Paul
2015-04-01
The paper investigates the modeling of population dynamics, both conceptually and empirically. It presents a reduced form representation that provides a flexible characterization of population dynamics. It leads to the specification of a threshold quantile autoregression (TQAR) model, which captures nonlinear dynamics by allowing lag effects to vary across quantiles of the distribution as well as with previous population levels. The usefulness of the model is illustrated in an application to the dynamics of lynx population. We find statistical evidence that the quantile autoregression parameters vary across quantiles (thus rejecting the AR model as well as the TAR model) as well as with past populations (thus rejecting the quantile autoregression QAR model). The results document the nature of dynamics and cycle in the lynx population over time. They show how both the period of the cycle and the speed of population adjustment vary with population level and environmental conditions. PMID:25661501
Bayesian approach for network modeling of brain structural features
NASA Astrophysics Data System (ADS)
Joshi, Anand A.; Joshi, Shantanu H.; Leahy, Richard M.; Shattuck, David W.; Dinov, Ivo; Toga, Arthur W.
2010-03-01
Brain connectivity patterns are useful in understanding brain function and organization. Anatomical brain connectivity is largely determined using the physical synaptic connections between neurons. In contrast statistical brain connectivity in a given brain population refers to the interaction and interdependencies of statistics of multitudes of brain features including cortical area, volume, thickness etc. Traditionally, this dependence has been studied by statistical correlations of cortical features. In this paper, we propose the use of Bayesian network modeling for inferring statistical brain connectivity patterns that relate to causal (directed) as well as non-causal (undirected) relationships between cortical surface areas. We argue that for multivariate cortical data, the Bayesian model provides for a more accurate representation by removing the effect of confounding correlations that get introduced due to canonical dependence between the data. Results are presented for a population of 466 brains, where a SEM (structural equation modeling) approach is used to generate a Bayesian network model, as well as a dependency graph for the joint distribution of cortical areas.
A Prediction Model of the Capillary Pressure J-Function.
Xu, W S; Luo, P Y; Sun, L; Lin, N
2016-01-01
The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on capillary pressure model, and the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701
Consumer preference models: fuzzy theory approach
NASA Astrophysics Data System (ADS)
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
Serpentinization reaction pathways: implications for modeling approach
Janecky, D.R.
1986-01-01
Experimental seawater-peridotite reaction pathways to form serpentinites at 300/sup 0/C, 500 bars, can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.
The Motivational Function of Private Speech: An Experimental Approach.
ERIC Educational Resources Information Center
de Dios, M. J.; Montero, I.
Recently, some works have been published exploring the role of private speech as a tool for motivation, reaching beyond the classical research on its regulatory function for cognitive processes such as attention or executive function. In fact, the authors' own previous research has shown that a moderate account of spontaneous private speech of…
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer s needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product s functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product s design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM however is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: At what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High level and more detailed functional descriptions are derived for each failed component based
Gyrokinetic modeling: A multi-water-bag approach
Morel, P.; Gravier, E.; Besse, N.; Klein, R.; Ghizzo, A.; Bertrand, P.; Garbet, X.; Ghendrih, P.; Grandgirard, V.; Sarazin, Y.
2007-11-15
Predicting turbulent transport in nearly collisionless fusion plasmas requires one to solve kinetic (or, more precisely, gyrokinetic) equations. In spite of considerable progress, several pending issues remain; although more accurate, the kinetic calculation of turbulent transport is much more demanding in computer resources than fluid simulations. An alternative approach is based on a water-bag representation of the distribution function that is not an approximation but rather a special class of initial conditions, allowing one to reduce the full kinetic Vlasov equation into a set of hydrodynamic equations while keeping its kinetic character. The main result for the water-bag model is a lower cost in the parallel velocity direction since no differential operator associated with some approximate numerical scheme has to be carried out on this variable v{sub parallel}. Indeed, a small bag number is sufficient to correctly describe the ion temperature gradient instability.
A Mixed Approach for Modeling Blood Flow in Brain Microcirculation
NASA Astrophysics Data System (ADS)
Peyrounette, M.; Sylvie, L.; Davit, Y.; Quintard, M.
2014-12-01
We have previously demonstrated [1] that the vascular system of the healthy human brain cortex is a superposition of two structural components, each corresponding to a different spatial scale. At small-scale, the vascular network has a capillary structure, which is homogeneous and space-filling over a cut-off length. At larger scale, veins and arteries conform to a quasi-fractal branched structure. This structural duality is consistent with the functional duality of the vasculature, i.e. distribution and exchange. From a modeling perspective, this can be viewed as the superposition of: (a) a continuum model describing slow transport in the small-scale capillary network, characterized by a representative elementary volume and effective properties; and (b) a discrete network approach [2] describing fast transport in the arterial and venous network, which cannot be homogenized because of its fractal nature. This problematic is analogous to modeling problems encountered in geological media, e.g, in petroleum engineering, where fast conducting channels (wells or fractures) are embedded in a porous medium (reservoir rock). An efficient method to reduce the computational cost of fractures/continuum simulations is to use relatively large grid blocks for the continuum model. However, this also makes it difficult to accurately couple both structural components. In this work, we solve this issue by adapting the "well model" concept used in petroleum engineering [3] to brain specific 3-D situations. We obtain a unique linear system of equations describing the discrete network, the continuum and the well model coupling. Results are presented for realistic geometries and compared with a non-homogenized small-scale network model of an idealized periodic capillary network of known permeability. [1] Lorthois & Cassot, J. Theor. Biol. 262, 614-633, 2010. [2] Lorthois et al., Neuroimage 54 : 1031-1042, 2011. [3] Peaceman, SPE J. 18, 183-194, 1978.
Recent approaches in physical modification of protein functionality.
Mirmoghtadaie, Leila; Shojaee Aliabadi, Saeedeh; Hosseini, Seyede Marzieh
2016-05-15
Today, there is a growing demand for novel technologies, such as high hydrostatic pressure, irradiation, ultrasound, filtration, supercritical carbon dioxide, plasma technology, and electrical methods, which are not based on chemicals or heat treatment for modifying ingredient functionality and extending product shelf life. Proteins are essential components in many food processes, and provide various functions in food quality and stability. They can create interfacial films that stabilize emulsions and foams as well as interact to make networks that play key roles in gel and edible film production. These properties of protein are referred to as 'protein functionality', because they can be modified by different processing. The common protein modification (chemical, enzymatic and physical) methods have strong effects on the structure and functionality of food proteins. Furthermore, novel technologies can modify protein structure and functional properties that will be reviewed in this study. PMID:26776016
12 CFR 324.153 - Internal models approach (IMA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... FDIC-supervised institution's model uses a scenario methodology, the FDIC-supervised institution must... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Internal models approach (IMA). 324.153 Section... Advanced Measurement Approaches Risk-Weighted Assets for Equity Exposures § 324.153 Internal...
Students' Approaches to Learning a New Mathematical Model
ERIC Educational Resources Information Center
Flegg, Jennifer A.; Mallet, Daniel G.; Lupton, Mandy
2013-01-01
In this article, we report on the findings of an exploratory study into the experience of undergraduate students as they learn new mathematical models. Qualitative and quantitative data based around the students' approaches to learning new mathematical models were collected. The data revealed that students actively adopt three approaches to…
A new multi-objective approach to finite element model updating
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Cho, Soojin; Jung, Hyung-Jo; Lee, Jong-Jae; Yun, Chung-Bang
2014-05-01
The single objective function (SOF) has been employed for the optimization process in the conventional finite element (FE) model updating. The SOF balances the residual of multiple properties (e.g., modal properties) using weighting factors, but the weighting factors are hard to determine before the run of model updating. Therefore, the trial-and-error strategy is taken to find the most preferred model among alternative updated models resulted from varying weighting factors. In this study, a new approach to the FE model updating using the multi-objective function (MOF) is proposed to get the most preferred model in a single run of updating without trial-and-error. For the optimization using the MOF, non-dominated sorting genetic algorithm-II (NSGA-II) is employed to find the Pareto optimal front. The bend angle related to the trade-off relationship of objective functions is used to select the most preferred model among the solutions on the Pareto optimal front. To validate the proposed approach, a highway bridge is selected as a test-bed and the modal properties of the bridge are obtained from the ambient vibration test. The initial FE model of the bridge is built using SAP2000. The model is updated using the identified modal properties by the SOF approach with varying the weighting factors and the proposed MOF approach. The most preferred model is selected using the bend angle of the Pareto optimal front, and compared with the results from the SOF approach using varying the weighting factors. The comparison shows that the proposed MOF approach is superior to the SOF approach using varying the weighting factors in getting smaller objective function values, estimating better updated parameters, and taking less computational time.
A systematic approach to a self-generating fuzzy rule-table for function approximation.
Pomares, H; Rojas, I; Ortega, J; Gonzalez, J; Prieto, A
2000-01-01
In this paper, a systematic design is proposed to determine fuzzy system structure and learning its parameters, from a set of given training examples. In particular, two fundamental problems concerning fuzzy system modeling are addressed: 1) fuzzy rule parameter optimization and 2) the identification of system structure (i.e., the number of membership functions and fuzzy rules). A four-step approach to build a fuzzy system automatically is presented: Step 1 directly obtains the optimum fuzzy rules for a given membership function configuration. Step 2 optimizes the allocation of the membership functions and the conclusion of the rules, in order to achieve a better approximation. Step 3 determines a new and more suitable topology with the information derived from the approximation error distribution; it decides which variables should increase the number of membership functions. Finally, Step 4 determines which structure should be selected to approximate the function, from the possible configurations provided by the algorithm in the three previous steps. The results of applying this method to the problem of function approximation are presented and then compared with other methodologies proposed in the bibliography. PMID:18252375
Teacher Consultation Model: An Operant Approach
ERIC Educational Resources Information Center
Halfacre, John; Welch, Frances
1973-01-01
This article describes a model for changing teacher behavior in dealing with problem students. The model reflects the incorporation of learning theory techniques (pinpointing behavior, reinforcement, shaping, etc.). A step-by-step account of how a psychologist deals with a teacher concerned about a boy's cursing is given. The teacher is encouraged…
Quantum Supersymmetric Models in the Causal Approach
NASA Astrophysics Data System (ADS)
Grigore, Dan-Radu
2007-04-01
We consider the massless supersymmetric vector multiplet in a purely quantum framework. First order gauge invariance determines uniquely the interaction Lagrangian as in the case of Yang-Mills models. Going to the second order of perturbation theory produces an anomaly which cannot be eliminated. We make the analysis of the model working only with the component fields.
COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES
River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...
NASA Astrophysics Data System (ADS)
Dries, M.; Trager, S. C.; Koopmans, L. V. E.
2016-08-01
Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov Chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age, and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities, and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias on the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well-suited for this.
Linking geophysics and soil function modelling - two examples
NASA Astrophysics Data System (ADS)
Krüger, J.; Franko, U.; Werban, U.; Dietrich, P.; Behrens, T.; Schmidt, K.; Fank, J.; Kroulik, M.
2011-12-01
potential hot spots where local adaptations of agricultural management would be required to improve soil functions. Example B realizes a soil function modelling with an adapted model parameterization based on data of ground penetration radar (GPR). This work shows an approach to handle heterogeneity of soil properties with geophysical data used for modelling. The field site in Austria is characterised by highly heterogenic soil with fluvioglacial gravel sediments. The variation of thickness of topsoil above a sandy subsoil with gravels strongly influences the soil water balance. GPR detected exact soil horizon depth between topsoil and subsoil. The extension of the input data improves the model performance of CANDY PLUS for plant biomass production. Both examples demonstrate how geophysics provide a surplus of data for agroecosystem modelling which identifies and contributes alternative options for agricultural management decisions.
Hubbard Model Approach to X-ray Spectroscopy
NASA Astrophysics Data System (ADS)
Ahmed, Towfiq
We have implemented a Hubbard model based first-principles approach for real-space calculations of x-ray spectroscopy, which allows one to study excited state electronic structure of correlated systems. Theoretical understanding of many electronic features in d and f electron systems remains beyond the scope of conventional density functional theory (DFT). In this work our main effort is to go beyond the local density approximation (LDA) by incorporating the Hubbard model within the real-space multiple-scattering Green's function (RSGF) formalism. Historically, the first theoretical description of correlated systems was published by Sir Neville Mott and others in 1937. They realized that the insulating gap and antiferromagnetism in the transition metal oxides are mainly caused by the strong on-site Coulomb interaction of the localized unfilled 3d orbitals. Even with the recent progress of first principles methods (e.g. DFT) and model Hamiltonian approaches (e.g., Hubbard-Anderson model), the electronic description of many of these systems remains a non-trivial combination of both. X-ray absorption near edge spectra (XANES) and x-ray emission spectra (XES) are very powerful spectroscopic probes for many electronic features near Fermi energy (EF), which are caused by the on-site Coulomb interaction of localized electrons. In this work we focus on three different cases of many-body effects due to the interaction of localized d electrons. Here, for the first time, we have applied the Hubbard model in the real-space multiple scattering (RSGF) formalism for the calculation of x-ray spectra of Mott insulators (e.g., NiO and MnO). Secondly, we have implemented in our RSGF approach a doping dependent self-energy that was constructed from a single-band Hubbard model for the over doped high-T c cuprate La2-xSrxCuO4. Finally our RSGF calculation of XANES is calculated with the spectral function from Lee and Hedin's charge transfer satellite model. For all these cases our
A simple approach to modeling ductile failure.
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro
2016-08-15
Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. PMID:27236085
Scaling functions in the square Ising model
NASA Astrophysics Data System (ADS)
Hassani, S.; Maillard, J.-M.
2015-03-01
We show and give the linear differential operators Lqscal of order q={{n}2}/4+n+7/8+{{(-1)}n}/8, for the integrals {{I}n}(r) which appear in the two-point correlation scaling function of Ising model \\{{F}+/- }(r)={{lim }scaling}M+/- -2 \\lt {{σ }0,0} {{σ }M,N}\\gt ={{\\sum }n}{{I}n}(r). The integrals {{I}n}(r) are given in expansion around r=0 in the basis of the formal solutions of Lqscal with transcendental combination coefficients. We find that the expression {{r}1/4}exp ({{r}2}/8) is a solution of the Painlevé VI equation in the scaling limit. Combinations of the (analytic at r=0) solutions of Lqscal sum to exp ({{r}2}/8). We show that the expression {{r}1/4}exp ({{r}2}/8) is the scaling limit of the correlation function C(N,N) and C(N,N+1). The differential Galois groups of the factors occurring in the operators Lqscal are given.
Models of protocellular structures, functions and evolution
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; New, Michael H.; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
The central step in the origin of life was the emergence of organized structures from organic molecules available on the early earth. These predecessors to modern cells, called 'proto-cells,' were simple, membrane bounded structures able to maintain themselves, grow, divide, and evolve. Since there is no fossil record of these earliest of life forms, it is a scientific challenge to discover plausible mechanisms for how these entities formed and functioned. To meet this challenge, it is essential to create laboratory models of protocells that capture the main attributes associated with living systems, while remaining consistent with known, or inferred, protobiological conditions. This report provides an overview of a project which has focused on protocellular metabolism and the coupling of metabolism to energy transduction. We have assumed that the emergence of systems endowed with genomes and capable of Darwinian evolution was preceded by a pre-genomic phase, in which protocells functioned and evolved using mostly proteins, without self-replicating nucleic acids such as RNA.
An Odds Ratio Approach for Detecting DDF under the Nested Logit Modeling Framework
ERIC Educational Resources Information Center
Terzi, Ragip; Suh, Youngsuk
2015-01-01
An odds ratio approach (ORA) under the framework of a nested logit model was proposed for evaluating differential distractor functioning (DDF) in multiple-choice items and was compared with an existing ORA developed under the nominal response model. The performances of the two ORAs for detecting DDF were investigated through an extensive…
APPROACHES TO LUNG FUNCTION ASSESSMENT IN SMALL MAMMALS
The review chapter of pulmonary function assessment in small mammals first discusses basic principles and methods such as assessment of various pressures, volumes and flows. The three types of plethysmographs (pressure, flow and barometric) used by animal physiologists are evalua...
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
A Functional Genomics Approach to Tanshinone Biosynthesis Provides Stereochemical Insights
2009-01-01
Tanshinones are abietane-type norditerpenoid quinone natural products that are the bioactive components of the Chinese medicinal herb Salvia miltiorrhiza Bunge. The initial results from a functional genomics-based investigation of tanshinone biosynthesis, specifically the functional identification of the relevant diterpene synthases from S. miltiorrhiza, are reported. The cyclohexa-1,4-diene arrangement of the distal ring poises the resulting miltiradiene for the ensuing aromatization and hydroxylation to ferruginol suggested for tanshinone biosynthesis. PMID:19905026
A functional genomics approach to tanshinone biosynthesis provides stereochemical insights.
Gao, Wei; Hillwig, Matthew L; Huang, Luqi; Cui, Guanghong; Wang, Xueyong; Kong, Jianqiang; Yang, Bin; Peters, Reuben J
2009-11-19
Tanshinones are abietane-type norditerpenoid quinone natural products that are the bioactive components of the Chinese medicinal herb Salvia miltiorrhiza Bunge. The initial results from a functional genomics-based investigation of tanshinone biosynthesis, specifically the functional identification of the relevant diterpene synthases from S. miltiorrhiza, are reported. The cyclohexa-1,4-diene arrangement of the distal ring poises the resulting miltiradiene for the ensuing aromatization and hydroxylation to ferruginol suggested for tanshinone biosynthesis. PMID:19905026
An improved approach for tank purge modeling
NASA Astrophysics Data System (ADS)
Roth, Jacob R.; Chintalapati, Sunil; Gutierrez, Hector M.; Kirk, Daniel R.
2013-05-01
Many launch support processes use helium gas to purge rocket propellant tanks and fill lines to rid them of hazardous contaminants. As an example, the purge of the Space Shuttle's External Tank used approximately 1,100 kg of helium. With the rising cost of helium, initiatives are underway to examine methods to reduce helium consumption. Current helium purge processes have not been optimized using physics-based models, but rather use historical 'rules of thumb'. To develop a more accurate and useful model of the tank purge process, computational fluid dynamics simulations of several tank configurations were completed and used as the basis for the development of an algebraic model of the purge process. The computationally efficient algebraic model of the purge process compares well with a detailed transient, three-dimensional computational fluid dynamics (CFD) simulation as well as with experimental data from two external tank purges.
Runoff-rainfall (sic!) modelling: Comparing two different approaches
NASA Astrophysics Data System (ADS)
Herrnegger, Mathew; Schulz, Karsten
2015-04-01
rainfall estimates from the two models. Here, time series from a station observation in the proximity of the catchment and the independent INCA rainfall analysis of Austrian Central Institute for Meteorology and Geodynamics (ZAMG, Haiden et al., 2011) are used. References: Adamovic, M., Braud, I., Branger, F., and Kirchner, J. W. (2014). Does the simple dynamical systems approach provide useful information about catchment hydrological functioning in a Mediterranean context? Application to the Ardèche catchment (France), Hydrol. Earth Syst. Sci. Discuss., 11, 10725-10786. Haiden, T., Kann, A., Wittman, C., Pistotnik, G., Bica, B., and Gruber, C. (2011). The Integrated Nowcasting through Comprehensive Analysis (INCA) system and its validation over the Eastern Alpine region. Wea. Forecasting 26, 166-183, doi: 10.1175/2010WAF2222451.1. Herrnegger, M., Nachtnebel, H.P., and Schulz, K. (2014). From runoff to rainfall: inverse rainfall-runoff modelling in a high temporal resolution, Hydrol. Earth Syst. Sci. Discuss., 11, 13259-13309. Kirchner, J. W. (2009). Catchments as simple dynamical systems: catchment characterization, rainfall-runoff modeling, and doing hydrology backward. Water Resour .Res., 45, W02429. Krier, R., Matgen, P., Goergen, K., Pfister, L., Hoffmann, L., Kirchner, J. W., Uhlenbrook, S., and Savenije, H.H.G. (2012). Inferring catchment precipitation by doing hydrology backward: A test in 24 small and mesoscale catchments in Luxembourg, Water Resour. Res., 48, W10525.
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. In rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Logan, Deirdre E.; Carpino, Elizabeth A.; Chiang, Gloria; Condon, Marianne; Firn, Emily; Gaughan, Veronica J.; Hogan, Melinda, P.T.; Leslie, David S.; Olson, Katie, P.T.; Sager, Susan; Sethna, Navil; Simons, Laura E.; Zurakowski, David; Berde, Charles B.
2013-01-01
Objectives To examine clinical outcomes of an interdisciplinary day hospital treatment program (comprised of physical, occupational, and cognitive-behavioral therapies with medical and nursing services) for pediatric complex regional pain syndrome (CRPS). Methods The study is a longitudinal case series of consecutive patients treated in a day hospital pediatric pain rehabilitation program. Participants were 56 children and adolescents ages 8–18 years (median = 14 years) with CRPS spectrum conditions who failed to progress sufficiently with a previous outpatient and/or inpatient treatments. Patients participated in daily physical therapy, occupational therapy and psychological treatment and received nursing and medical care as necessary. The model places equal emphasis on physical and cognitive-behavioral approaches to pain management. Median duration of stay was 3 weeks. Outcome measures included assessments of physical, occupational, and psychological functioning at program admission, discharge, and at post-treatment follow-up at a median of 10 months post-discharge. Scores at discharge and follow-up were compared with measures on admission by Wilcoxon tests, paired t tests, or ANOVA as appropriate, with corrections for multiple comparisons. Results Outcomes demonstrate clinically and statistically significant improvements from admission to discharge in pain intensity (p<0.001), functional disability (p<0.001), subjective report of limb function (p<0.001), timed running (p<0.001) occupational performance (p<0.001), medication use (p<0.01), use of assistive devices (p<0.001), and emotional functioning (anxiety, p<0.001; depression, p<0.01). Functional gains were maintained or further improved at follow-up. Discussion A day-hospital interdisciplinary rehabilitation approach appears effective in reducing disability and improving physical and emotional functioning and occupational performance among children and adolescents with complex regional pain syndromes that
A reversible functional sensory neuropathy model.
Danigo, Aurore; Magy, Laurent; Richard, Laurence; Sturtz, Franck; Funalot, Benoît; Demiot, Claire
2014-06-13
Small-fiber neuropathy was induced in young adult mice by intraperitoneal injection of resiniferatoxin (RTX), a TRPV1 agonist. At day 7, RTX induced significant thermal and mechanical hypoalgesia. At day 28, mechanical and thermal nociception were restored. No nerve degeneration in skin was observed and unmyelinated nerve fiber morphology and density in sciatic nerve were unchanged. At day 7, substance P (SP) was largely depleted in dorsal root ganglia (DRG) neurons, although calcitonin gene-related peptide (CGRP) was only moderately depleted. Three weeks after, SP and CGRP expression was restored in DRG neurons. At the same time, CGRP expression remained low in intraepidermal nerve fibers (IENFs) whereas SP expression had improved. In summary, RTX induced in our model a transient neuropeptide depletion in sensory neurons without nerve degeneration. We think this model is valuable as it brings the opportunity to study functional nerve changes in the very early phase of small fiber neuropathy. Moreover, it may represent a useful tool to study the mechanisms of action of therapeutic strategies to prevent sensory neuropathy of various origins. PMID:24792390
Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.
Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang
2014-01-01
Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to develop the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrated example is cited to demonstrate the application and performance of the proposed approach. PMID:25097884
NASA Astrophysics Data System (ADS)
Das, Priyanka; Ahmad, Zeeshan; Singh, P. N.; Prasad, Ashutosh
2011-11-01
The present work makes use of experimental data for real part of microwave complex permittivity of spring oats (Avena sativa L.) at 2.45 GHz and 24 °C as a function of moisture content, as extracted from the literature. These permittivity data were individually converted to those for solid materials using seven independent mixture equations for effective permittivity of random media. Moisture dependent quadratic models for complex permittivity of spring oats (Avena sativa L.), as developed by the present group, were used to evaluate the dielectric loss factor of spring oats kernels. Using these data, seven density—independent permittivity functions were evaluated and plotted as a function of moisture content of the samples. Second and third order polynomial regression equations were used for curve fittings with these data and their performances are reported. Coefficients of determination (r2) approaching unity (˜ 0.95-0.9999) and very small Standard Deviation (SD) ˜0.001-8.87 show good acceptability for these models. The regularity in the nature of these variations revealed the usefulness of the density—independent permittivity functions as indicators/calibrators of moisture content of spring oats kernels. Keeping in view the fact that moisture content of grains and seeds is an important factor determining quality and affecting the storage, transportation, and milling of grains and seeds, the work has the potentiality of its practical applications.
Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment
Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang
2014-01-01
Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to develop the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrated example is cited to demonstrate the application and performance of the proposed approach. PMID:25097884
A modelling approach towards epidermal homoeostasis control.
Schaller, Gernot; Meyer-Hermann, Michael
2007-08-01
In order to grasp the features arising from cellular discreteness and individuality, in large parts of cell tissue modelling agent-based models are favoured. The subclass of off-lattice models allows for a physical motivation of the intercellular interaction rules. We apply an improved version of a previously introduced off-lattice agent-based model to the steady-state flow equilibrium of skin. The dynamics of cells is determined by conservative and drag forces, supplemented with delta-correlated random forces. Cellular adjacency is detected by a weighted Delaunay triangulation. The cell cycle time of keratinocytes is controlled by a diffusible substance provided by the dermis. Its concentration is calculated from a diffusion equation with time-dependent boundary conditions and varying diffusion coefficients. The dynamics of a nutrient is also taken into account by a reaction-diffusion equation. It turns out that the analysed control mechanism suffices to explain several characteristics of epidermal homoeostasis formation. In addition, we examine the question of how in silico melanoma with decreased basal adhesion manage to persist within the steady-state flow equilibrium of the skin. Interestingly, even for melanocyte cell cycle times being substantially shorter than for keratinocytes, tiny stochastic effects can lead to completely different outcomes. The results demonstrate that the understanding of initial states of tumour growth can profit significantly from the application of off-lattice agent-based models in computer simulations. PMID:17466340
A mixed basis density functional approach for one-dimensional systems with B-splines
NASA Astrophysics Data System (ADS)
Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung
2016-05-01
A mixed basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions here are taken to be the localized B-splines for the two finite non-periodic dimensions and the plane waves for the third periodic direction. This approach will significantly reduce the number of the basis and therefore is computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient in two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not associated with atomic positions when the geometry structure is optimized, making the geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between the charged defect and its repeated images by the supercell approach for charged systems can also be avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed basis scheme will be derived. To test the present method, we apply it to study the infinite carbon-dimer chain, graphene nanoribbon, carbon nanotube and positively-charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.
Shen, Hua; McHale, Cliona M.; Smith, Martyn T; Zhang, Luoping
2015-01-01
Characterizing variability in the extent and nature of responses to environmental exposures is a critical aspect of human health risk assessment. Chemical toxicants act by many different mechanisms, however, and the genes involved in adverse outcome pathways (AOPs) and AOP networks are not yet characterized. Functional genomic approaches can reveal both toxicity pathways and susceptibility genes, through knockdown or knockout of all non-essential genes in a cell of interest, and identification of genes associated with a toxicity phenotype following toxicant exposure. Screening approaches in yeast and human near-haploid leukemic KBM7 cells, have identified roles for genes and pathways involved in response to many toxicants but are limited by partial homology among yeast and human genes and limited relevance to normal diploid cells. RNA interference (RNAi) suppresses mRNA expression level but is limited by off-target effects (OTEs) and incomplete knockdown. The recently developed gene editing approach called clustered regularly interspaced short palindrome repeats-associated nuclease (CRISPR)-Cas9, can precisely knock-out most regions of the genome at the DNA level with fewer OTEs than RNAi, in multiple human cell types, thus overcoming the limitations of the other approaches. It has been used to identify genes involved in the response to chemical and microbial toxicants in several human cell types and could readily be extended to the systematic screening of large numbers of environmental chemicals. CRISPR-Cas9 can also repress and activate gene expression, including that of non-coding RNA, with near-saturation, thus offering the potential to more fully characterize AOPs and AOP networks. Finally, CRISPR-Cas9 can generate complex animal models in which to conduct preclinical toxicity testing at the level of individual genotypes or haplotypes. Therefore, CRISPR-Cas9 is a powerful and flexible functional genomic screening approach that can be harnessed to provide
Shen, Hua; McHale, Cliona M; Smith, Martyn T; Zhang, Luoping
2015-01-01
Characterizing variability in the extent and nature of responses to environmental exposures is a critical aspect of human health risk assessment. Chemical toxicants act by many different mechanisms, however, and the genes involved in adverse outcome pathways (AOPs) and AOP networks are not yet characterized. Functional genomic approaches can reveal both toxicity pathways and susceptibility genes, through knockdown or knockout of all non-essential genes in a cell of interest, and identification of genes associated with a toxicity phenotype following toxicant exposure. Screening approaches in yeast and human near-haploid leukemic KBM7 cells have identified roles for genes and pathways involved in response to many toxicants but are limited by partial homology among yeast and human genes and limited relevance to normal diploid cells. RNA interference (RNAi) suppresses mRNA expression level but is limited by off-target effects (OTEs) and incomplete knockdown. The recently developed gene editing approach called clustered regularly interspaced short palindrome repeats-associated nuclease (CRISPR)-Cas9, can precisely knock-out most regions of the genome at the DNA level with fewer OTEs than RNAi, in multiple human cell types, thus overcoming the limitations of the other approaches. It has been used to identify genes involved in the response to chemical and microbial toxicants in several human cell types and could readily be extended to the systematic screening of large numbers of environmental chemicals. CRISPR-Cas9 can also repress and activate gene expression, including that of non-coding RNA, with near-saturation, thus offering the potential to more fully characterize AOPs and AOP networks. Finally, CRISPR-Cas9 can generate complex animal models in which to conduct preclinical toxicity testing at the level of individual genotypes or haplotypes. Therefore, CRISPR-Cas9 is a powerful and flexible functional genomic screening approach that can be harnessed to provide
Aircraft engine mathematical model - linear system approach
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ
2016-06-01
This paper examines a simplified mathematical model of the aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine was represented by a linear, time variant model, near a nominal operating point within a finite time interval. The linearized equations were expressed in a matrix form, suitable for the incorporation in the MAPLE program solver. The behavior of the engine was included in terms of variation of the rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitude and Mach numbers.
New approaches for modeling type Ia supernovae
Zingale, Michael; Almgren, Ann S.; Bell, John B.; Day, Marcus S.; Rendleman, Charles A.; Woosley, Stan
2007-06-25
Type Ia supernovae (SNe Ia) are the largest thermonuclearexplosions in the Universe. Their light output can be seen across greatstances and has led to the discovery that the expansion rate of theUniverse is accelerating. Despite the significance of SNe Ia, there arestill a large number of uncertainties in current theoretical models.Computational modeling offers the promise to help answer the outstandingquestions. However, even with today's supercomputers, such calculationsare extremely challenging because of the wide range of length and timescales. In this paper, we discuss several new algorithms for simulationsof SNe Ia and demonstrate some of their successes.
A new approach to modified gravity models
NASA Astrophysics Data System (ADS)
Chakrabarti, Sayan K.; Saridakis, Emmanuel N.; Sen, Anjan A.
2011-11-01
We investigate f ( R)-gravity models performing the ADM-slicing of standard General Relativity. We extract the static, spherically-symmetric vacuum solutions in the general case, which correspond to either Schwarzschild de-Sitter or Schwarzschild anti-de-Sitter ones. Additionally, we study the cosmological evolution of a homogeneous and isotropic universe, which is governed by an algebraic and not a differential equation. We show that the universe admits solutions corresponding to acceleration at late cosmological epochs, without the need of fine-tuning the model-parameters or the initial conditions.
Extension of the Nakajima-Zwanzig approach to multitime correlation functions of open systems
NASA Astrophysics Data System (ADS)
Ivanov, Anton; Breuer, Heinz-Peter
2015-09-01
We extend the Nakajima-Zwanzig projection operator technique to the determination of multitime correlation functions of open quantum systems. The correlation functions are expressed in terms of certain multitime homogeneous and inhomogeneous memory kernels for which suitable equations of motion are derived. We show that under the condition of finite memory times, these equations can be used to determine the memory kernels by employing an exact stochastic unraveling of the full system-environment dynamics. The approach thus allows us to combine exact stochastic methods, feasible for short times, with long-time master equation simulations. The applicability of the method is demonstrated by numerical simulations of two-dimensional spectra for a donor-acceptor model, and by comparison of the results with those obtained from the reduced hierarchy equations of motion. We further show that the formalism is also applicable to the time evolution of a periodically driven two-level system initially in equilibrium with its environment.
NASA Astrophysics Data System (ADS)
García-García, J.; Martín, F.
2000-11-01
From a coupling model between the Boltzmann transport equation and the quantum Liouville equation, we have developed a simulator based on the Wigner distribution function (WDF) approach that can be applied to resonant tunneling diodes (RTDs) and other vertical transport quantum devices. In comparison to previous WDF simulators, the tool allows one to extend the simulation domains up to hundreds of nanometers, which are the typical dimensions required for the study of actual multilayer structures. With these improvements, a level of agreement between theory and experiment comparable to that obtained by using other simulators based on Green functions has been achieved. The results of this work reveal that the WDF formalism can be alternatively used to study the behavior of actual multilayered RTDs.
Mixtures of ions and amphiphilic molecules in slit-like pores: A density functional approach
Pizio, O.; Rżysko, W. Sokołowski, S.; Sokołowska, Z.
2015-04-28
We investigate microscopic structure and thermodynamic properties of a mixture that contains amphiphilic molecules and charged hard spheres confined in slit-like pores with uncharged hard walls. The model and the density functional approach are the same as described in details in our previous work [Pizio et al., J. Chem. Phys. 140, 174706 (2014)]. Our principal focus is in exploring the effects brought by the presence of ions on the structure of confined amphiphilic particles. We have found that for some cases of anisotropic interactions, the change of the structure of confined fluids occurs via the first-order transitions. Moreover, if anions and cations are attracted by different hemispheres of amphiphiles, a charge at the walls appears at the zero value of the wall electrostatic potential. For a given thermodynamic state, this charge is an oscillating function of the pore width.
Hamiltonian Light-Front Ffield Theory in a Basis Function Approach
Vary, J.P.; Honkanen, H.; Li, Jun; Maris, P.; Brodsky, S.J.; Harindranath, A.; de Teramond, G.F.; Sternberg, P.; Ng, E.G.; Yang, C.
2009-05-15
Hamiltonian light-front quantum field theory constitutes a framework for the non-perturbative solution of invariant masses and correlated parton amplitudes of self-bound systems. By choosing the light-front gauge and adopting a basis function representation, we obtain a large, sparse, Hamiltonian matrix for mass eigenstates of gauge theories that is solvable by adapting the ab initio no-core methods of nuclear many-body theory. Full covariance is recovered in the continuum limit, the infinite matrix limit. There is considerable freedom in the choice of the orthonormal and complete set of basis functions with convenience and convergence rates providing key considerations. Here, we use a two-dimensional harmonic oscillator basis for transverse modes that corresponds with eigensolutions of the soft-wall AdS/QCD model obtained from light-front holography. We outline our approach, present illustrative features of some non-interacting systems in a cavity and discuss the computational challenges.
Approaches to organizing public relations functions in healthcare.
Guy, Bonnie; Williams, David R; Aldridge, Alicia; Roggenkamp, Susan D
2007-01-01
This article provides health care audiences with a framework for understanding different perspectives of the role and functions of public relations in healthcare organizations and the resultant alternatives for organizing and enacting public relations functions. Using an example of a current issue receiving much attention in US healthcare (improving rates of organ donation), the article provides examples of how these different perspectives influence public relations goals and objectives, definitions of 'public', activities undertaken, who undertakes them and where they fit into the organizational hierarchy. PMID:19042525
A Reflective Approach to Model-Driven Web Engineering
NASA Astrophysics Data System (ADS)
Clowes, Darren; Kolovos, Dimitris; Holmes, Chris; Rose, Louis; Paige, Richard; Johnson, Julian; Dawson, Ray; Probets, Steve
A reflective approach to model-driven web engineering is presented, which aims to overcome several of the shortcomings of existing generative approaches. The approach uses the Epsilon platform and Apache Tomcat to render dynamic HTML content using Epsilon Generation Language templates. This enables EMF-based models to be used as data sources without the need to pre-generate any HTML or dynamic script, or duplicate the contents into a database. The paper reports on our experimental results in using this approach for dynamically querying and visualising a very large military standard.
"Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...
A Gaussian graphical model approach to climate networks
NASA Astrophysics Data System (ADS)
Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas
2014-06-01
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks posses an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
A Gaussian graphical model approach to climate networks
Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus
2014-06-15
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks posses an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
A Gaussian graphical model approach to climate networks.
Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas
2014-06-01
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks posses an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately. PMID:24985417
Alcaraz, Jordi; Nelson, Celeste M.; Bissell, Mina J.
2010-01-01
The structure and function of each individual mammary epithelial cell (MEC) is largely controlled by a bidirectional interchange of chemical and mechanical signals with the microenvironment. Most of these signals are tissue-specific, since they arise from the three-dimensional (3D) tissue organization and are modulated during mammary gland development, maturation, pregnancy, lactation, and involution. Although the important role played by structural and mechanical signals in mammary cell and tissue function is being increasingly recognized, quantitative biomechanical approaches are still scarce. Here we review currently available biomechanical tools that allow quantitative examination of individual cells, groups of cells or full monolayers in two-dimensional cultures, and cells in 3D cultures. Current technological limitations and challenges are discussed, with special emphasis on their potential applications in MEC biology. We argue that the combination of biomechanical tools with current efforts in mathematical modeling and in cell and molecular biology applied to 3D cultures provides a powerful approach to unravel the complexity of tissue-specific structure-function relationships. PMID:15838605
Nonperturbative approach to the parton model
NASA Astrophysics Data System (ADS)
Simonov, Yu. A.
2016-02-01
In this paper, the nonperturbative parton distributions, obtained from the Lorentz contracted wave functions, are analyzed in the formalism of many-particle Fock components and their properties are compared to the standard perturbative distributions. We show that the collinear and IR divergencies specific for perturbative evolution treatment are absent in the nonperturbative version, however for large momenta pi2 ≫ σ (string tension), the bremsstrahlung kinematics is restored. A preliminary discussion of possible nonperturbative effects in DIS and high energy scattering is given, including in particular a possible role of multihybrid states in creating ridge-type effects.
Integration models: multicultural and liberal approaches confronted
NASA Astrophysics Data System (ADS)
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, upsurge of international migration, democratic rule and liberal tradition rooted in religious tolerance. Boosting globalization processes impose new challenges on European societies, striving to protect their diversity. This struggle is especially clearly visible in case of minorities trying to resist melting into mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity politics-driven group rights seems to be the most common approach, resulting in creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory to both individual rights growing out from liberal tradition, and to reinforced concept of integration of immigrants into host societies. The hereby paper discusses identity politics upturn in the context of both individual rights and integration of European societies.
Verbal Neuropsychological Functions in Aphasia: An Integrative Model.
Vigliecca, Nora Silvana; Báez, Sandra
2015-12-01
A theoretical framework which considers the verbal functions of the brain under a multivariate and comprehensive cognitive model was statistically analyzed. A confirmatory factor analysis was performed to verify whether some recognized aphasia constructs can be hierarchically integrated as latent factors from a homogenously verbal test. The Brief Aphasia Evaluation was used. A sample of 65 patients with left cerebral lesions, and two supplementary samples comprising 35 patients with right cerebral lesions and 30 healthy participants were studied. A model encompassing an all inclusive verbal organizer and two successive organizers was validated. The two last organizers were: three factors of comprehension, expression and a "complementary" verbal factor which included praxia, attention, and memory; followed by the individual (and correlated) factors of auditory comprehension, repetition, naming, speech, reading, writing, and the "complementary" factor. By following this approach all the patients fall inside the classification system; consequently, theoretical improvement is guaranteed. PMID:25168953
Functional approach to derivative expansion of the effective Lagrangian
Zuk, J.A.
1985-11-15
We present a new functional method for calculating higher-derivative terms in the one-loop effective Lagrangian for multicomponent scalar field theories. The general results we obtain are illustrated with explicit calculations for the O(n)-invariant phi/sup 4/ theory.
Negation in Context: A Functional Approach to Suppression
ERIC Educational Resources Information Center
Giora, Rachel; Fein, Ofer; Aschkenazi, Keren; Alkabets-Zlozover, Inbar
2007-01-01
Three experiments show that, contrary to the current view, comprehenders do not unconditionally deactivate information marked by negation. Instead, they discard negated information when it is functionally motivated. In Experiment 1, comprehenders discarded negated concepts when cued by a topic shift to dampen recently processed information.…
Newer Approaches to Identify Potential Untoward Effects in Functional Foods.
Marone, Palma Ann; Birkenbach, Victoria L; Hayes, A Wallace
2016-01-01
Globalization has greatly accelerated the numbers and variety of food and beverage products available worldwide. The exchange among greater numbers of countries, manufacturers, and products in the United States and worldwide has necessitated enhanced quality measures for nutritional products for larger populations increasingly reliant on functionality. These functional foods, those that provide benefit beyond basic nutrition, are increasingly being used for their potential to alleviate food insufficiency while enhancing quality and longevity of life. In the United States alone, a steady import increase of greater than 15% per year or 24 million shipments, over 70% products of which are food related, is regulated under the Food and Drug Administration (FDA). This unparalleled growth has resulted in the need for faster, cheaper, and better safety and efficacy screening methods in the form of harmonized guidelines and recommendations for product standardization. In an effort to meet this need, the in vitro toxicology testing market has similarly grown with an anticipatory 15% increase between 2010 and 2015 of US$1.3 to US$2.7 billion. Although traditionally occupying a small fraction of the market behind pharmaceuticals and cosmetic/household products, the scope of functional food testing, including additives/supplements, ingredients, residues, contact/processing, and contaminants, is potentially expansive. Similarly, as functional food testing has progressed, so has the need to identify potential adverse factors that threaten the safety and quality of these products. PMID:26657815
A Functional Approach to Televised Political Spots: Acclaiming, Attacking, Defending.
ERIC Educational Resources Information Center
Benoit, William L.; Pier, P. M.; Blaney, Joseph R.
1997-01-01
Articulates a theoretical framework for understanding the fundamental functions of political advertising (acclaiming, attacking, defending) which occur on the twin grounds of policy considerations and character. Applies this theory of political discourse to presidential general election television spots from 1980-1996, finding that Democrats and…
Small Molecule Approach to Study the Function of Mitotic Kinesins.
Al-Obaidi, Naowras; Kastl, Johanna; Mayer, Thomas U
2016-01-01
Mitotic motor proteins of the kinesin superfamily are critical for the faithful segregation of chromosomes and the formation of the two daughter cells during meiotic and mitotic M-phase. Of the 45 human kinesins, roughly a dozen are involved in the assembly of the bipolar spindle, alignment of chromosomes at the spindle equator, chromosome segregation, and cytokinesis. The functions of kinesins in these processes are highly diverse and include the transport of cargo molecules, sliding and bundling of microtubules, and regulation of microtubule dynamics. In light of this multitude of diverse functions and the complex functional interplay of different kinesins during M-phase, it is not surprising that one of the greatest challenges in cell biology is the functional dissection of individual motor proteins. Reversible and fast acting small molecules are powerful tools to accomplish this challenge. However, the validity of conclusions drawn from small molecule studies strictly depends on compound specificity. In this chapter, we present methods for the identification of small molecule inhibitors of a motor protein of interest. In particular, we focus on a protein-based large throughput screen to identify inhibitors of the ATPase activity of kinesins. Furthermore, we provide protocols and guidelines for secondary screens to validate hits and select for specific inhibitors. PMID:27193856
Constructing and Deriving Reciprocal Trigonometric Relations: A Functional Analytic Approach
ERIC Educational Resources Information Center
Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K.; McGinty, Jennifer
2009-01-01
Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed…
Social Psychology--A Functional Approach to Recreation
ERIC Educational Resources Information Center
Groves, David L.; And Others
1977-01-01
The subject content of the study was a public forested area. Both user and general populations of the area were sampled to obtain a diversity of results about the functional nature of attitudes and/or behavior. Results indicated that there were differences among individuals with regard to the larger context. (Author)
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
NASA Astrophysics Data System (ADS)
Bruntz, R. J.; Paxton, L. J.; Miller, E. S.; Bust, G. S.; Mayr, H. G.
2015-12-01
The Transfer Function Model (TFM) has been used in numerous studies to simulate gravity waves. In the TFM, the time dependence is formulated in terms of frequencies, and the horizontal wave pattern on the globe is formulated in terms of vector spherical harmonics. For a wide range of frequencies, the equations of mass, energy and momentum conservation are solved to compile a transfer function. The transfer function can then be easily combined with a time-dependent source whose spatial extent is also expressed in spherical harmonics, to produce a global atmospheric response, including gravity waves. This approach has significant benefits in that the solution is grid-independent (without any inherent limits on resolution), and the solutions do not suffer from singularities at the poles. We will show results from our simulations that couple the output of the TFM to an ionospheric model, to predict traveling ionospheric disturbances (TIDs) driven by the simulated gravity waves.
Linking geophysics and soil function modelling - biomass production
NASA Astrophysics Data System (ADS)
Krüger, J.; Franko, U.; Werban, U.; Fank, J.
2012-04-01
The iSOIL project aims at reliable mapping of soil properties and soil functions with various methods including geophysical, spectroscopic and monitoring techniques. The general procedure contains three steps (i) geophysical monitoring, (ii) generation of soil property maps and (iii) process modelling. The objective of this work is to demonstrate the mentioned procedure with a focus on process modelling. It deals with the dynamics of soil water and the direct influence on crop biomass production. The new module PLUS extends CANDY to simulate crop biomass production based on environmental influences. A soil function modelling with an adapted model parameterisation based on data of ground penetration radar (GPR) and conductivity (EM38) was realized. This study shows an approach to handle heterogeneity of soil properties with geophysical data used for biomass production modelling. The Austrian field site Wagna is characterised by highly heterogenic soil with fluvioglacial gravel sediments. The variation of thickness of topsoil above a sandy subsoil with gravels strongly influences the soil water balance. EM38, mounted on a mobile platform, enables to rapidly scan large areas whereas GPR requires a greater logistical effort. However, GPR can detect exact soil horizon depth between topsoil and subsoil, the combination of both results in a detailed large scale soil map. The combined plot-specific GPR and field site EM38 measurements extends the soil input data and improves the model performance of CANDY PLUS for plant biomass production (Krüger et al. 2011). The example demonstrates how geophysics provides a surplus of data for agroecosystem modelling which identifies and contributes alternative options for agricultural management decisions. iSOIL - "Interactions between soil related sciences - Linking geophysics, soil science and digital soil mapping" is a Collaborative Project (Grant Agreement number 211386) co-funded by the Research DG of the European Commission
A fuzzy logic approach to modeling a vehicle crash test
NASA Astrophysics Data System (ADS)
Pawlus, Witold; Karimi, Hamid; Robbersmyr, Kjell
2013-03-01
This paper presents an application of fuzzy approach to vehicle crash modeling. A typical vehicle to pole collision is described and kinematics of a car involved in this type of crash event is thoroughly characterized. The basics of fuzzy set theory and modeling principles based on fuzzy logic approach are presented. In particular, exceptional attention is paid to explain the methodology of creation of a fuzzy model of a vehicle collision. Furthermore, the simulation results are presented and compared to the original vehicle's kinematics. It is concluded which factors have influence on the accuracy of the fuzzy model's output and how they can be adjusted to improve the model's fidelity.
A modular approach for item response theory modeling with the R package flirt.
Jeon, Minjeong; Rijmen, Frank
2016-06-01
The new R package flirt is introduced for flexible item response theory (IRT) modeling of psychological, educational, and behavior assessment data. flirt integrates a generalized linear and nonlinear mixed modeling framework with graphical model theory. The graphical model framework allows for efficient maximum likelihood estimation. The key feature of flirt is its modular approach to facilitate convenient and flexible model specifications. Researchers can construct customized IRT models by simply selecting various modeling modules, such as parametric forms, number of dimensions, item and person covariates, person groups, link functions, etc. In this paper, we describe major features of flirt and provide examples to illustrate how flirt works in practice. PMID:26174711
Tumour resistance to cisplatin: a modelling approach
NASA Astrophysics Data System (ADS)
Marcu, L.; Bezak, E.; Olver, I.; van Doorn, T.
2005-01-01
Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure.
Şentürk, Damla; Dalrymple, Lorien S.; Nguyen, Danh V.
2014-01-01
Summary We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. While the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction (PR), geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first two years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression (PCR). PMID:24942314
Walking in circles: a modelling approach
Maus, Horst-Moritz; Seyfarth, Andre
2014-01-01
Blindfolded or disoriented people have the tendency to walk in circles rather than on a straight line even if they wanted to. Here, we use a minimalistic walking model to examine this phenomenon. The bipedal spring-loaded inverted pendulum exhibits asymptotically stable gaits with centre of mass (CoM) dynamics and ground reaction forces similar to human walking in the sagittal plane. We extend this model into three dimensions, and show that stable walking patterns persist if the leg is aligned with respect to the body (here: CoM velocity) instead of a world reference frame. Further, we demonstrate that asymmetric leg configurations, which are common in humans, will typically lead to walking in circles. The diameter of these circles depends strongly on parameter configuration, but is in line with empirical data from human walkers. Simulation results suggest that walking radius and especially direction of rotation are highly dependent on leg configuration and walking velocity, which explains inconsistent veering behaviour in repeated trials in human data. Finally, we discuss the relation between findings in the model and implications for human walking. PMID:25056215
Walking in circles: a modelling approach.
Maus, Horst-Moritz; Seyfarth, Andre
2014-10-01
Blindfolded or disoriented people have the tendency to walk in circles rather than on a straight line even if they wanted to. Here, we use a minimalistic walking model to examine this phenomenon. The bipedal spring-loaded inverted pendulum exhibits asymptotically stable gaits with centre of mass (CoM) dynamics and ground reaction forces similar to human walking in the sagittal plane. We extend this model into three dimensions, and show that stable walking patterns persist if the leg is aligned with respect to the body (here: CoM velocity) instead of a world reference frame. Further, we demonstrate that asymmetric leg configurations, which are common in humans, will typically lead to walking in circles. The diameter of these circles depends strongly on parameter configuration, but is in line with empirical data from human walkers. Simulation results suggest that walking radius and especially direction of rotation are highly dependent on leg configuration and walking velocity, which explains inconsistent veering behaviour in repeated trials in human data. Finally, we discuss the relation between findings in the model and implications for human walking. PMID:25056215
Different experimental approaches in modelling cataractogenesis
Kyselova, Zuzana
2010-01-01
Cataract, the opacification of eye lens, is the leading cause of blindness worldwide. At present, the only remedy is surgical removal of the cataractous lens and substitution with a lens made of synthetic polymers. However, besides significant costs of operation and possible complications, an artificial lens just does not have the overall optical qualities of a normal one. Hence it remains a significant public health problem, and biochemical solutions or pharmacological interventions that will maintain the transparency of the lens are highly required. Naturally, there is a persistent demand for suitable biological models. The ocular lens would appear to be an ideal organ for maintaining culture conditions because of lacking blood vessels and nerves. The lens in vivo obtains its nutrients and eliminates waste products via diffusion with the surrounding fluids. Lens opacification observed in vivo can be mimicked in vitro by addition of the cataractogenic agent sodium selenite (Na2SeO3) to the culture medium. Moreover, since an overdose of sodium selenite induces also cataract in young rats, it became an extremely rapid and convenient model of nuclear cataract in vivo. The main focus of this review will be on selenium (Se) and its salt sodium selenite, their toxicological characteristics and safety data in relevance of modelling cataractogenesis, either under in vivo or in vitro conditions. The studies revealing the mechanisms of lens opacification induced by selenite are highlighted, the representatives from screening for potential anti-cataract agents are listed. PMID:21217865