Functional integral approach to the Lipkin model
Kaneko, K.
1988-07-01
A quantum-mechanical formulation involving both collective and independent-particle motions in many-fermion systems is proposed by using the path-integral technique. A semiclassical method of evaluating the functional integral over both fields is described. As an illustration, the Lipkin model is utilized.
Thermoplasmonics modeling: A Green's function approach
NASA Astrophysics Data System (ADS)
Baffou, Guillaume; Quidant, Romain; Girard, Christian
2010-10-01
We extend the discrete dipole approximation (DDA) and the Green’s dyadic tensor (GDT) methods—previously dedicated to all-optical simulations—to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green’s function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology, such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
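The steady-state temperature field described in this abstract follows from superposing thermal Green's functions of point-like heat sources. A minimal sketch of that superposition (not the authors' Laplace-matrix-inversion algorithm; the uniform-medium Green's function 1/(4*pi*kappa*r) and the water-like conductivity are illustrative assumptions):

```python
import math

def steady_temperature_rise(r, sources, kappa=0.6):
    """Temperature rise at point r (meters) from point-like heat sources,
    superposing the uniform-medium thermal Green's function
    G(r, r') = 1 / (4*pi*kappa*|r - r'|).

    sources: list of ((x, y, z), power_in_watts) pairs
    kappa:   host thermal conductivity in W/(m*K); 0.6 is water-like
    """
    return sum(q / (4.0 * math.pi * kappa * math.dist(r, pos))
               for pos, q in sources)

# Two 100 nW nanoparticle-like sources 100 nm apart, probed 50 nm above
# the midpoint of the pair (all values illustrative).
sources = [((0.0, 0.0, 0.0), 1e-7), ((1e-7, 0.0, 0.0), 1e-7)]
dT = steady_temperature_rise((5e-8, 0.0, 5e-8), sources)
```

The superposition is exact only for a homogeneous medium; the paper's substrate correction and self-consistent heat-source computation are beyond this sketch.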
Functional state modelling approach validation for yeast and bacteria cultivations
Roeva, Olympia; Pencheva, Tania
2014-01-01
In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This provides strong structural verification of the functional state modelling theory, not only for a set of yeast cultivations but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
The Thirring-Wess model revisited: a functional integral approach
Belvedere, L.V. E-mail: armflavio@if.uff.br
2005-06-01
We consider the Wess-Zumino-Witten theory to obtain the functional integral bosonization of the Thirring-Wess model with an arbitrary regularization parameter. By systematically decomposing the Bose field algebra into gauge-invariant and gauge-non-invariant field subalgebras, we obtain the local decoupled quantum action. The generalized operator solutions for the equations of motion are reconstructed from the functional integral formalism. The isomorphism between QED₂ (QCD₂) with gauge symmetry broken by a regularization prescription and the Abelian (non-Abelian) Thirring-Wess model with a fixed bare mass for the meson field is established.
A Model-Based Approach to Constructing Music Similarity Functions
NASA Astrophysics Data System (ADS)
West, Kris; Lamere, Paul
2006-12-01
Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
Executive function in older adults: a structural equation modeling approach.
Hull, Rachel; Martin, Randi C; Beier, Margaret E; Lane, David; Hamilton, A Cris
2008-07-01
Confirmatory factor analysis (CFA) and structural equation modeling (SEM) were used to study the organization of executive functions in older adults. The four primary goals were to examine (a) whether executive functions were supported by one versus multiple underlying factors, (b) which underlying skill(s) predicted performance on complex executive function tasks, (c) whether performance on analogous verbal and nonverbal tasks was supported by separable underlying skills, and (d) how patterns of performance generally compared with those of young adults. A sample of 100 older adults completed 10 tasks, each designed to engage one of three control processes: mental set shifting (Shifting), information updating or monitoring (Updating), and inhibition of prepotent responses (Inhibition). CFA identified robust Shifting and Updating factors, but the Inhibition factor failed to emerge, and there was no evidence for verbal and nonverbal factors. SEM showed that Updating was the best predictor of performance on each of the complex tasks the authors assessed (the Tower of Hanoi and the Wisconsin Card Sort). Results are discussed in terms of insight for theories of cognitive aging and executive function. PMID:18590362
Model approach to starch functionality in bread making.
Goesaert, Hans; Leman, Pedro; Delcour, Jan A
2008-08-13
We used modified wheat starches in gluten-starch flour models to study the role of starch in bread making. Incorporation of hydroxypropylated starch in the recipe reduced loaf volume and initial crumb firmness and increased crumb gas cell size. Firming rate and firmness after storage increased for loaves containing the least hydroxypropylated starch. Inclusion of cross-linked starch had little effect on loaf volume or crumb structure but increased crumb firmness. The firming rate was mostly similar to that of control samples. Presumably, the moment and extent of starch gelatinization and the concomitant water migration influence the structure formation during baking. Initial bread firmness seems determined by the rigidity of the gelatinized granules and leached amylose. Amylopectin retrogradation and strengthening of a long-range network by intensifying the inter- and intramolecular starch-starch and possibly also starch-gluten interactions (presumably because of water incorporation in retrograded amylopectin crystallites) play an important role in firming.
Mapping baroreceptor function to genome: a mathematical modeling approach.
Kendziorski, C M; Cowley, A W; Greene, A S; Salgado, H C; Jacob, H J; Tonellato, P J
2002-01-01
To gain information about the genetic basis of a complex disease such as hypertension, blood pressure averages are often obtained and used as phenotypes in genetic mapping studies. In contrast, direct measurements of physiological regulatory mechanisms are not often obtained, due in large part to the time and expense required. As a result, little information about the genetic basis of physiological controlling mechanisms is available. Such information is important for disease diagnosis and treatment. In this article, we use a mathematical model of blood pressure to derive phenotypes related to the baroreceptor reflex, a short-term controller of blood pressure. The phenotypes are then used in a quantitative trait loci (QTL) mapping study to identify a potential genetic basis of this controller. PMID:11973321
Approaches to Modelling the Dynamical Activity of Brain Function Based on the Electroencephalogram
NASA Astrophysics Data System (ADS)
Liley, David T. J.; Frascoli, Federico
The brain is arguably the quintessential complex system as indicated by the patterns of behaviour it produces. Despite many decades of concentrated research efforts, we remain largely ignorant regarding the essential processes that regulate and define its function. While advances in functional neuroimaging have provided welcome windows into the coarse organisation of the neuronal networks that underlie a range of cognitive functions, they have largely ignored the fact that behaviour, and by inference brain function, unfolds dynamically. Modelling the brain's dynamics is therefore a critical step towards understanding the underlying mechanisms of its functioning. To date, models have concentrated on describing the sequential organisation of either abstract mental states (functionalism, hard AI) or the objectively measurable manifestations of the brain's ongoing activity (rCBF, EEG, MEG). While the former types of modelling approach may seem to better characterise brain function, they do so at the expense of not making a definite connection with the actual physical brain. Of the latter, only models of the EEG (or MEG) offer a temporal resolution well matched to the anticipated temporal scales of brain (mental processes) function. This chapter will outline the most pertinent of these modelling approaches, and illustrate, using the electrocortical model of Liley et al, how the detailed application of the methods of nonlinear dynamics and bifurcation theory is central to exploring and characterising their various dynamical features. The rich repertoire of dynamics revealed by such dynamical systems approaches arguably represents a critical step towards an understanding of the complexity of brain function.
NASA Astrophysics Data System (ADS)
Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.
2016-10-01
Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach to the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (fewer than about 20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
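A Metropolis-within-Gibbs sampler of the kind the abstract describes can be sketched on a toy two-parameter posterior. The (depth, velocity) names and the Gaussian "misfit" are hypothetical stand-ins for a real receiver-function forward model, not the authors' implementation:

```python
import math
import random

def metropolis_within_gibbs(log_post, x0, step, n_sweeps=5000, seed=0):
    """Metropolis-within-Gibbs: each sweep proposes a Gaussian move for one
    parameter at a time and accepts it with the Metropolis rule, so the
    chain explores model space coordinate by coordinate."""
    rng = random.Random(seed)
    x = list(x0)
    lp = log_post(x)
    chain = []
    for _ in range(n_sweeps):
        for i in range(len(x)):
            prop = list(x)
            prop[i] += rng.gauss(0.0, step[i])
            lp_prop = log_post(prop)
            if math.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

# Hypothetical 2-parameter "inversion": Gaussian misfit around a fake
# (depth in km, velocity in km/s) target standing in for a forward model.
target, sigma = (35.0, 4.5), (2.0, 0.2)
log_post = lambda m: -0.5 * sum(((mi - ti) / si) ** 2
                                for mi, ti, si in zip(m, target, sigma))
chain = metropolis_within_gibbs(log_post, [30.0, 4.0], [1.0, 0.1])
posterior = chain[len(chain) // 2:]          # discard burn-in
mean_depth = sum(m[0] for m in posterior) / len(posterior)
```

Scatter in the retained samples is what exposes the parameter trade-offs the abstract mentions: well-constrained parameters produce tight marginals, poorly constrained ones broad or correlated clouds.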
A Radiative Transfer Equation/Phase Function Approach to Vegetation Canopy Reflectance Modeling
NASA Astrophysics Data System (ADS)
Randolph, Marion Herbert
Vegetation canopy reflectance models currently in use differ considerably in their treatment of the radiation scattering problem, and it is this fundamental difference which stimulated this investigation of the radiative transfer equation/phase function approach. The primary objective of this thesis is the development of vegetation canopy phase functions which describe the probability of radiation scattering within a canopy in terms of its biological and physical characteristics. In this thesis a technique based upon quadrature formulae is used to numerically generate a variety of vegetation canopy phase functions. Based upon leaf inclination distribution functions, phase functions are generated for plagiophile, extremophile, erectophile, spherical, planophile, blue grama (Bouteloua gracilis), and soybean canopies. The vegetation canopy phase functions generated are symmetric with respect to the incident and exitant angles, and hence satisfy the principle of reciprocity. The remaining terms in the radiative transfer equation are also derived in terms of canopy geometry and optical properties to complete the development of the radiative transfer equation/phase function description for vegetation canopy reflectance modeling. In order to test the radiative transfer equation/phase function approach the iterative discrete ordinates method for solving the radiative transfer equation is implemented. In comparison with field data, the approach tends to underestimate the visible reflectance and overestimate infrared reflectance. The approach does compare well, however, with other extant canopy reflectance models; for example, it agrees to within ten to fifteen percent of the Suits model (Suits, 1972). Sensitivity analysis indicates that canopy geometry may influence reflectance as much as 100 percent for a given wavelength. Optical thickness produces little change in reflectance after a depth of 2.5 (Leaf area index of 4.0) is reached, and reflectance generally increases
Functional modelling of planar cell polarity: an approach for identifying molecular function
2013-01-01
Background Cells in some tissues acquire a polarisation in the plane of the tissue in addition to apical-basal polarity. This polarisation is commonly known as planar cell polarity and has been found to be important in developmental processes, as planar polarity is required to define the in-plane tissue coordinate system at the cellular level. Results We have built an in-silico functional model of cellular polarisation that includes cellular asymmetry, cell-cell signalling and a response to a global cue. The model has been validated and parameterised against domineering non-autonomous wing hair phenotypes in Drosophila. Conclusions We have carried out a systematic comparison of in-silico polarity phenotypes with patterns observed in vivo under different genetic manipulations in the wing. This has allowed us to classify the specific functional roles of proteins involved in generating cell polarity, providing new hypotheses about their specific functions, in particular for Pk and Dsh. The predictions from the model allow direct assignment of functional roles of genes from genetic mosaic analysis of Drosophila wings. PMID:23672397
ERIC Educational Resources Information Center
Brunold, S.; Scheuermeier, U.
1996-01-01
Uses of the agricultural knowledge systems concept of information flow are described in Holland, Bhutan, Switzerland, and India. Application of the model for gaining an overview of institutions must be combined with a functional approach for designing appropriate extension programs. (SK)
Quasiclassical approach to partition functions of ions in a chemical plasma model
Shpatakovskaya, G. V.
2008-03-15
The partition functions of ions that are used in a chemical plasma model are estimated by the Thomas-Fermi free ion model without reference to empirical data. Different form factors limiting the number of the excitation levels taken into account are considered, namely, those corresponding to the average atomic radius criterion, the temperature criterion, and the Planck-Brillouin-Larkin approximation. Expressions are presented for the average excitation energy and for the temperature and volume derivatives of the partition function. A comparison with the results of the empirical approach is made for the aluminum and iron plasmas.
Application of Model-Assisted Pod Using a Transfer Function Approach
NASA Astrophysics Data System (ADS)
Harding, C. A.; Hugo, G. R.; Bowles, S. J.
2009-03-01
A transfer function approach to model-assisted probability of detection (POD) has been applied to the detection of fatigue cracks at fastener holes. The model uses data obtained from field trials and laboratory experiments and takes into account the effects of structural geometry, the natural variability in fatigue cracks and human factors in the inspection process. A fully representative POD trial was not feasible for this inspection procedure, and so this application provides an important real-world context for the development of model-assisted POD.
NASA Astrophysics Data System (ADS)
Reich, P. B.; Butler, E. E.
2015-12-01
This project will advance global land models by shifting from the current plant functional type approach to one that better utilizes what is known about the importance and variability of plant traits, within a framework of simultaneously improving fundamental physiological relations that are at the core of model carbon cycling algorithms. Existing models represent the global distribution of vegetation types using the Plant Functional Type concept. Plant Functional Types are classes of plant species with similar evolutionary and life history with presumably similar responses to environmental conditions like CO2, water and nutrient availability. Fixed properties for each Plant Functional Type are specified through a collection of physiological parameters, or traits. These traits, mostly physiological in nature (e.g., leaf nitrogen and longevity) are used in model algorithms to estimate ecosystem properties and/or drive calculated process rates. In most models, 5 to 15 functional types represent terrestrial vegetation; in essence, they assume there are a total of only 5 to 15 different kinds of plants on the entire globe. This assumption of constant plant traits captured within the functional type concept has serious limitations, as a single set of traits does not reflect trait variation observed within and between species and communities. While this simplification was necessary decades past, substantial improvement is now possible. Rather than assigning a small number of constant parameter values to all grid cells in a model, procedures will be developed that predict a frequency distribution of values for any given grid cell. Thus, the mean and variance, and how these change with time, will inform and improve model performance. The trait-based approach will improve land modeling by (1) incorporating patterns and heterogeneity of traits into model parameterization, thus evolving away from a framework that considers large areas of vegetation to have near identical trait
Modeling and Simulation Approaches for Cardiovascular Function and Their Role in Safety Assessment
Collins, TA; Bergenholm, L; Abdulla, T; Yates, JWT; Evans, N; Chappell, MJ; Mettetal, JT
2015-01-01
Systems pharmacology modeling and pharmacokinetic-pharmacodynamic (PK/PD) analysis of drug-induced effects on cardiovascular (CV) function plays a crucial role in understanding the safety risk of new drugs. The aim of this review is to outline the current modeling and simulation (M&S) approaches to describe and translate drug-induced CV effects, with an emphasis on how this impacts drug safety assessment. Current limitations are highlighted and recommendations are made for future effort in this vital area of drug research. PMID:26225237
Modeling and Simulation Approaches for Cardiovascular Function and Their Role in Safety Assessment.
Collins, T A; Bergenholm, L; Abdulla, T; Yates, Jwt; Evans, N; Chappell, M J; Mettetal, J T
2015-03-01
Systems pharmacology modeling and pharmacokinetic-pharmacodynamic (PK/PD) analysis of drug-induced effects on cardiovascular (CV) function plays a crucial role in understanding the safety risk of new drugs. The aim of this review is to outline the current modeling and simulation (M&S) approaches to describe and translate drug-induced CV effects, with an emphasis on how this impacts drug safety assessment. Current limitations are highlighted and recommendations are made for future effort in this vital area of drug research.
Kim, Jieun; Zhu, Wei; Chang, Linda; Bentler, Peter M; Ernst, Thomas
2007-02-01
The ultimate goal of brain connectivity studies is to propose, test, modify, and compare certain directional brain pathways. Path analysis or structural equation modeling (SEM) is an ideal statistical method for such studies. In this work, we propose a two-stage unified SEM plus GLM (General Linear Model) approach for the analysis of multisubject, multivariate functional magnetic resonance imaging (fMRI) time series data with subject-level covariates. In Stage 1, we analyze the fMRI multivariate time series for each subject individually via a unified SEM model by combining longitudinal pathways represented by a multivariate autoregressive (MAR) model, and contemporaneous pathways represented by a conventional SEM. In Stage 2, the resulting subject-level path coefficients are merged with subject-level covariates such as gender, age, IQ, etc., to examine the impact of these covariates on effective connectivity via a GLM. Our approach is exemplified via the analysis of an fMRI visual attention experiment. Furthermore, the significant path network from the unified SEM analysis is compared to that from a conventional SEM analysis without incorporating the longitudinal information as well as that from a Dynamic Causal Modeling (DCM) approach.
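The two-stage logic above (a subject-level time-series fit, then a group-level GLM of the fitted path coefficients against covariates) can be sketched with a deliberately simplified stand-in: a univariate AR(1) coefficient in place of the full unified SEM, and ordinary least squares in place of the general GLM. The simulated cohort and covariate effect are entirely illustrative:

```python
import random

def ar1_coef(series):
    """Stage 1 (toy stand-in for the unified SEM's longitudinal part):
    least-squares lag-1 autoregressive coefficient for one subject,
    a = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def ols_slope(xs, ys):
    """Stage 2: regress subject-level path coefficients on a subject-level
    covariate (plain OLS standing in for the general linear model)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Simulated cohort in which connectivity strengthens with age (hypothetical)
rng = random.Random(1)
ages, coefs = [], []
for _ in range(40):
    age = rng.uniform(20.0, 70.0)
    a_true = 0.2 + 0.004 * age           # made-up covariate effect
    x, series = 0.0, []
    for _ in range(500):
        x = a_true * x + rng.gauss(0.0, 1.0)
        series.append(x)
    ages.append(age)
    coefs.append(ar1_coef(series))
slope = ols_slope(ages, coefs)           # recovered age effect, stage 2
```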
Mass predictions of the relativistic mean-field model with the radial basis function approach
NASA Astrophysics Data System (ADS)
Zheng, J. S.; Wang, N. Y.; Wang, Z. Y.; Niu, Z. M.; Niu, Y. F.; Sun, B.
2014-07-01
The radial basis function (RBF) is a powerful tool to improve mass predictions of nuclear models. By combining the RBF approach with the relativistic mean-field (RMF) model, the systematic deviations between mass predictions of the RMF model and the experimental data are eliminated to a large extent and the resulting rms deviation is reduced from 2.217 to 0.488 MeV. Furthermore, it is found that the RBF approach has a relatively reliable extrapolative power along the distance from the β-stability line, except for a large uncertainty around regions of magic numbers. From the deduced neutron separation energies, we found that the description of the nuclear shell structure and shape transition is also significantly improved by the RBF approach, thus improving agreement with the solar r-process abundances before A = 130 and speeding up the r-matter flow. Therefore, a shorter irradiation time is enough to reproduce the solar r-process abundance distribution for the improved RMF mass model, which is closer to the irradiation time for those sophisticated mass models.
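The RBF correction the abstract describes amounts to interpolating mass residuals over the (Z, N) plane. A minimal sketch with the linear basis phi(r) = r, one common choice for nuclear mass residuals (the node values below are made up; a real application would use M_exp - M_RMF at measured nuclei):

```python
import math

def rbf_fit(nodes, residuals):
    """Solve sum_j w_j * phi(|x_i - x_j|) = d_i for the RBF weights,
    with the linear basis phi(r) = r; plain Gaussian elimination with
    partial pivoting keeps the sketch dependency-free."""
    n = len(nodes)
    A = [[math.dist(nodes[i], nodes[j]) for j in range(n)] for i in range(n)]
    b = list(residuals)
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))   # pivot row
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):                         # back-substitute
        w[r] = (b[r] - sum(A[r][k] * w[k] for k in range(r + 1, n))) / A[r][r]
    return w

def rbf_predict(nodes, w, x):
    """Smoothed mass correction at an arbitrary nucleus x = (Z, N)."""
    return sum(wj * math.dist(x, nj) for wj, nj in zip(w, nodes))

# Made-up residuals (MeV) at a few (Z, N) nodes, for illustration only.
nodes = [(8.0, 8.0), (20.0, 22.0), (28.0, 30.0), (50.0, 56.0), (82.0, 100.0)]
resid = [0.5, -0.3, 0.2, 0.1, -0.4]
weights = rbf_fit(nodes, resid)
```

By construction the fit reproduces the residual exactly at every node; the extrapolative behavior away from the nodes, which the abstract evaluates, depends on the basis and the node coverage.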
Xue, Wenqiong; Bowman, F. DuBois; Pileggi, Anthony V.; Mayer, Andrew R.
2015-01-01
Recent innovations in neuroimaging technology have provided opportunities for researchers to investigate connectivity in the human brain by examining the anatomical circuitry as well as functional relationships between brain regions. Existing statistical approaches for connectivity generally examine resting-state or task-related functional connectivity (FC) between brain regions or separately examine structural linkages. As a means to determine brain networks, we present a unified Bayesian framework for analyzing FC utilizing the knowledge of associated structural connections, which extends an approach by Patel et al. (2006a) that considers only functional data. We introduce an FC measure that rests upon assessments of functional coherence between regional brain activity identified from functional magnetic resonance imaging (fMRI) data. Our structural connectivity (SC) information is drawn from diffusion tensor imaging (DTI) data, which is used to quantify probabilities of SC between brain regions. We formulate a prior distribution for FC that depends upon the probability of SC between brain regions, with this dependence adhering to structural-functional links revealed by our fMRI and DTI data. We further characterize the functional hierarchy of functionally connected brain regions by defining an ascendancy measure that compares the marginal probabilities of elevated activity between regions. In addition, we describe topological properties of the network, which is composed of connected region pairs, by performing graph theoretic analyses. We demonstrate the use of our Bayesian model using fMRI and DTI data from a study of auditory processing. We further illustrate the advantages of our method by comparisons to methods that only incorporate functional information. PMID:25750621
A new approach to wall modeling in LES of incompressible flow via function enrichment
NASA Astrophysics Data System (ADS)
Krank, Benjamin; Wall, Wolfgang A.
2016-07-01
A novel approach to wall modeling for the incompressible Navier-Stokes equations including flows of moderate and large Reynolds numbers is presented. The basic idea is that a problem-tailored function space allows prediction of turbulent boundary layer gradients with very coarse meshes. The proposed function space consists of a standard polynomial function space plus an enrichment, which is constructed using Spalding's law-of-the-wall. The enrichment function is not enforced but "allowed" in a consistent way and the overall methodology is much more general and also enables other enrichment functions. The proposed method is closely related to detached-eddy simulation as near-wall turbulence is modeled statistically and large eddies are resolved in the bulk flow. Interpreted in terms of a three-scale separation within the variational multiscale method, the standard scale resolves large eddies and the enrichment scale represents boundary layer turbulence in an averaged sense. The potential of the scheme is shown applying it to turbulent channel flow at friction Reynolds numbers from Reτ = 590 up to 5,000, flow over periodic constrictions at Reynolds numbers ReH = 10,595 and 19,000, as well as backward-facing step flow at Reh = 5,000, all with extremely coarse meshes. Excellent agreement with experimental and DNS data is observed with the first grid point located at up to y1+ = 500 and especially under adverse pressure gradients as well as in separated flows.
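Spalding's law-of-the-wall, from which the enrichment is constructed, gives y+ explicitly in terms of u+ and must be inverted numerically whenever a velocity is needed at a given wall distance. A sketch with conventional constants (kappa = 0.41, B = 5.0, and the Newton scheme are standard textbook choices, not taken from this paper):

```python
import math

KAPPA, B = 0.41, 5.0   # standard law-of-the-wall constants (assumed here)

def spalding_yplus(uplus):
    """Spalding's law-of-the-wall: y+ as an explicit function of u+."""
    ku = KAPPA * uplus
    return uplus + math.exp(-KAPPA * B) * (
        math.exp(ku) - 1.0 - ku - ku * ku / 2.0 - ku ** 3 / 6.0)

def uplus_from_yplus(yplus, tol=1e-10):
    """Invert Spalding's law by Newton iteration; u+ is only implicit.
    Start from the viscous-sublayer (u+ ~ y+) or the log-law guess."""
    u = yplus if yplus < 11.0 else math.log(yplus) / KAPPA + B
    for _ in range(50):
        ku = KAPPA * u
        f = spalding_yplus(u) - yplus
        # analytic derivative d(y+)/d(u+)
        df = 1.0 + math.exp(-KAPPA * B) * KAPPA * (
            math.exp(ku) - 1.0 - ku - ku * ku / 2.0)
        step = f / df
        u -= step
        if abs(step) < tol:
            break
    return u
```

The single formula smoothly bridges the viscous sublayer (u+ = y+ for small y+) and the logarithmic layer, which is what makes it attractive as an enrichment function.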
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
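The adaptive-HRF idea (scan the HRF peak delay and keep the delay whose regressor best fits the measured time series) can be sketched with a single-gamma HRF and a one-regressor GLM. The box-car task design, the delay grid, and the noise-free synthetic series are illustrative assumptions, not the study's data or exact HRF:

```python
import math

def gamma_hrf(peak_delay, length=30):
    """Single-gamma HRF sampled at 1 s: h(t) proportional to t^a * exp(-t/b);
    the kernel t^a * exp(-t/b) peaks at t = a*b, so b = peak_delay / a
    places the peak at the requested delay. Normalized to unit sum."""
    a = 6.0
    b = peak_delay / a
    h = [t ** a * math.exp(-t / b) for t in range(1, length + 1)]
    s = sum(h)
    return [v / s for v in h]

def convolve(stim, kernel):
    """Discrete convolution of a stimulus train with the HRF kernel."""
    out = [0.0] * len(stim)
    for t in range(len(stim)):
        for k, h in enumerate(kernel):
            if t - k >= 0:
                out[t] += stim[t - k] * h
    return out

def fit_rss(y, x):
    """One-regressor GLM: least-squares amplitude, returning the residual
    sum of squares used to score each candidate peak delay."""
    beta = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return sum((v - beta * u) ** 2 for u, v in zip(x, y))

# Illustrative noise-free series with a true 7 s peak delay, box-car task
stim = [1.0 if 10 <= t < 30 else 0.0 for t in range(120)]
y = [2.0 * v for v in convolve(stim, gamma_hrf(7.0))]
best_delay = min(range(4, 13),
                 key=lambda d: fit_rss(y, convolve(stim, gamma_hrf(float(d)))))
```

In the study's setting the same scan would be run separately on oxy-Hb and deoxy-Hb series, which is how the different optimal delays per signal and task emerge.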
NASA Astrophysics Data System (ADS)
Stradi, Daniele; Martinez, Umberto; Blom, Anders; Brandbyge, Mads; Stokbro, Kurt
2016-04-01
Metal-semiconductor contacts are a pillar of modern semiconductor technology. Historically, their microscopic understanding has been hampered by the inability of traditional analytical and numerical methods to fully capture the complex physics governing their operating principles. Here we introduce an atomistic approach based on density functional theory and nonequilibrium Green's function, which includes all the relevant ingredients required to model realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via I-Vbias curve simulations. We apply this method to characterize an Ag/Si interface relevant for photovoltaic applications and study the rectifying-to-Ohmic transition as a function of the semiconductor doping. We also demonstrate that the standard "activation energy" method for the analysis of I-Vbias data might be inaccurate for nonideal interfaces as it neglects electron tunneling, and that finite-size atomistic models have problems in describing these interfaces in the presence of doping due to a poor representation of space-charge effects. Conversely, the present method deals effectively with both issues, thus representing a valid alternative to conventional procedures for the accurate characterization of metal-semiconductor interfaces.
Optogenetic approaches to evaluate striatal function in animal models of Parkinson disease.
Parker, Krystal L; Kim, Youngcho; Alberico, Stephanie L; Emmons, Eric B; Narayanan, Nandakumar S
2016-03-01
Optogenetics refers to the ability to control cells that have been genetically modified to express light-sensitive ion channels. The introduction of optogenetic approaches has facilitated the dissection of neural circuits. Optogenetics allows for the precise stimulation and inhibition of specific sets of neurons and their projections with fine temporal specificity. These techniques are ideally suited to investigating neural circuitry underlying motor and cognitive dysfunction in animal models of human disease. Here, we focus on how optogenetics has been used over the last decade to probe striatal circuits that are involved in Parkinson disease, a neurodegenerative condition involving motor and cognitive abnormalities resulting from degeneration of midbrain dopaminergic neurons. The precise mechanisms underlying the striatal contribution to both cognitive and motor dysfunction in Parkinson disease are unknown. Although optogenetic approaches are somewhat removed from clinical use, insight from these studies can help identify novel therapeutic targets and may inspire new treatments for Parkinson disease. Elucidating how neuronal and behavioral functions are influenced and potentially rescued by optogenetic manipulation in animal models could prove to be translatable to humans. These insights can be used to guide future brain-stimulation approaches for motor and cognitive abnormalities in Parkinson disease and other neuropsychiatric diseases.
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, given the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually prohibitive. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the
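The "memory function" idea, treating the updated posterior pdf as a store of all information gathered so far, can be illustrated with a minimal conjugate-Gaussian update; the prior, measurements and variances below are invented for illustration and are not from the study.

```python
def bayes_update(mu0, var0, obs, obs_var):
    """One conjugate-Gaussian Bayesian update: precisions add, and the
    posterior mean is the precision-weighted average of prior and datum."""
    var1 = 1.0 / (1.0 / var0 + 1.0 / obs_var)
    mu1 = var1 * (mu0 / var0 + obs / obs_var)
    return mu1, var1

# Sequentially assimilate three noisy measurements of a soil property;
# the posterior after each step "remembers" everything seen before it.
mu, var = 0.0, 100.0                      # diffuse prior
for y in (1.2, 0.8, 1.0):
    mu, var = bayes_update(mu, var, y, obs_var=0.5)
print(round(mu, 2), var < 0.2)            # -> 1.0 True
```

Each intermediate posterior is a sufficient summary of all earlier data, which is exactly what lets time-lapse inversions stay mutually consistent instead of being solved independently.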
NASA Astrophysics Data System (ADS)
Zenzerovic, I.; Kropp, W.; Pieringer, A.
2016-08-01
Curve squeal is a strong tonal sound that may arise when a railway vehicle negotiates a tight curve. In contrast to frequency-domain models, time-domain models are able to capture the nonlinear and transient nature of curve squeal. However, these models are computationally expensive due to requirements for fine spatial and time discretization. In this paper, a computationally efficient engineering model for curve squeal in the time-domain is proposed. It is based on a steady-state point-contact model for the tangential wheel/rail contact and a Green's functions approach for wheel and rail dynamics. The squeal model also includes a simple model of sound radiation from the railway wheel from the literature. A validation of the tangential point-contact model against Kalker's transient variational contact model reveals that the point-contact model performs well within the squeal model up to at least 5 kHz. The proposed squeal model is applied to investigate the influence of lateral creepage, friction and wheel/rail contact position on squeal occurrence and amplitude. The study indicates a significant influence of the wheel/rail contact position on squeal frequencies and amplitudes. Friction and lateral creepage show an influence on squeal occurrence and amplitudes, but this is only secondary to the influence of the contact position.
Westerveld, Ard J; Kuck, Alexander; Schouten, Alfred C; Veltink, Peter H; van der Kooij, Herman
2012-01-01
Stroke often has a disabling effect on the ability to use the hand in a functional manner. Accurate finger and thumb positioning is necessary for many activities of daily living. In the current study, the feasibility of novel FES-based approaches for positioning the thumb and fingers for grasp and release of differently sized objects is evaluated. Assistance based on these approaches may be used in rehabilitation of grasp and release after stroke. A model predictive controller (MPC) is compared with a proportional (P) feedback controller. Both methods are compared on their performance in tracking reference trajectories and in their capability of grasping, holding and releasing objects. Both methods are able to selectively activate the fingers such that differently sized objects, selected from the Action Research Arm test, can be grasped. The MPC method is easier to use in practice, as it is based on a single identification of a model of the biological system. The P-controller has more parameters which need to be set correctly, and therefore needs more time to initialise. The current results are very promising. Evaluation in patients will be done to explore the possibilities of applying these methods in rehabilitation of grasp and release after stroke.
An overview of the recent approaches for terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.
2014-11-01
Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of the world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional-to-site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning at local and regional scales. This review will focus on three main areas of recent terroir research: (1) quantifying the influences of terroir components on plant growth
A conditional Granger causality model approach for group analysis in functional MRI
Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J.; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M.; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun
2011-01-01
The Granger causality model (GCM), derived from multivariate vector autoregressive models of data, has been employed for identifying effective connectivity in the human brain with functional MR imaging (fMRI) and for revealing complex temporal and spatial dynamics underlying a variety of cognitive processes. In the most recent fMRI effective connectivity measures, pairwise GCM has commonly been applied based on single voxel values or average values from specific brain areas at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of fMRI data with GCM. To compare the effectiveness of our approach with traditional pairwise GCM models, we applied a well-established conditional GCM to pre-selected time series of brain regions resulting from general linear model (GLM) and group spatial kernel independent component analysis (ICA) of an fMRI dataset in the temporal domain. Datasets consisting of one task-related and one resting-state fMRI were used to investigate connections among brain areas with the conditional GCM method. With the GLM-detected brain activation regions in the emotion-related cortex during the block-design paradigm, the conditional GCM method was applied to study the causality of the habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state dataset, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results have further shown a particular interacting pattern of default mode network (DMN) that can be characterized as both afferent and efferent influences on the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive (MVAR) model can achieve
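A bare-bones sketch of the conditional Granger idea: test whether the past of x improves the prediction of y once the past of y and a conditioning series z are already accounted for. This toy least-squares version and its synthetic signals are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conditional_granger(y, x, z, p=2):
    """Conditional Granger causality x -> y given z, as the log ratio of
    residual variances of two autoregressions for y: one on lags of
    (y, z) only, one also including lags of x."""
    n = len(y)
    def resid_var(series_list):
        cols = [np.ones(n - p)]
        for s in series_list:
            for k in range(1, p + 1):
                cols.append(s[p - k:n - k])   # lag-k values aligned with y[p:]
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        return np.var(y[p:] - X @ beta)
    return np.log(resid_var([y, z]) / resid_var([y, z, x]))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
z = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()  # x drives y
print(conditional_granger(y, x, z) > 1.0)  # -> True: strong x -> y influence
```

A positive statistic means the past of x carries predictive information about y beyond what y's and z's own histories provide, which is the criterion applied voxel- or region-wise in the fMRI setting.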
An overview of the recent approaches to terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.
2015-03-01
Notions of terroir and their conceptualization through agro-environmental sciences have become popular in many parts of the world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology and soil are the main environmental factors which make up the terroir effect on different scales. Often considered immutable culturally, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional-to-site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption of these technologies but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning on local and regional scales. This review will focus on two main areas of recent terroir research: (1) using new tools to unravel the biogeochemical cycles of both
An overview of the recent approaches for terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Costantini, Edoardo; Emmanuelle, Vaudour; Jones, Gregory; Mocali, Stefano
2014-05-01
Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of the world. Originally developed for wine, terroir is now investigated for fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional to site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and elaboration technologies has made the scale of study more valuable to the individual grower, resulting in greater adoption and application. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modeling approaches for terroir functional modeling, footprinting and zoning at local and regional scales. This review will focus on four main areas of recent terroir research: 1) quantifying the influences of terroir components on plant growth, fruit composition and quality, mostly examining climate-soil-water relationships; 2) the metagenomic approach as a new tool to unravel the biogeochemical cycles of both macro- and
Modeling solvation effects in real-space and real-time within density functional approaches
Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea
2015-10-14
The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
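The regularization described has a closed-form effect for a single apparent charge: spreading a point charge q into a normalized Gaussian of width σ turns the singular potential q/r into q·erf(r/(√2 σ))/r, which stays finite at the grid point. A minimal illustration in atomic units (the width value is an arbitrary choice here, not a parameter from the paper):

```python
from math import erf, sqrt, pi

def point_potential(q, r):
    """Bare Coulomb potential of a point charge: diverges as r -> 0."""
    return q / r

def gaussian_potential(q, r, sigma):
    """Potential of the same charge spread as a normalized Gaussian of
    width sigma: q * erf(r / (sqrt(2)*sigma)) / r, finite everywhere."""
    if r == 0.0:
        return q * sqrt(2.0 / pi) / sigma   # analytic r -> 0 limit
    return q * erf(r / (sqrt(2.0) * sigma)) / r

# Far from the charge the two forms agree; at a grid point next to the
# cavity surface the Gaussian form stays bounded instead of blowing up.
print(round(gaussian_potential(1.0, 5.0, 0.3), 6))  # -> 0.2, the point value
print(gaussian_potential(1.0, 0.0, 0.3) < 3.0)      # -> True, bounded at r = 0
```

Because erf saturates to 1 within a few widths, the smearing changes the potential only in the immediate vicinity of each apparent charge, leaving the long-range reaction field untouched.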
ERIC Educational Resources Information Center
Herndon, Mary Anne
1978-01-01
In a model of the functioning of short term memory, the encoding of information for subsequent storage in long term memory is simulated. In the encoding process, semantically equivalent paragraphs are detected for recombination into a macro information unit. (HOD)
Integrative approaches for modeling regulation and function of the respiratory system
Ben-Tal, Alona; Tawhai, Merryn H.
2013-01-01
Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system – which comprises the lungs and the neural circuitry that controls their ventilation - have been derived using simplifying assumptions to compartmentalise each component of the system and to define the interactions between components. These full system models often rely – through necessity - on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially-distributed models of ventilation and perfusion, or multi-circuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. PMID:24591490
A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada
2011-03-01
We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input, we predict the x- and p_T-dependence of all twist-2 T-even TMDs.
NASA Astrophysics Data System (ADS)
Choudhury, Pallabee; Chopra, Sumer; Roy, Ketan Singha; Sharma, Jyoti
2016-04-01
In this study, ground motions are estimated for scenario earthquakes of Mw 6.0, 6.5 and 7.0 at 17 sites in the Gujarat region using the empirical Green's function technique. The Dholavira earthquake of June 19, 2012 (Mw 5.1), which occurred in the Kachchh region of Gujarat, is considered as the element earthquake. We estimated the focal mechanism and source parameters of the element earthquake using standard methodologies. The moment tensor inversion technique is used to determine the fault plane solution (strike = 8°, dip = 51°, and rake = -7°). The seismic moment and the stress drop are 5.6 × 10^16 Nm and 120 bars, respectively. The validity of the approach was tested for a smaller earthquake. A few possible directivity scenarios were also tested to find out the effect of directivity on the level of ground motions. Our study reveals that source complexities and site effects play a very important role in deciding the level of ground motions at a site, which is difficult to model with GMPEs. Our results shed new light on the expected accelerations in the region and suggest that the Kachchh region can expect a maximum acceleration of around 500 cm/s² at a few sites near the source and around 200 cm/s² at most sites located within 50 km of the epicentre for a Mw 7.0 earthquake. The estimated ground accelerations can be used by administrators and planners as a guiding framework for undertaking mitigation investments and activities in the region.
A Hybrid Approach to Structure and Function Modeling of G Protein-Coupled Receptors.
Latek, Dorota; Bajda, Marek; Filipek, Sławomir
2016-04-25
The recent GPCR Dock 2013 assessment of serotonin receptor 5-HT1B and 5-HT2B, and smoothened receptor SMO targets, exposed the strengths and weaknesses of the currently used computational approaches. The test cases of 5-HT1B and 5-HT2B demonstrated that both the receptor structure and the ligand binding mode can be predicted with atomic-detail accuracy, as long as the target-template sequence similarity is relatively high. On the other hand, a low target-template sequence similarity, e.g., between SMO from the frizzled GPCR family and members of the rhodopsin family, hampers GPCR structure prediction and ligand docking. Indeed, in GPCR Dock 2013, accurate prediction of the SMO target was still beyond the capabilities of most research groups. Another bottleneck in current GPCR research, as demonstrated by the 5-HT2B target, is the reliable prediction of global conformational changes induced by activation of GPCRs. In this work, we report details of our protocol used during GPCR Dock 2013. Our structure prediction and ligand docking protocol was especially successful in the case of the 5-HT1B- and 5-HT2B-ergotamine complexes, for which we provide one of the most accurate predictions. In addition to a description of the GPCR Dock 2013 results, we propose a novel hybrid computational methodology to improve GPCR structure and function prediction. This computational methodology employs two separate rankings for filtering GPCR models. The first ranking is ligand-based while the second is based on the scoring scheme of the recently published BCL method. In this work, we prove that the use of knowledge-based potentials implemented in BCL is an efficient way to cope with major bottlenecks in GPCR structure prediction. Thereby, we also demonstrate that the knowledge-based potentials for membrane proteins were significantly improved, because of the recent surge in available experimental structures.
Optimization of global model composed of radial basis functions using the term-ranking approach
Cai, Peng; Tao, Chao; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize the global model composed of radial basis functions and thereby improve the predictability of the model. The effectiveness of the proposed method is examined using numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data while reducing the predictable component (periodic pitch) remaining in the residual signal.
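One plausible reading of term ranking, sketched here as greedy forward selection: repeatedly pick the radial-basis term most correlated with the current residual, refit the model, and record the order. This is a stand-in for the paper's algorithm; the centers, width and test function below are invented.

```python
import numpy as np

def rank_rbf_terms(x, y, centers, width=1.0):
    """Rank candidate Gaussian RBF terms by how much each reduces the
    residual error of the global model (greedy forward selection)."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))
    remaining = list(range(len(centers)))
    chosen, resid = [], y.copy()
    while remaining:
        # pick the term most correlated with the current residual
        scores = [abs(Phi[:, j] @ resid) / np.linalg.norm(Phi[:, j])
                  for j in remaining]
        j = remaining.pop(int(np.argmax(scores)))
        chosen.append(j)
        A = Phi[:, chosen]                        # refit with chosen terms
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
    return chosen

x = np.linspace(-3, 3, 200)
y = np.exp(-(x - 1.0) ** 2 / 2)            # one true bump centered at 1.0
centers = np.array([-2.0, 0.0, 1.0, 2.0])
print(rank_rbf_terms(x, y, centers)[0])    # -> 2, the center at 1.0
```

Truncating the model after the top-ranked terms is what trades a small loss of fit for a lower Bayesian information criterion, as the abstract describes.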
NASA Astrophysics Data System (ADS)
Bodegom, P. V.
2015-12-01
In recent years a number of approaches have been developed to provide alternatives to the use of plant functional types (PFTs) with constant vegetation characteristics for simulating vegetation responses to climate changes. In this presentation, an overview of those approaches and their challenges is given. Some new approaches aim at removing PFTs altogether by determining the combination of vegetation characteristics that would fit local conditions best. Others describe the variation in traits within PFTs as a function of environmental drivers, based on community assembly principles. In the first approach, after an equilibrium has been established, vegetation composition and its functional attributes can change by allowing the emergence of a new type that is more fit. In the latter case, changes in vegetation attributes in space and time are assumed to be the result of intraspecific variation, genetic adaptation and species turnover, without quantifying their respective importance. Hence, it is assumed that, by whatever mechanism, the community as a whole responds without major time lags to changes in environmental drivers. Recently, we showed that intraspecific variation is highly species- and trait-specific and that none of the current hypotheses on drivers of this variation seems to hold. Also, genetic adaptation varies considerably among species, and it is uncertain whether it will be fast enough to cope with climate change. Species turnover within a community is especially fast in herbaceous communities, but much slower in forest communities. Hence, it seems that the assumptions made may not hold for forested ecosystems, but solutions to deal with this do not yet exist. Even though the responsiveness of vegetation to environmental change may thus be overestimated, we showed that, upon implementation of trait-environment relationships, major changes in global vegetation distribution are projected, to extents similar to those without such responsiveness.
Stress and Resilience in Functional Somatic Syndromes – A Structural Equation Modeling Approach
Fischer, Susanne; Lemmer, Gunnar; Gollwitzer, Mario; Nater, Urs M.
2014-01-01
Background Stress has been suggested to play a role in the development and perpetuation of functional somatic syndromes. The mechanisms of how this might occur are not clear. Purpose We propose a multi-dimensional stress model which posits that childhood trauma increases adult stress reactivity (i.e., an individual's tendency to respond strongly to stressors) and reduces resilience (e.g., the belief in one's competence). This in turn facilitates the manifestation of functional somatic syndromes via chronic stress. We tested this model cross-sectionally and prospectively. Methods Young adults participated in a web survey at two time points. Structural equation modeling was used to test our model. The final sample consisted of 3,054 participants, and 429 of these participated in the follow-up survey. Results Our proposed model fit the data in the cross-sectional (χ²(21) = 48.808, p<.001, CFI = .995, TLI = .992, RMSEA = .021, 90% CI [.013, .029]) and prospective analyses (χ²(21) = 32.675, p<.05, CFI = .982, TLI = .969, RMSEA = .036, 90% CI [.001, .059]). Discussion Our findings have several clinical implications, suggesting a role for stress management training in the prevention and treatment of functional somatic syndromes. PMID:25396736
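The RMSEA values quoted above can be reproduced from the reported chi-square statistics, degrees of freedom and sample sizes with the standard formula RMSEA = sqrt(max((χ² - df) / (df (N - 1)), 0)):

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a chi-square
    statistic, its degrees of freedom, and the sample size."""
    return sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

# Values reported in the abstract
print(round(rmsea(48.808, 21, 3054), 3))  # -> 0.021 (cross-sectional, N = 3,054)
print(round(rmsea(32.675, 21, 429), 3))   # -> 0.036 (prospective, N = 429)
```

Both computed values match the reported fit indices, which is a quick internal-consistency check on the abstract's numbers.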
NASA Astrophysics Data System (ADS)
Mangazeev, Vladimir V.; Batchelor, Murray T.; Bazhanov, Vladimir V.; Dudalev, Michael Yu
2009-01-01
The universal scaling function of the square lattice Ising model in a magnetic field is obtained numerically via Baxter's variational corner transfer matrix approach. The high precision numerical data are in perfect agreement with the remarkable field theory results obtained by Fonseca and Zamolodchikov, as well as with many previously known exact and numerical results for the 2D Ising model. This includes excellent agreement with analytic results for the magnetic susceptibility obtained by Orrick, Nickel, Guttmann and Perk. In general, the high precision of the numerical results underlines the potential and full power of the variational corner transfer matrix approach.
Vitkin, Edward; Shlomi, Tomer
2012-01-01
Genome-scale metabolic network reconstructions are considered a key step in quantifying the genotype-phenotype relationship. We present a novel gap-filling approach, MetabolIc Reconstruction via functionAl GEnomics (MIRAGE), which identifies missing network reactions by integrating metabolic flux analysis and functional genomics data. MIRAGE's performance is demonstrated on the reconstruction of metabolic network models of E. coli and Synechocystis sp. and validated via existing networks for these species. Then, it is applied to reconstruct genome-scale metabolic network models for 36 sequenced cyanobacteria amenable for constraint-based modeling analysis and specifically for metabolic engineering. The reconstructed network models are supplied via standard SBML files. PMID:23194418
de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo
2014-01-01
Online consumer behavior in general, and online customer engagement with brands in particular, has become a major focus of research activity fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven, and much debate about the concept of Customer Engagement and its related constructs persists in the literature. In this paper, we propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behavior using questionnaire data, or for studies investigating other types of human behavior. The method we propose contains five main stages: symbolic regression analysis, graph building, community detection, evaluation of results, and finally, investigation of directed cycles and common feedback loops. The ‘communities’ of questionnaire items that emerge from our community detection method form possible ‘functional constructs’ inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such ‘functional constructs’, suggesting the method proposed here could be adopted as a new data-driven way of human behavior modeling. PMID:25036766
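A hedged sketch of the graph-building and community-detection stages on questionnaire data. This is a deliberate simplification: correlation thresholding plus connected components stands in for the paper's modularity-based community detection, and the questionnaire data are synthetic (two latent factors driving two groups of items):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic questionnaire: two latent factors drive two groups of three items each
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)] +
                        [f2 + 0.3 * rng.normal(size=n) for _ in range(3)])

# Graph building: connect items whose absolute correlation exceeds a threshold
corr = np.corrcoef(items, rowvar=False)
adj = (np.abs(corr) > 0.5) & ~np.eye(6, dtype=bool)

# Crude "community detection": connected components via depth-first search
def components(adj):
    seen, comps = set(), []
    for start in range(len(adj)):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(int(u) for u in np.flatnonzero(adj[v]))
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(components(adj))  # → [[0, 1, 2], [3, 4, 5]]
```

The recovered item groups play the role of the 'functional constructs' inferred from data rather than assumed from theory.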
Sapijanskas, Jurgis; Loreau, Michel
2010-12-01
The influence of diversity on ecosystem functioning and ecosystem services is now well established. Yet predictive mechanistic models that link species traits and community-level processes remain scarce, particularly for multitrophic systems. Here we revisit MacArthur's classical consumer resource model and develop a trait-based approach to predict the effects of consumer diversity on cascading extinctions and aggregated ecosystem processes in a two-trophic-level system. We show that functionally redundant efficient consumers generate top-down cascading extinctions. This counterintuitive result reveals the limits of the functional redundancy concept to predict the consequences of species deletion. Our model also predicts that the biodiversity-ecosystem functioning relationship is different for different ecosystem processes and depends on the range of variation of consumer traits in the regional species pool, which determines the sign of selection effects. Lastly, competition among resources and consumer generalism both weaken complementarity effects, which suggests that selection effects may prevail at higher trophic levels. Our work emphasizes the potential of trait-based approaches for transforming biodiversity and ecosystem functioning research into a more predictive science.
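A minimal numerical illustration of a MacArthur-style two-trophic-level consumer-resource model, with logistic resources and trait-based consumer uptake rates. All parameters and traits are invented for the sketch, not taken from the paper:

```python
import numpy as np

# Toy MacArthur consumer-resource model (parameters invented for illustration):
# two logistic resources R and two consumers N with trait-based uptake rates c[i, j].
r, K, e, m = 1.0, 10.0, 0.5, 0.2
c = np.array([[0.08, 0.02],   # consumer 0 prefers resource 0
              [0.02, 0.08]])  # consumer 1 prefers resource 1

R = np.array([5.0, 5.0])      # resource abundances
N = np.array([1.0, 1.0])      # consumer abundances
dt = 0.01
for _ in range(200_000):      # forward-Euler integration to t = 2000
    dR = r * R * (1 - R / K) - R * (c.T @ N)
    dN = N * (e * (c @ R) - m)
    R, N = R + dt * dR, N + dt * dN

print(np.round(R, 2), np.round(N, 2))  # → [4. 4.] [6. 6.]
```

With these symmetric traits the system settles at the coexistence equilibrium R* = m/(e·Σc) = 4 and N* determined by the resource balance; deleting a consumer or perturbing the trait matrix c is how cascading-extinction experiments of this kind are typically set up.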
Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin E-mail: hilatikh@purdue.edu; Rahman, Rajib; Klimeck, Gerhard
2015-10-28
A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that gives deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.
Veronese, Mattia; Gunn, Roger N; Zamuner, Stefano; Bertoldo, Alessandra
2013-02-01
Quantitative PET studies with arterial blood sampling usually require the correction of the measured total plasma activity for the presence of metabolites. In particular, if labelled metabolites are found in the plasma in significant amounts their presence has to be accounted for, because it is the concentration of the parent tracer which is required for data quantification. This is achieved by fitting a Parent Plasma fraction (PPf) model to discrete metabolite measurements. The commonly used method is based on an individual approach, i.e. for each subject the PPf model parameters are estimated from that subject's own metabolite samples, which are, in general, sparse and noisy. This fact can compromise the quality of the reconstructed arterial input functions, and, consequently, affect the quantification of tissue kinetic parameters. In this study, we proposed a Non-Linear Mixed Effect Modelling (NLMEM) approach to describe metabolite kinetics. Since NLMEM has been developed to provide robust parameter estimates in the case of sparse and/or noisy data, it has the potential to be a reliable method for plasma metabolite correction. Three different PET datasets were considered: [11C]-(+)-PHNO (54 scans), [11C]-PIB (22 scans) and [11C]-DASB (30 scans). For each tracer both simulated and measured data were considered and NLMEM performance was compared with that provided by individual analysis. Results showed that NLMEM provided improved estimates of the plasma parent input function over the individual approach when the metabolite data were sparse or contained outliers.
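For concreteness, an individual-approach fit of a PPf model to one subject's sparse, noisy metabolite samples might look as follows. The Hill-type PPf form, the sampling times, and all parameters are hypothetical; a full NLMEM fit would pool all subjects' data and is beyond a short sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

def ppf(t, tau, n):
    """Hill-type parent plasma fraction model (a common PPf choice; form assumed here)."""
    return 1.0 / (1.0 + (t / tau) ** n)

rng = np.random.default_rng(1)
t_samples = np.array([5.0, 15.0, 30.0, 60.0, 90.0])   # sparse discrete samples (minutes)
true_tau, true_n = 25.0, 1.5                          # hypothetical "true" parameters

# One subject's noisy metabolite measurements
y = ppf(t_samples, true_tau, true_n) + 0.02 * rng.normal(size=t_samples.size)

# Individual approach: fit this subject's own sparse, noisy samples
(tau_hat, n_hat), _ = curve_fit(ppf, t_samples, y, p0=(20.0, 1.0), bounds=(0.0, np.inf))
print(round(float(tau_hat), 1), round(float(n_hat), 2))
```

With only five noisy samples per subject, such individual fits can be unstable, which is precisely the failure mode that pooling across subjects via mixed-effects modelling is meant to address.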
Perveen, Nazia; Barot, Sébastien; Alvarez, Gaël; Klumpp, Katja; Martin, Raphael; Rapaport, Alain; Herfurth, Damien; Louault, Frédérique; Fontaine, Sébastien
2014-04-01
Integration of the priming effect (PE) in ecosystem models is crucial to better predict the consequences of global change on ecosystem carbon (C) dynamics and its feedbacks on climate. Over the last decade, many attempts have been made to model PE in soil. However, PE has not yet been incorporated into any ecosystem models. Here, we build plant/soil models to explore how PE and microbial diversity influence soil/plant interactions and ecosystem C and nitrogen (N) dynamics in response to global change (elevated CO2 and atmospheric N deposition). Our results show that plant persistence, soil organic matter (SOM) accumulation, and low N leaching in undisturbed ecosystems rely on a fine adjustment of microbial N mineralization to plant N uptake. This adjustment can be modeled in the SYMPHONY model by considering the destruction of SOM through PE, and the interactions between two microbial functional groups: SOM decomposers and SOM builders. After estimation of parameters, SYMPHONY provided realistic predictions of forage production, soil C storage and N leaching for a permanent grassland. Consistent with recent observations, SYMPHONY predicted a CO2-induced modification of soil microbial communities leading to an intensification of SOM mineralization and a decrease in the soil C stock. SYMPHONY also indicated that atmospheric N deposition may promote SOM accumulation via changes in the structure and metabolic activities of microbial communities. Collectively, these results suggest that the PE and functional role of microbial diversity may be incorporated in ecosystem models with a few additional parameters, improving the accuracy of predictions. PMID:24339186
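A toy caricature of the two-functional-group idea. All pools, rates, and functional forms below are invented for illustration; these are not the SYMPHONY equations:

```python
# Toy two-functional-group caricature (all pools, rates, and forms invented; these are
# not the SYMPHONY equations): decomposers D mine soil organic matter S, with mining
# stimulated by fresh plant C input F (the priming effect); builders B grow on fresh C;
# dead microbial biomass returns to SOM.
F = 1.0                       # fresh plant C supply (elevated CO2 would raise this)
d, u, e = 0.02, 0.5, 0.4      # SOM mining rate, fresh-C uptake, growth efficiency
mD, mB = 0.1, 0.1             # mortality rates

S, D, B = 100.0, 1.0, 1.0
dt = 0.01
for _ in range(500_000):      # forward-Euler integration to t = 5000
    prime = d * S * D * (1 + F)              # priming: fresh C boosts SOM mining
    dS = mD * D + mB * B - prime             # necromass inputs minus mining losses
    dD = e * prime - mD * D
    dB = e * u * F * B / (1 + B) - mB * B    # builders, self-limited on fresh C
    S, D, B = S + dt * dS, D + dt * dD, B + dt * dB

print(round(S, 2), round(D, 2), round(B, 2))
```

Even this caricature reproduces the qualitative prediction quoted above: raising F (the elevated-CO2 proxy) lowers the equilibrium SOM stock S, because priming-fueled decomposers mine SOM faster.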
NASA Astrophysics Data System (ADS)
Knowles, N.; Georgakakos, K. P.
2002-12-01
A new methodology is presented for development of macroscale hydrologic model percolation parameters from a spatial database of soil properties. This approach is applied to three distinct catchments within California: the Kings, American, and lower Eel River basins. Each unique vertical soil profile in these catchments is divided into an upper and a lower layer based on permeability gradients. A one-dimensional numerical unsaturated flow model is applied to each profile to yield percolation from upper to lower layers as a function of lower layer moisture deficit. A Holtan-type power law relationship is postulated to adequately represent profile percolation, with parameters estimated by curve-fitting the numerical model results. These parameters are then aggregated from the scale of the observable soil profiles to the level of the macroscale hydrologic model elements in a mass-conserving manner. In this process, the power law relationship is considered a describing function approximation to the numerical model response. To estimate parameter and flux uncertainties, a Monte Carlo approach is employed in which the soil property values are sampled randomly within the uncertainty ranges provided in the soils database or established by previous studies in the literature. The resulting ensembles of soil profiles are used in the describing function method to generate ensembles of hydrologic parameter spatial distributions for the catchments of interest. These ensembles are used as measures of hydrologic parameter uncertainty. The relationship between aggregate moisture flux and soil parameter distributions is explored, as is the dependence of mean parameter and aggregate flux values and their respective uncertainties on spatial scale. The methodology presented provides a useful means of investigating model-process observability given present-day soils databases, and a robust method for determining physically based macro-scale hydrologic parameters for subsequent
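The power-law curve-fitting and Monte Carlo steps can be sketched as follows. The "numerical model output" here is a synthetic stand-in with hypothetical parameters, and the perturbation magnitude is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the 1-D numerical model output: percolation q as a power law of
# lower-layer moisture deficit d (Holtan-type), with hypothetical "true" parameters.
A_true, B_true = 0.8, 1.6
deficit = np.linspace(0.05, 0.5, 20)
q_model = A_true * deficit ** B_true

# Curve-fit the power law in log space (exact here; real model output would deviate)
B_hat, logA_hat = np.polyfit(np.log(deficit), np.log(q_model), 1)
A_hat = np.exp(logA_hat)

# Monte Carlo over soil-property uncertainty: perturb the "numerical" output and refit
fits = []
for _ in range(1000):
    q_pert = q_model * np.exp(0.1 * rng.normal(size=q_model.size))
    b, la = np.polyfit(np.log(deficit), np.log(q_pert), 1)
    fits.append((np.exp(la), b))
A_mc, B_mc = np.array(fits).T
print(f"A = {A_hat:.2f}, B = {B_hat:.2f}; MC spread: sd(B) = {B_mc.std():.3f}")
```

The ensemble spread of the fitted (A, B) pairs is the kind of parameter-uncertainty measure the methodology aggregates up to the macroscale model elements.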
2-D Modeling of Nanoscale MOSFETs: Non-Equilibrium Green's Function Approach
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan
2001-01-01
We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Electron-electron interaction is treated within the Hartree approximation by solving NEGF and Poisson equations self-consistently. For the calculations presented here, parallelization is performed by distributing the solution of the NEGF equations to various processors, energy-wise. We present simulation of the "benchmark" MIT 25nm and 90nm MOSFETs and compare our results to those from the drift-diffusion simulator and the quantum-corrected results available. In the 25nm MOSFET, the channel length is less than ten times the electron wavelength, and the electron scattering time is comparable to its transit time. Our main results are: (1) Simulated drain subthreshold current characteristics are shown, where the potential profiles are calculated self-consistently by the corresponding simulation methods. The current predicted by our quantum simulation has a smaller subthreshold slope of the Vg dependence, which results in a higher threshold voltage. (2) When gate oxide thickness is less than 2 nm, gate oxide leakage is a primary factor which determines the off-current of a MOSFET. (3) Using our 2-D NEGF simulator, we found several ways to drastically decrease oxide leakage current without compromising drive current. (4) Quantum mechanically calculated electron density is much smaller than the background doping density in the poly silicon gate region near the oxide interface. This creates an additional effective gate voltage. Different ways to include this effect approximately will be discussed.
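The NEGF machinery itself can be illustrated on the standard 1-D tight-binding toy problem: a perfect N-site chain coupled to two semi-infinite leads, for which the lead self-energy is analytic. This is a textbook reduction, not the 2-D simulator described above:

```python
import numpy as np

# Minimal 1-D tight-binding NEGF sketch: N-site chain, hopping t, two semi-infinite leads.
t = 1.0
N = 5
E = 0.5                                   # energy inside the band |E| < 2t

H = np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)

# Analytic retarded self-energy of a semi-infinite 1-D lead attached at one end
sigma = (E - 1j * np.sqrt(4 * t**2 - E**2)) / 2
Sigma_L = np.zeros((N, N), complex); Sigma_L[0, 0] = sigma
Sigma_R = np.zeros((N, N), complex); Sigma_R[-1, -1] = sigma

G = np.linalg.inv(E * np.eye(N) - H - Sigma_L - Sigma_R)   # retarded Green's function
Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)                # broadening matrices
Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)

T = np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real      # Caroli transmission formula
print(round(T, 6))  # → 1.0 (a perfect chain transmits fully inside the band)
```

A real device calculation replaces H with the 2-D device Hamiltonian, iterates this Green's function solve self-consistently with Poisson's equation, and integrates the transmission over energy to obtain current.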
Tabacchi, G; Hutter, J; Mundy, C
2005-04-07
A combined linear response-frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair-potentials and the parameterization protocol involves only calculations on properly chosen subsystems. The method is applied to a series of alkali halides in different physical phases and reproduces experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.
Toma, Tudor; Bosman, Robert-Jan; Siebes, Arno; Peek, Niels; Abu-Hanna, Ameen
2010-08-01
An important problem in the Intensive Care is how to predict on a given day of stay the eventual hospital mortality for a specific patient. A recent approach to solve this problem suggested the use of frequent temporal sequences (FTSs) as predictors. Methods following this approach were evaluated in the past by inducing a model from a training set and validating the prognostic performance on an independent test set. Although this evaluative approach addresses the validity of the specific models induced in an experiment, it falls short of evaluating the inductive method itself. To achieve this, one must account for the inherent sources of variation in the experimental design. The main aim of this work is to demonstrate a procedure based on bootstrapping, specifically the .632 bootstrap procedure, for evaluating inductive methods that discover patterns, such as FTSs. A second aim is to apply this approach to find out whether a recently suggested inductive method that discovers FTSs of organ functioning status is superior over a traditional method that does not use temporal sequences when compared on each successive day of stay at the Intensive Care Unit. The use of bootstrapping with logistic regression using pre-specified covariates is known in the statistical literature. Using inductive methods of prognostic models based on temporal sequence discovery within the bootstrap procedure is however novel at least in predictive models in the Intensive Care. Our results of applying the bootstrap-based evaluative procedure demonstrate the superiority of the FTS-based inductive method over the traditional method in terms of discrimination as well as accuracy. In addition we illustrate the insights gained by the analyst into the discovered FTSs from the bootstrap samples.
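The .632 bootstrap estimate combines the apparent (training) error with the average out-of-bag error as err = 0.368 · err_app + 0.632 · err_oob. A generic sketch on synthetic data with a simple stand-in classifier (nearest centroid, not the FTS-based method evaluated in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-class data (a stand-in for day-of-stay ICU features)
n = 200
X = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(1.5, 1, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)

def fit_predict(Xtr, ytr, Xte):
    """Nearest-centroid classifier: the 'inductive method' being evaluated."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)

apparent = (fit_predict(X, y, X) != y).mean()     # optimistic training error

# Out-of-bag error over B bootstrap resamples: refit on each resample, test on the
# cases left out of that resample
B, oob_errs = 200, []
for _ in range(B):
    idx = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), idx)
    if oob.size == 0:
        continue
    oob_errs.append((fit_predict(X[idx], y[idx], X[oob]) != y[oob]).mean())
err_oob = np.mean(oob_errs)

err_632 = 0.368 * apparent + 0.632 * err_oob      # the .632 bootstrap estimate
print(f"apparent {apparent:.3f}  oob {err_oob:.3f}  .632 {err_632:.3f}")
```

Crucially, the whole inductive procedure, including any pattern discovery such as mining FTSs, must be rerun inside each bootstrap resample; fitting once and merely rescoring would leak information and defeat the purpose of the evaluation.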
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
NASA Astrophysics Data System (ADS)
Grewe, V.; Frömming, C.; Matthes, S.; Brinkop, S.; Ponater, M.; Dietmüller, S.; Jöckel, P.; Garny, H.; Tsati, E.; Dahlmann, K.; Søvde, O. A.; Fuglestvedt, J.; Berntsen, T. K.; Shine, K. P.; Irvine, E. A.; Champougny, T.; Hullah, P.
2014-01-01
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. This modelling chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry-climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling
NASA Astrophysics Data System (ADS)
Grewe, V.; Frömming, C.; Matthes, S.; Brinkop, S.; Ponater, M.; Dietmüller, S.; Jöckel, P.; Garny, H.; Tsati, E.; Søvde, O. A.; Fuglestvedt, J.; Berntsen, T. K.; Shine, K. P.; Irvine, E. A.; Champougny, T.; Hullah, P.
2013-08-01
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, forming contrails. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time, i.e. emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. It forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region). This is performed with the chemistry-climate model EMAC, extended by the two submodels AIRTRAC 1.0 and CONTRAIL 1.0, which describe the contribution of emissions to the composition of the atmosphere and the contrail formation. Numerous time-regions are efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the climate impact of the time-region emission. The result of the modelling chain comprises a four dimensional dataset in space and time, which we call climate cost functions, and which describe at each grid point and each point in time, the climate impact of an emission. In a third step, these climate cost functions are used in a traffic simulator (SAAM), coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to
Turan, Başak; Selçuki, Cenk
2014-09-01
Amino acids are constituents of proteins and enzymes which take part almost in all metabolic reactions. Glutamic acid, with an ability to form a negatively charged side chain, plays a major role in intra and intermolecular interactions of proteins, peptides, and enzymes. An exhaustive conformational analysis has been performed for all eight possible forms at B3LYP/cc-pVTZ level. All possible neutral, zwitterionic, protonated, and deprotonated forms of glutamic acid structures have been investigated in solution by using polarizable continuum model mimicking water as the solvent. Nine families based on the dihedral angles have been classified for eight glutamic acid forms. The electrostatic effects included in the solvent model usually stabilize the charged forms more. However, the stability of the zwitterionic form has been underestimated due to the lack of hydrogen bonding between the solute and solvent; therefore, it is observed that compact neutral glutamic acid structures are more stable in solution than they are in vacuum. Our calculations have shown that among all eight possible forms, some are not stable in solution and are immediately converted to other more stable forms. Comparison of isoelectronic glutamic acid forms indicated that one of the structures among possible zwitterionic and anionic forms may dominate over the other possible forms. Additional investigations using explicit solvent models are necessary to determine the stability of charged forms of glutamic acid in solution as our results clearly indicate that hydrogen bonding and its type have a major role in the structure and energy of conformers. PMID:25135067
Density functional approach for modeling CO2 pressurized polymer thin films in equilibrium.
Talreja, Manish; Kusaka, Isamu; Tomasko, David L
2009-02-28
We have used polymer density functional theory to analyze the equilibrium density profiles and interfacial properties of thin films of polymer in the presence of CO2. Surface tension, surface excess adsorption of CO2 on the polymer surface, and width of the interface are discussed. We have shown the changes in these properties in the presence of CO2 and with increasing film thickness, and their inverse linear relationship with increasing chain length. One of our important findings is the evidence of segregation of end segments toward the interface. We have introduced a new method of representing this phenomenon by means of Delta profiles, which show the increase in segregation owing to the presence of CO2 and with increasing chain length. We also make predictions for the octacosane-CO2 binary system near the critical point of CO2. Our results indicate qualitative trends that are comparable to similar experimental and simulation studies.
Medical Spanish: A Functional Approach.
ERIC Educational Resources Information Center
Hendrickson, James M.
A functional approach to language teaching begins with knowing how students intend to use the foreign language for specific purposes and in specific situations. Instructors of medical Spanish can begin by determining the specific language functions that their students must be able to express when communicating with Hispanic patients, by means of a…
NASA Astrophysics Data System (ADS)
Maitra, Subrata; Banerjee, Debamalya
2010-10-01
The present article concerns product quality and design improvement in relation to machinery failures and plant operational problems at an industrial blower fan company. The project aims to develop the product on the basis of standardized production parameters for sale in the market. Special attention is paid to blower fans ordered directly by the customer on the basis of the installed air capacity to be provided by the fan. Quality function deployment (QFD) is primarily a customer-oriented approach. The proposed model integrates QFD with the analytic hierarchy process (AHP) to select and rank decision criteria covering commercial and technical factors, and to measure the decision parameters for selecting the best product in a competitive environment. The present AHP-QFD model justifies the selection of a blower fan on the basis of expert group opinion, using pairwise comparisons of customer requirements and ergonomics-based technical design requirements. The steps involved in implementing the QFD-AHP approach and selecting weighted criteria may be helpful for similar industries balancing cost and utility in a competitive product.
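The AHP step, deriving priority weights from a pairwise-comparison matrix via its principal eigenvector and then checking judgment consistency, can be sketched as follows. The criteria and the comparison judgments are hypothetical:

```python
import numpy as np

# Pairwise-comparison matrix for three hypothetical blower-fan criteria
# (cost, airflow capacity, ergonomics) on Saaty's 1-9 scale; judgments invented.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # priority weights from the principal eigenvector

lam_max = float(eigvals[k].real)
n_c = A.shape[0]
CI = (lam_max - n_c) / (n_c - 1)        # consistency index
CR = CI / 0.58                          # consistency ratio (random index RI = 0.58 for n = 3)
print(np.round(w, 3), round(CR, 3))     # CR < 0.1 means acceptably consistent judgments
```

In an AHP-QFD workflow, the resulting weights would then scale the customer-requirement rows of the QFD house-of-quality matrix.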
[Partial least squares approach to functional analysis].
Preda, C
2006-01-01
We extend the partial least squares (PLS) approach to functional data represented in our models by sample paths of stochastic process with continuous time. Due to the infinite dimension, when functional data are used as a predictor for linear regression and classification models, the estimation problem is an ill-posed one. In this context, PLS offers a simple and efficient alternative to the methods based on the principal components of the stochastic process. We compare the results given by the PLS approach and other linear models using several datasets from economy, industry and medical fields. PMID:17124795
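A bare-bones PLS1 (NIPALS) regression, of the kind applied once a functional predictor has been discretized on a fine grid of sample-path values, can be sketched as follows. The data are synthetic and this is not the authors' implementation:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Bare-bones PLS1 (NIPALS): returns regression coefficients in the original X space."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    Xk, yk = Xc.copy(), yc.copy()
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)                 # weight vector
        t = Xk @ w                             # latent score
        p = Xk.T @ t / (t @ t)                 # X loading
        q = yk @ t / (t @ t)                   # y loading
        Xk -= np.outer(t, p)                   # deflate X
        yk = yk - q * t                        # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)        # coefficients in original space
    return B, X.mean(0), y.mean()

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 20))                 # e.g. a densely sampled curve per subject
beta = np.zeros(20); beta[3], beta[12] = 2.0, -1.0
y = X @ beta + 0.01 * rng.normal(size=100)

B, xm, ym = pls1(X, y, n_comp=5)
pred = (X - xm) @ B + ym
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))
```

Because each component is built from a few data-driven directions rather than from an inverse of the full covariance, the fit remains well-posed even when the number of grid points exceeds the number of subjects, which is the ill-posedness the abstract refers to.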
Ian Robertson
2007-04-28
Development and validation of constitutive models for polycrystalline materials subjected to high strain-rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions. Accurately accounting for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be integrated fully with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experiment is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain-rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models. One aspect of the program involves the direct observation of specific mechanisms of micro-plasticity, as these indicate the boundary value problem that should be addressed. This focus on the pre-yield region in the quasi-static effort (the elasto-plastic transition) is also a tractable one from an
Various modeling approaches have been developed for metal binding on humic substances. However, most of these models are still curve-fitting exercises: the resulting set of parameters, such as affinity constants (or their distribution), is found to depend on pH, ionic stren...
Tang, Jau
1996-02-01
As an alternative approach to better explain the physical mechanisms of quantum interference and the origins of uncertainty broadening, a linear hopping model is proposed with "color-varying" dynamics to reflect fast exchange between time-reversed states. Intricate relations between this model, particle-wave dualism, and relativity are discussed. The wave function is shown to possess the dual characteristics of a stable, localized "soliton-like" de Broglie wavelet and a delocalized, interfering Schroedinger carrier wave function.
NASA Astrophysics Data System (ADS)
Pavlick, R.; Drewry, D. T.; Bohn, K.; Reu, B.; Kleidon, A.
2013-06-01
Terrestrial biosphere models typically abstract the immense diversity of vegetation forms and functioning into a relatively small set of predefined semi-empirical plant functional types (PFTs). There is growing evidence, however, from the field ecology community as well as from modelling studies that current PFT schemes may not adequately represent the observed variations in plant functional traits and their effect on ecosystem functioning. In this paper, we introduce the Jena Diversity-Dynamic Global Vegetation Model (JeDi-DGVM) as a new approach to terrestrial biosphere modelling with a richer representation of functional diversity than traditional modelling approaches based on a small number of fixed PFTs. JeDi-DGVM simulates the performance of a large number of randomly generated plant growth strategies, each defined by a set of 15 trait parameters which characterize various aspects of plant functioning including carbon allocation, ecophysiology and phenology. Each trait parameter is involved in one or more functional trade-offs. These trade-offs ultimately determine whether a strategy is able to survive under the climatic conditions in a given model grid cell and its performance relative to the other strategies. The biogeochemical fluxes and land surface properties of the individual strategies are aggregated to the grid-cell scale using a mass-based weighting scheme. We evaluate the simulated global biogeochemical patterns against a variety of field and satellite-based observations following a protocol established by the Carbon-Land Model Intercomparison Project. The land surface fluxes and vegetation structural properties are reasonably well simulated by JeDi-DGVM, and compare favourably with other state-of-the-art global vegetation models. We also evaluate the simulated patterns of functional diversity and the sensitivity of the JeDi-DGVM modelling approach to the number of sampled strategies. Altogether, the results demonstrate the parsimonious and flexible
Choi, Eunhee; Tang, Fengyan; Kim, Sung-Geun; Turk, Phillip
2016-10-01
This study examined the longitudinal relationships between functional health in later years and three types of productive activities: volunteering, full-time work, and part-time work. Using data from five waves (2000-2008) of the Health and Retirement Study, we applied multivariate latent growth curve modeling to examine the longitudinal relationships among individuals aged 50 or over. Functional health was measured by limitations in activities of daily living. Individuals who volunteered or worked either full time or part time exhibited a slower decline in functional health than nonparticipants. Significant associations were also found between initial functional health and longitudinal changes in productive activity participation. This study provides additional support for the benefits of productive activities later in life; engagement in volunteering and employment is indeed associated with better functional health in middle and old age. PMID:27461262
ERIC Educational Resources Information Center
Lloyd, Rebecca
2015-01-01
Background: Physical Education (PE) programmes are expanding to include alternative activities yet what is missing is a conceptual model that facilitates how the learning process may be understood and assessed beyond the dominant sport-technique paradigm. Purpose: The purpose of this article was to feature the emergence of a Function-to-Flow (F2F)…
Hadjipantelis, P. Z.; Aston, J. A. D.; Müller, H. G.; Evans, J. P.
2015-01-01
Mandarin Chinese is characterized by being a tonal language; the pitch (or F0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online. PMID:26692591
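The amplitude-decomposition step can be sketched on a common grid. The snippet below covers only the functional-PCA building block, with synthetic stand-ins for F0 contours; the joint compositional/mixed-effects model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 40)

# Synthetic stand-ins for F0 contours: a mean curve plus two random modes
mean_curve = 200.0 + 30.0 * np.sin(np.pi * grid)
scores = rng.standard_normal((80, 2)) * np.array([15.0, 5.0])
modes = np.vstack([np.cos(np.pi * grid), np.sin(2 * np.pi * grid)])
curves = mean_curve + scores @ modes

def fpca(curves, ncomp):
    """Functional PCA on a common grid via SVD of the centered curves."""
    mu = curves.mean(axis=0)
    U, S, Vt = np.linalg.svd(curves - mu, full_matrices=False)
    comp = Vt[:ncomp]                    # principal component functions
    pc_scores = (curves - mu) @ comp.T   # per-curve scores
    var_explained = S[:ncomp]**2 / (S**2).sum()
    return mu, comp, pc_scores, var_explained

mu, comp, pc, ve = fpca(curves, ncomp=2)
```

Because the synthetic curves are built from exactly two modes, two components reconstruct them essentially exactly; real F0 data would spread variance over more components.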
Pe'er, Guy; Henle, Klaus; Dislich, Claudia; Frank, Karin
2011-01-01
Landscape connectivity is a key factor determining the viability of populations in fragmented landscapes. Predicting ‘functional connectivity’, namely whether a patch or a landscape functions as connected from the perspective of a focal species, poses various challenges. First, empirical data on the movement behaviour of species are often scarce. Second, animal-landscape interactions are bound to yield complex patterns. Lastly, functional connectivity involves various components that are rarely assessed separately. We introduce the spatially explicit, individual-based model FunCon as a means to distinguish between components of functional connectivity and to assess how each of them affects the sensitivity of species and communities to landscape structures. We then present the results of exploratory simulations over six landscapes of different fragmentation levels and across a range of hypothetical bird species that differ in their response to habitat edges. i) Our results demonstrate that estimations of functional connectivity depend not only on the response of species to edges (avoidance versus penetration into the matrix), the movement mode investigated (home range movements versus dispersal), and the way in which the matrix is being crossed (random walk versus gap crossing), but also on the choice of connectivity measure (in this case, the model output examined). ii) We further show a strong effect of the mortality scenario applied, indicating that movement decisions that do not fully match the mortality risks are likely to reduce connectivity and enhance sensitivity to fragmentation. iii) Despite these complexities, some consistent patterns emerged. For instance, the ranking order of landscapes in terms of functional connectivity was mostly consistent across the entire range of hypothetical species, indicating that simple landscape indices can potentially serve as valuable surrogates for functional connectivity. Yet such simplifications must be carefully
NASA Astrophysics Data System (ADS)
Pavlick, R.; Drewry, D. T.; Bohn, K.; Reu, B.; Kleidon, A.
2012-04-01
Dynamic Global Vegetation Models (DGVMs) typically abstract the immense diversity of vegetation forms and functioning into a relatively small set of predefined semi-empirical Plant Functional Types (PFTs). There is growing evidence, however, from the field ecology community as well as from modelling studies that current PFT schemes may not adequately represent the observed variations in plant functional traits and their effect on ecosystem functioning. In this paper, we introduce the Jena Diversity DGVM (JeDi-DGVM) as a new approach to global vegetation modelling with a richer representation of functional diversity than traditional modelling approaches based on a small number of fixed PFTs. JeDi-DGVM simulates the performance of a large number of randomly-generated plant growth strategies (PGSs), each defined by a set of 15 trait parameters which characterize various aspects of plant functioning including carbon allocation, ecophysiology and phenology. Each trait parameter is involved in one or more functional trade-offs. These trade-offs ultimately determine whether a PGS is able to survive under the climatic conditions in a given model grid cell and its performance relative to the other PGSs. The biogeochemical fluxes and land-surface properties of the individual PGSs are aggregated to the grid cell scale using a mass-based weighting scheme. Simulated global biogeochemical and biogeographical patterns are evaluated against a variety of field and satellite-based observations following a protocol established by the Carbon-Land Model Intercomparison Project. The land surface fluxes and vegetation structural properties are reasonably well simulated by JeDi-DGVM, and compare favorably with other state-of-the-art terrestrial biosphere models. This is despite the parameters describing the ecophysiological functioning and allometry of JeDi-DGVM plants evolving as a function of vegetation survival in a given climate, as opposed to typical approaches that fix land surface
Modeling approaches for active systems
NASA Astrophysics Data System (ADS)
Herold, Sven; Atzrodt, Heiko; Mayer, Dirk; Thomaier, Martin
2006-03-01
To solve a wide range of vibration problems with the active structures technology, different simulation approaches for several models are needed. The selection of an appropriate modeling strategy depends, among other factors, on the frequency range, the modal density and the control target. An active system consists of several components: the mechanical structure, at least one sensor and actuator, signal conditioning electronics and the controller. For each individual part of the active system the simulation approaches can be different. To integrate the several modeling approaches into an active system simulation and to ensure a highly efficient and accurate calculation, all sub-models must harmonize. For this purpose, the structural models considered in this article are modal state-space formulations for the lower frequency range and transfer function based models for the higher frequency range. The modal state-space formulations are derived from finite element models and/or experimental modal analyses. The structural models based on transfer functions, in contrast, are derived directly from measurements. The transfer functions are identified with the Steiglitz-McBride iteration method. To convert them from the z-domain to the s-domain a least squares solution is implemented. An analytical approach is used to derive models of active interfaces. These models are transferred into impedance formulations. To couple mechanical and electrical sub-systems with the active materials, the concept of impedance modeling was successfully tested. The impedance models are enhanced by adapting them to adequate measurements. The controller design strongly depends on the frequency range and the number of modes to be controlled. To control systems with a small number of modes, techniques such as active damping or independent modal space control may be used, whereas in the case of systems with a large number of modes or with modes that are not well separated, other control
Zhang, Jeff L.; Rusinek, Henry; Bokacheva, Louisa; Lerman, Lilach O.; Chen, Qun; Prince, Chekema; Oesingmann, Niels; Song, Ting; Lee, Vivian S.
2009-01-01
A three-compartment model is proposed for analyzing magnetic resonance renography (MRR) and computed tomography renography (CTR) data to derive clinically useful parameters such as glomerular filtration rate (GFR) and renal plasma flow (RPF). The model fits the convolution of the measured input and the predefined impulse retention functions to the measured tissue curves. A MRR study of 10 patients showed that relative root mean square errors by the model were significantly lower than errors for a previously reported three-compartmental model (11.6% ± 4.9 vs 15.5% ± 4.1; P < 0.001). GFR estimates correlated well with reference values by 99mTc-DTPA scintigraphy (correlation coefficient r = 0.82), and for RPF, r = 0.80. Parameter-sensitivity analysis and Monte Carlo simulation indicated that model parameters could be reliably identified. When the model was applied to CTR in five pigs, expected increases in RPF and GFR due to acetylcholine were detected with greater consistency than with the previous model. These results support the reliability and validity of the new model in computing GFR, RPF, and renal mean transit times from MR and CT data. PMID:18228576
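The central fitting idea, convolving the measured input with a predefined impulse retention function, can be sketched as follows. The retention function and parameter names here are illustrative placeholders, not the paper's three-compartment kernels.

```python
import numpy as np

def tissue_curve(aif, dt, f, k1, k2):
    """Convolve a measured arterial input function with an impulse
    retention function R(t). R(t) here is a simple two-exponential
    stand-in; f, k1, k2 are illustrative parameter names."""
    t = np.arange(len(aif)) * dt
    R = f * (np.exp(-k1 * t) + 0.5 * np.exp(-k2 * t))
    return np.convolve(aif, R)[:len(aif)] * dt

dt = 2.0                                 # seconds per frame
t = np.arange(60) * dt
aif = (t / 10.0) * np.exp(-t / 10.0)     # gamma-variate-like input
curve = tissue_curve(aif, dt, f=1.2, k1=0.05, k2=0.01)
```

In an actual analysis this forward model would sit inside a nonlinear least-squares loop, with the retention parameters adjusted until the convolved curve matches the measured tissue curve.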
Pan, Shin-Liang; Wu, Hui-Min; Yen, Amy Ming-Fang; Chen, Tony Hsiu-Hsi
2007-12-20
Few attempts have been made to model the dynamics of stroke-related disability. It is possible though, using panel data and multi-state Markov regression models that incorporate measured covariates and latent variables (random effects). This study aimed to model a series of functional transitions (following a first stroke) using a three-state Markov model with or without considering random effects. Several proportional hazards parameterizations were considered. A Bayesian approach that utilizes the Markov Chain Monte Carlo (MCMC) and Gibbs sampling functionality of WinBUGS (a Windows-based Bayesian software package) was developed to generate the marginal posterior distributions of the various transition parameters (e.g. the transition rates and transition probabilities). Model building and comparisons were guided by reference to the deviance information criterion (DIC). Of the four proportional hazards models considered, exponential regression was preferred because it led to the smallest deviances. Adding random effects further improved the model fit. Of the covariates considered, only age, infarct size, and baseline functional status were significant. By using our final model we were able to make individual predictions about functional recovery in stroke patients. PMID:17676712
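Setting aside the Bayesian estimation layer, the transition machinery of such a three-state model reduces to P(t) = exp(Qt) for a transition rate matrix Q. A minimal sketch, with illustrative states and rates rather than the fitted stroke-cohort values:

```python
import numpy as np

def transition_probs(Q, t, nterms=60):
    """P(t) = exp(Q t) via a truncated Taylor series (adequate for
    the small ||Q t|| used here)."""
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, nterms):
        term = term @ (Q * t) / k
        P = P + term
    return P

# States: 0 = functionally independent, 1 = dependent, 2 = dead (absorbing);
# the rates below are invented for illustration.
Q = np.array([[-0.20,  0.15,  0.05],
              [ 0.10, -0.25,  0.15],
              [ 0.00,  0.00,  0.00]])
P1 = transition_probs(Q, t=1.0)   # one-year transition probability matrix
```

Each row of P(t) sums to one, and the absorbing death state maps to itself with probability one; covariates would enter by making the entries of Q functions of age, infarct size, and baseline status.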
Two-dimensional N=(2,2) Wess-Zumino model in the functional renormalization group approach
Synatschke-Czerwonka, Franziska; Fischbacher, Thomas; Bergner, Georg
2010-10-15
We study the supersymmetric N=(2,2) Wess-Zumino model in two dimensions with the functional renormalization group. At leading order in the supercovariant derivative expansion we recover the nonrenormalization theorem which states that the superpotential has no running couplings. Beyond leading order the renormalization of the bare mass is caused by a momentum-dependent wave function renormalization. To deal with the partial differential equations we have developed a numerical toolbox called FlowPy. For weak couplings the quantum corrections to the bare mass found in lattice simulations are reproduced with high accuracy. But in the regime with intermediate couplings higher-order operators that are not constrained by the nonrenormalization theorem yield the dominating contribution to the renormalized mass.
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
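The least-squares line such calculators produce can be reproduced in a few lines; the arm-span/height data below are invented for illustration and are not taken from the study.

```python
import numpy as np

# Made-up classroom data: arm span (cm) versus height (cm)
arm_span = np.array([150.0, 155.0, 160.0, 165.0, 170.0, 175.0, 180.0])
height = np.array([152.0, 153.0, 161.0, 167.0, 169.0, 177.0, 179.0])

# The degree-1 least-squares fit a graphics/CAS calculator reports
slope, intercept = np.polyfit(arm_span, height, 1)
predicted = slope * arm_span + intercept
r = np.corrcoef(arm_span, height)[0, 1]
```

The slope and intercept are then exactly the "parameters" of the linear function, grounded in data the students collected themselves rather than introduced abstractly.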
NASA Astrophysics Data System (ADS)
Leung, H. S.; Ngan, A. H. W.
2016-06-01
It has long been recognized that a successful strategy for computational plasticity will have to bridge across the meso scale in which the interactions of high quantities of dislocations dominate. In this work, a new meso-scale scheme based on the full dynamics of dislocation-density functions is proposed. In this scheme, the evolution of the dislocation-density functions is derived from a coarse-graining procedure which clearly defines the relationship between the discrete-line and density representations of the dislocation microstructure. Full dynamics of the dislocation-density functions are considered based on an "all-dislocation" concept in which statistically stored dislocations are preserved and treated in the same way as geometrically necessary dislocations. Elastic interactions between dislocations in a 3D space are treated in accordance with Mura's formula for eigen stress. Dislocation generation is considered as a consequence of dislocations to maintain their connectivity, and a special scheme is devised for this purpose. The model is applied to simulate a number of intensive microstructures involving discrete dislocation events, including loop expansion and shrinkage under applied and self stress, dipole annihilation, and Orowan looping. The scheme can also handle high densities of dislocations present in extensive microstructures.
Kavitha, Rengarajan; Karunagaran, Subramanian; Chandrabose, Subramaniam Subhash; Lee, Keun Woo; Meganathan, Chandrasekaran
2015-12-01
Fructose catabolism starts with phosphorylation of d-fructose to fructose 1-phosphate, which is performed by ketohexokinase (KHK). Fructose metabolism may be the key to understanding the effects of long-term fructose consumption on obesity, diabetes and metabolic states in western populations. The inhibition of KHK has medicinally potential roles in fructose metabolism and the metabolic syndrome. To identify the essential chemical features for KHK inhibition, a three-dimensional (3D) chemical-feature-based QSAR pharmacophore model was developed for the first time by using Discovery Studio v2.5 (DS). The best pharmacophore hypothesis (Hypo1), consisting of two hydrogen-bond donors and two hydrophobic features, exhibited a high correlation coefficient (0.97), a high cost difference (76.1) and a low RMS value (0.66). The robustness and predictability of Hypo1 were validated by Fisher's randomization method, a test set, and a decoy set. Subsequently, chemical databases such as NCI, Chembridge and Maybridge were screened against the validated Hypo1. The screened compounds were further analyzed by applying drug-like filters such as Lipinski's rule of five, ADME properties, and molecular docking studies. Further, the highest occupied molecular orbital, lowest unoccupied molecular orbital and energy gap values were calculated for the hit compounds using density functional theory. Finally, 3 hit compounds were selected based on their good molecular interactions with key amino acids in the KHK active site, GOLD fitness score, and lowest energy gaps.
NASA Astrophysics Data System (ADS)
Hibbard, Bill
2012-05-01
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
Menouar, Salah; Maamache, Mustapha; Choi, Jeong Ryeol
2010-08-15
The quantum states of the time-dependent coupled oscillator model for charged particles subjected to a variable magnetic field are investigated using invariant operator methods. To do this, we have taken advantage of an alternative method, the so-called unitary transformation approach, available in the framework of quantum mechanics, as well as a generalized canonical transformation method in the classical regime. The transformed quantum Hamiltonian is obtained using suitable unitary operators and is represented in terms of two independent harmonic oscillators which have the same frequencies as those of the classically transformed ones. Starting from the wave functions in the transformed system, we have derived the full wave functions in the original system with the help of the unitary operators. One can easily obtain a complete description of how the charged particle behaves under the given Hamiltonian by taking advantage of these analytical wave functions.
Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2013-01-01
A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
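The frequency-domain fitting idea can be sketched for a first-order system. The equation-error least squares below is a simplified stand-in for the paper's orthogonal modeling functions, and all system parameters are invented.

```python
import numpy as np

# First-order test system y' = -a*y + b*u, discretized with forward Euler
a_true, b_true, dt, n = 5.0, 5.0, 0.01, 2048
u = np.zeros(n)
u[0] = 1.0 / dt                    # unit-area impulse input
y = np.zeros(n)
for i in range(n - 1):
    y[i + 1] = y[i] + dt * (-a_true * y[i] + b_true * u[i])

# Fourier transform the time series to the frequency domain
U = np.fft.rfft(u) * dt
Y = np.fft.rfft(y) * dt
w = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
H = Y[1:40] / U[1:40]              # empirical frequency response (low band)
s = 1j * w[1:40]

# Equation-error least squares for H(s) = b/(s + a):  H*s = b - a*H
A = np.column_stack([np.ones_like(s), -H])
coef, *_ = np.linalg.lstsq(A, H * s, rcond=None)
b_est, a_est = coef.real
```

The recovered pole and gain carry a small bias from the Euler discretization; the orthogonal-function formulation addresses the conditioning and model-structure issues that this naive linear fit ignores.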
Approaches for modeling magnetic nanoparticle dynamics
Reeves, Daniel B; Weaver, John B
2014-01-01
Magnetic nanoparticles are useful biological probes as well as therapeutic agents. Several approaches have been used to model nanoparticle magnetization dynamics for both Brownian and Néel rotation. The magnetizations are often of interest and can be compared with experimental results. Here we summarize these approaches, including the Stoner-Wohlfarth approach and stochastic approaches that incorporate thermal fluctuations. Non-equilibrium temperature effects can be described by a distribution function approach (the Fokker-Planck equation) or a stochastic differential equation (the Langevin equation). Approximate models in several regimes can be derived from these general approaches to simplify implementation. PMID:25271360
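The two stochastic routes mentioned can be illustrated side by side in one dimension: integrate a Langevin equation for an ensemble and check it against the stationary solution of the corresponding Fokker-Planck equation. This overdamped scalar sketch is a stand-in, not the full Néel/Brownian rotational dynamics.

```python
import numpy as np

rng = np.random.default_rng(3)

def langevin_ou(tau, D, dt, nsteps, nwalkers):
    """Euler-Maruyama integration of the overdamped Langevin equation
    dx = -(x/tau) dt + sqrt(2 D) dW for an ensemble of walkers."""
    x = np.zeros(nwalkers)
    for _ in range(nsteps):
        x += -(x / tau) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(nwalkers)
    return x

x = langevin_ou(tau=1.0, D=0.5, dt=1e-3, nsteps=5000, nwalkers=2000)
# The matching Fokker-Planck equation predicts a stationary Gaussian
# with mean 0 and variance D * tau = 0.5
```

Agreement between the ensemble statistics and the Fokker-Planck prediction is the standard consistency check before adding the field- and anisotropy-dependent torques of a real nanoparticle model.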
Gehring, Ulrike; Hoek, Gerard; Keuken, Menno; Jonkers, Sander; Beelen, Rob; Eeftens, Marloes; Postma, Dirkje S.; Brunekreef, Bert
2015-01-01
2015. Air pollution and lung function in Dutch children: a comparison of exposure estimates and associations based on land use regression and dispersion exposure modeling approaches. Environ Health Perspect 123:847–851; http://dx.doi.org/10.1289/ehp.1408541 PMID:25839747
Pineda, Jaime A.; Friedrich, Elisabeth V. C.; LaMarca, Kristen
2014-01-01
Autism Spectrum Disorder (ASD) is an increasingly prevalent condition with core deficits in the social domain. Understanding its neuroetiology is critical to providing insights into the relationship between neuroanatomy, physiology and social behaviors, including imitation learning, language, empathy, theory of mind, and even self-awareness. Equally important is the need to find ways to arrest its increasing prevalence and to ameliorate its symptoms. In this review, we highlight neurofeedback studies as viable treatment options for high-functioning as well as low-functioning children with ASD. Lower-functioning groups have the greatest need for diagnosis and treatment, the greatest barrier to communication, and may experience the greatest benefit if a treatment can improve function or prevent progression of the disorder at an early stage. Therefore, we focus on neurofeedback interventions combined with other kinds of behavioral conditioning to induce neuroplastic changes that can address the full spectrum of the autism phenotype. PMID:25147521
Röling, Wilfred F M; van Bodegom, Peter M
2014-01-01
Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches. PMID:24723922
NASA Astrophysics Data System (ADS)
Lee, Ji-Hwan; Tak, Youngjoo; Lee, Taehun; Soon, Aloysius
Ceria (CeO2-x) is widely studied as a choice electrolyte material for intermediate-temperature (~800 K) solid oxide fuel cells. At this temperature, maintaining the chemical stability and thermal-mechanical integrity of this oxide is of utmost importance. To understand its thermal-elastic properties, we first test the influence of various approximations to the density-functional theory (DFT) xc functionals on specific thermal-elastic properties of both CeO2 and Ce2O3. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation (GGA-PBE) with and without an additional Hubbard U applied to the 4f electrons of Ce, as well as the recently popularized hybrid functional due to Heyd-Scuseria-Ernzerhof (HSE06). Next, we couple this to a volume-dependent Debye-Grüneisen model to determine the thermodynamic quantities of ceria at arbitrary temperatures. We find that an explicit description of the strong correlation (e.g. via the DFT+U and hybrid-functional approaches) is necessary for good agreement with experimental values, in contrast to the mean-field treatment in standard xc approximations (such as LDA or GGA-PBE). We acknowledge support from the Samsung Research Funding Center of Samsung Electronics (SRFC-MA1501-03).
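The lattice-thermodynamics step can be sketched with the plain Debye expression for the phonon heat capacity; the Debye temperature used below is an illustrative value, not one derived from the DFT calculations described.

```python
import numpy as np

def debye_cv(T, theta_D):
    """Phonon heat capacity per atom in the Debye model,
    C_v = 9 k_B (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx.
    This is the lattice part of a Debye-Grueneisen treatment; theta_D
    is an assumed, illustrative value."""
    kB = 1.380649e-23  # J/K
    x = np.linspace(1e-8, theta_D / T, 20001)
    f = x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    integral = np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0])  # trapezoid rule
    return 9.0 * kB * (T / theta_D)**3 * integral

cv_high = debye_cv(T=2000.0, theta_D=480.0)  # approaches 3 k_B (Dulong-Petit)
cv_low = debye_cv(T=50.0, theta_D=480.0)     # suppressed, T^3 regime
```

In a Debye-Grüneisen treatment, theta_D itself becomes volume-dependent through the computed elastic constants, which is how the DFT inputs propagate into the finite-temperature thermodynamics.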
Modeling mitochondrial function.
Balaban, Robert S
2006-12-01
The mitochondrion represents a unique opportunity to apply mathematical modeling to a complex biological system. Understanding mitochondrial function and control is important since this organelle is critical in energy metabolism as well as playing key roles in biochemical synthesis, redox control/signaling, and apoptosis. A mathematical model, or hypothesis, provides several useful insights, including a rigorous test of the consensus view of the operation of a biological process as well as methods for testing and creating new hypotheses. The advantages of the mitochondrial system for applying a mathematical model include the relative simplicity and understanding of the matrix reactions, the ability to study the mitochondrion as an independent, contained organelle, and, most importantly, the fact that one can dynamically measure many of the internal reaction intermediates online. The developing ability to internally monitor events within the metabolic network, rather than just the inflow and outflow, is extremely useful in placing critical bounds on complex mathematical models built from the individual reaction mechanisms available. However, many serious problems remain in creating a working model of mitochondrial function, including the incomplete definition of metabolic pathways, the uncertainty of using in vitro enzyme kinetics and regulatory data in the intact system, and the unknown chemical activities of relevant molecules in the matrix. Despite these formidable limitations, the advantages of the mitochondrial system make it one of the best defined mammalian metabolic networks that can be used as a model system for understanding the application and use of mathematical models to study biological systems.
NASA Astrophysics Data System (ADS)
Schafroth, S.; Rodríguez-Núñez, J. J.
1999-08-01
We evaluate the one-particle and double-occupied Green functions for the Hubbard model at half-filling using the moment approach of Nolting [Z. Phys. 255, 25 (1972); Grundkurs Theoretische Physik 7: Viel-Teilchen-Theorie (Verlag Zimmermann-Neufang, Ulmen, 1992)]. Our starting point is a self-energy, Σ(k,ω), which has a single pole, Ω(k), with spectral weight, α(k), and quasiparticle lifetime, γ(k) [J. J. Rodríguez-Núñez and S. Schafroth, J. Phys. Condens. Matter 10, L391 (1998); J. J. Rodríguez-Núñez, S. Schafroth, and H. Beck, Physica B (to be published); (unpublished)]. In our approach, Σ(k,ω) becomes the central feature of the many-body problem and, due to the three unknown k-dependent parameters, we have to satisfy only the first three sum rules instead of four as in the canonical formulation of Nolting [Z. Phys. 255, 25 (1972); Grundkurs Theoretische Physik 7: Viel-Teilchen-Theorie (Verlag Zimmermann-Neufang, Ulmen, 1992)]. This self-energy choice forces our system to be a non-Fermi liquid for any value of the interaction, since it does not vanish at zero frequency. The one-particle Green function, G(k,ω), shows the fingerprint of a strongly correlated system, i.e., a double-peak structure in the one-particle spectral density, A(k,ω), vs ω for intermediate values of the interaction. Close to the Mott insulator transition, A(k,ω) becomes a wide single peak, signaling the absence of quasiparticles. Similar behavior is observed for the real and imaginary parts of the self-energy, Σ(k,ω). The double-occupied Green function, G2(q,ω), has been obtained from G(k,ω) by means of the equation of motion. The relation between G2(q,ω) and the self-energy, Σ(k,ω), is formally established and numerical results for the spectral function of G2(k,ω), χ(2)(k,ω) ≡ −(1/π) lim_{δ→0⁺} Im[G2(k,ω+iδ)], are given. Our approach represents the simplest way to include (1) lifetime effects in the moment approach of Nolting, as
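A minimal numerical sketch of this single-pole ansatz (the parameters α, Ω, γ below are illustrative round numbers, not values satisfying the sum rules): the resulting spectral density develops the two-peak structure described in the abstract.

```python
import numpy as np

def spectral_function(w, eps_k, alpha, Omega, gamma, eta=1e-3):
    """A(k, w) = -(1/pi) Im G(k, w) with a one-pole self-energy
    Sigma(w) = alpha / (w - Omega + i*gamma)."""
    sigma = alpha / (w - Omega + 1j * gamma)      # single pole at Omega
    G = 1.0 / (w + 1j * eta - eps_k - sigma)
    return -G.imag / np.pi

w = np.linspace(-4.0, 4.0, 2001)
A = spectral_function(w, eps_k=0.0, alpha=1.0, Omega=0.0, gamma=0.2)
# For eps_k = Omega = 0 the two peaks sit near w = +/- sqrt(alpha)
print(w[np.argmax(A * (w > 0))])
```

Because Im Σ stays finite at ω = 0, the spectral weight never vanishes there, which is the non-Fermi-liquid feature the abstract points out.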
Li, Xin; Carravetta, Vincenzo; Li, Cui; Monti, Susanna; Rinkevicius, Zilvinas; Ågren, Hans
2016-07-12
Motivated by the growing importance of organometallic nanostructured materials and nanoparticles as microscopic devices for diagnostic and sensing applications, and by the recent considerable development in the simulation of such materials, we here choose a prototype system - para-nitroaniline (pNA) on gold nanoparticles - to demonstrate effective strategies for designing metal nanoparticles with organic conjugates from fundamental principles. We investigated the motion, adsorption mode, and physical chemistry properties of gold-pNA particles, increasing in size, through classical molecular dynamics (MD) simulations in connection with quantum chemistry (QC) calculations. We apply the quantum mechanics-capacitance molecular mechanics method [Z. Rinkevicius et al. J. Chem. Theory Comput. 2014, 10, 989] for calculations of the properties of the conjugate nanoparticles, where time dependent density functional theory is used for the QM part and a capacitance-polarizability parametrization of the MM part, where induced dipoles and charges by metallic charge transfer are considered. Dispersion and short-range repulsion forces are included as well. The scheme is applied to one- and two-photon absorption of gold-pNA clusters increasing in size toward the nanometer scale. Charge imaging of the surface introduces red-shifts both because of altered excitation energy dependence and variation of the relative intensity of the inherent states making up for the total band profile. For the smaller nanoparticles the difference in the crystal facets are important for the spectral outcome which is also influenced by the surrounding MM environment. PMID:27224666
I. M. Robertson; A. Beaudoin; J. Lambros
2004-01-05
OAK-135 Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems. One aspect of the program will involve the direct
Borodovsky, M.
2013-04-11
Algorithmic methods for gene prediction have been developed and successfully applied to many different prokaryotic genome sequences. As the set of genes in a particular genome is not homogeneous with respect to DNA sequence composition features, the GeneMark.hmm program utilizes two Markov models representing distinct classes of protein coding genes denoted "typical" and "atypical". Atypical genes are those whose DNA features deviate significantly from those classified as typical and they represent approximately 10% of any given genome. In addition to the inherent interest of more accurately predicting genes, the atypical status of these genes may also reflect their separate evolutionary ancestry from other genes in that genome. We hypothesize that atypical genes are largely comprised of those genes that have been relatively recently acquired through lateral gene transfer (LGT). If so, what fraction of atypical genes are such bona fide LGTs? We have made atypical gene predictions for all fully completed prokaryotic genomes; we have been able to compare these results to other "surrogate" methods of LGT prediction.
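The typical/atypical distinction rests on comparing the likelihood of a sequence under two Markov models of coding DNA. A toy stand-in is sketched below (first-order models, invented training sequences and pseudocounts; GeneMark.hmm itself uses higher-order, codon-aware models):

```python
import numpy as np

BASES = "ACGT"

def train_markov(seqs, pseudo=1.0):
    """Estimate first-order nucleotide transition probabilities with pseudocounts."""
    counts = np.full((4, 4), pseudo)
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[BASES.index(a), BASES.index(b)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    return sum(np.log(P[BASES.index(a), BASES.index(b)])
               for a, b in zip(seq, seq[1:]))

typical = train_markov(["ATGGCGGCGGCGTAA"])    # GC-rich "host" composition
atypical = train_markov(["ATGATATATATATAA"])   # AT-rich "foreign" composition
query = "GCGGCGGCG"
# A positive log-likelihood ratio assigns the query to the typical class
print(log_likelihood(query, typical) - log_likelihood(query, atypical))
```

Genes scoring markedly better under the atypical model are the candidates for lateral gene transfer discussed above.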
Muccioli, Luca; D'Avino, Gabriele; Berardi, Roberto; Orlandi, Silvia; Pizzirusso, Antonio; Ricci, Matteo; Roscioni, Otello Maria; Zannoni, Claudio
2014-01-01
The molecular organization of functional organic materials is one of the research areas where the combination of theoretical modeling and experimental determinations is most fruitful. Here we present a brief summary of the simulation approaches used to investigate the inner structure of organic materials with semiconducting behavior, paying special attention to applications in organic photovoltaics and clarifying the often obscure jargon hindering the access of newcomers to the literature of the field. Special attention is paid to the choice of the computational "engine" (Monte Carlo or Molecular Dynamics) used to generate equilibrium configurations of the molecular system under investigation and, more importantly, to the choice of the chemical details in describing the molecular interactions. Recent literature dealing with the simulation of organic semiconductors is critically reviewed in order of increasing complexity of the system studied, from low molecular weight molecules to semiflexible polymers, including the challenging problem of determining the morphology of heterojunctions between two different materials. PMID:24322782
Modelling approaches in biomechanics.
Alexander, R McN
2003-01-01
Conceptual, physical and mathematical models have all proved useful in biomechanics. Conceptual models, which have been used only occasionally, clarify a point without having to be constructed physically or analysed mathematically. Some physical models are designed to demonstrate a proposed mechanism, for example the folding mechanisms of insect wings. Others have been used to check the conclusions of mathematical modelling. However, others facilitate observations that would be difficult to make on real organisms, for example on the flow of air around the wings of small insects. Mathematical models have been used more often than physical ones. Some of them are predictive, designed for example to calculate the effects of anatomical changes on jumping performance, or the pattern of flow in a 3D assembly of semicircular canals. Others seek an optimum, for example the best possible technique for a high jump. A few have been used in inverse optimization studies, which search for variables that are optimized by observed patterns of behaviour. Mathematical models range from the extreme simplicity of some models of walking and running, to the complexity of models that represent numerous body segments and muscles, or elaborate bone shapes. The simpler the model, the clearer it is which of its features is essential to the calculated effect. PMID:14561333
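One example of the "extreme simplicity" of some walking models is the inverted-pendulum limit on walking speed, a standard result in this literature (sketched here with an illustrative leg length, not numbers taken from the paper):

```python
import math

def max_walking_speed(leg_length, g=9.81):
    """Inverted-pendulum walking: the stance foot stays loaded only while
    the centripetal demand m*v**2/L does not exceed gravity m*g,
    i.e. while the Froude number v**2 / (g*L) stays below 1."""
    return math.sqrt(g * leg_length)

# For a 0.9 m leg the model caps walking near 3 m/s, roughly where
# humans spontaneously break into a run.
print(round(max_walking_speed(0.9), 2))
```

The model's single assumption (a rigid stance leg pivoting over the foot) makes it obvious which feature produces the predicted walk-run transition.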
Lianou, Alexandra; Koutsoumanis, Konstantinos P
2011-10-01
Strain variability of the growth behavior of foodborne pathogens has been acknowledged as an important issue in food safety management. A stochastic model providing predictions of the maximum specific growth rate (μ_max) of Salmonella enterica as a function of pH and water activity (a_w), and integrating intra-species variability data, was developed. For this purpose, growth kinetic data of 60 S. enterica isolates, generated during monitoring of growth in tryptone soy broth of different pH (4.0-7.0) and a_w (0.964-0.992) values, were used. The effects of the environmental parameters on μ_max were modeled for each tested S. enterica strain using cardinal-type and gamma-concept models for pH and a_w, respectively. A multiplicative without-interaction-type model, combining the models for pH and a_w, was used to describe the combined effect of these two environmental parameters on μ_max. The strain variability of the growth behavior of S. enterica was incorporated in the modeling procedure by using the cumulative probability distributions of the values of pH_min, pH_opt and a_w,min as inputs to the growth model. The cumulative probability distribution of the observed μ_max values corresponding to growth at pH 7.0 and a_w 0.992 was introduced in the place of the model's parameter μ_opt. The introduction of the above distributions into the growth model resulted, using Monte Carlo simulation, in a stochastic model whose predictions are distributions of μ_max values characterizing the strain variability. The developed model was further validated using independent growth kinetic data (μ_max values) generated for the 60 strains of the pathogen at pH 5.0 and a_w 0.977, and exhibited a satisfactory performance. The mean, standard deviation, and the 5th and 95th percentiles of the predicted μ_max distribution were 0.83, 0.08, and 0.69 and 0.96 h^-1, respectively, while the corresponding values of the observed distribution were 0.73, 0.09, and 0.61 and 0.85 h
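The structure of such a stochastic model can be sketched as a cardinal-type pH term multiplied by a gamma-concept a_w term, with Monte Carlo sampling of the cardinal parameters to represent strain variability. The distributions below are invented for illustration and do not reproduce the fitted values of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def cardinal_pH(pH, pH_min, pH_opt, pH_max=9.0):
    """Rosso-type cardinal model: equals 1 at pH_opt, 0 at the cardinal limits."""
    num = (pH - pH_min) * (pH - pH_max)
    return max(num / (num - (pH - pH_opt) ** 2), 0.0)

def gamma_aw(aw, aw_min):
    """Gamma-concept term for water activity."""
    return max((aw - aw_min) / (1.0 - aw_min), 0.0)

# Strain variability: sample cardinal parameters (illustrative distributions)
n = 10_000
pH_min = rng.normal(3.9, 0.15, n)
aw_min = rng.normal(0.95, 0.003, n)
mu_opt = rng.normal(1.0, 0.1, n)

# Predicted mu_max distribution at pH 5.0 and a_w 0.977
mu_max = [m * cardinal_pH(5.0, p, 7.0) * gamma_aw(0.977, a)
          for m, p, a in zip(mu_opt, pH_min, aw_min)]
print(np.mean(mu_max), np.percentile(mu_max, [5, 95]))
```

Each Monte Carlo draw plays the role of one "strain"; the spread of the resulting μ_max distribution is the variability the model propagates.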
Functional Risk Modeling for Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed
2010-01-01
We introduce an approach to risk modeling that we call functional modeling, which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems' constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability, the effects of diverse backup, which often exists when two or more independent elements are connected together, are properly accounted for.
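Under an independence assumption, the functional-diversity credit described above reduces to the standard parallel-redundancy formula: a function is lost only if every element providing it is down. A minimal sketch with invented availability figures:

```python
from math import prod

def function_availability(element_availabilities):
    """Availability of a function provided redundantly by several elements;
    the function fails only when all providers fail (independence assumed)."""
    return 1.0 - prod(1.0 - a for a in element_availabilities)

# A mobility function backed by both a rover (0.95) and a habitat system (0.90)
print(function_availability([0.95, 0.90]))   # 0.995, versus 0.95 for the rover alone
```

Tracking availability per function rather than per system is what lets the model take this credit whenever two connected elements happen to provide the same function.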
Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan
2015-01-01
Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ in respect to how close these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) as well as the constant processes involved in the task (β = .45) were related to Gf. Taken
Approaches to the Hubbard Model
NASA Astrophysics Data System (ADS)
Maguire, Cary Mcilwaine, Jr.
This thesis analyzes several theoretical approaches to the one-band Hubbard model in hopes of extracting selected physical quantities in limits most closely corresponding to real materials. Along the way, three rather remarkable theorems of a much broader scope are proven. It is hoped that these may be of general interest in a variety of related physical and mathematical disciplines. In chapter one, the well-known mean field theory developed by Affleck and Marston is studied in the presence of a magnetic field. Through a rather straightforward numerical procedure, phase diagrams in t/δ space are generated as a function of field. The results of this study are then extended to a magnetic susceptibility calculation and to the analysis of the phase diagram of an alternate mean field theory, the "generalized flux phases" proposed by Anderson. Several interesting properties and symmetries of the solutions are then briefly discussed. In chapter two, the Gutzwiller projector is analyzed both analytically and numerically, with the results being used to calculate the momentum density function for a trial wavefunction also proposed by Anderson. Two of the above-mentioned theorems are developed in this chapter, one prescribing the expansion of a general restricted sum in terms of its related unrestricted sums, and the other presenting the exact diagonalization of a component of the projector which is equivalent, through a U(1) gauge transformation, to the total spin operator. In chapter three, we discuss the exact solutions to the one-dimensional Hubbard model first derived by Lieb and Wu. From their large-U limiting behavior, we extract the phonon scattering matrix elements and first-order single-particle energies for some finite systems. The third potentially general theorem, which relates charge determinants with an arbitrary number of "gaps" between their rows to a comparatively simple function of the corresponding Vandermonde determinants, is proven here.
A functional approach to the TMJ disorders.
Deodato, F; Cristiano, S; Trusendi, R; Giorgetti, R
2003-01-01
This manuscript describes our conservative approach to the treatment of TMJ disorders. The method we use was suggested by Rocabado; its aims are joint distraction by the elimination of compression, restoration of physiologic articular rest, mobilization of the soft tissues and, whenever possible, improvement of the condyle-disk-glenoid fossa relationship. To support these claims, two clinical cases are presented in which the non-invasive therapy was successful. The results obtained confirm the validity of this functional approach.
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
I. Robertson; A. Beaudoin; J. Lambros
2005-01-31
Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems. One aspect of the program will involve the direct observation
Detection of Differential Item Functioning Using the Lasso Approach
ERIC Educational Resources Information Center
Magis, David; Tuerlinckx, Francis; De Boeck, Paul
2015-01-01
This article proposes a novel approach to detect differential item functioning (DIF) among dichotomously scored items. Unlike standard DIF methods that perform an item-by-item analysis, we propose the "LR lasso DIF method": a logistic regression (LR) model is formulated for all item responses. The model contains item-specific intercepts,…
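A hedged sketch of the general idea on simulated data: fit one logistic regression over all item responses with L1-penalized group-by-item interaction terms, and flag DIF where an interaction survives the penalty. The exact specification of the LR lasso DIF method may differ; the simulation design, penalty strength, and use of the true ability as a covariate are all simplifications invented here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_items = 1000, 4
theta = rng.normal(size=n)              # latent ability
group = rng.integers(0, 2, size=n)      # 0 = reference, 1 = focal
dif = np.array([1.5, 0.0, 0.0, 0.0])    # only item 0 functions differentially

# Simulate dichotomous responses from a Rasch-type model plus a DIF term
X, y = [], []
for j in range(n_items):
    p = 1.0 / (1.0 + np.exp(-(theta + dif[j] * group)))
    resp = rng.random(n) < p
    for i in range(n):
        dummies = np.eye(n_items)[j]
        X.append(np.concatenate(([theta[i]], dummies, dummies * group[i])))
        y.append(resp[i])

# The lasso (L1) penalty shrinks the non-DIF interactions toward zero
model = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(X, y)
print(np.round(model.coef_[0][1 + n_items:], 2))   # interaction terms
```

Fitting all items in one model is what distinguishes this from item-by-item DIF testing: the penalty decides jointly which interactions are needed.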
Functional testing: Approaches and injury management integration.
Johnson, Laurie J.; Miller, Margot
2001-01-01
Functional Capacity Evaluations (FCEs) have become the standard for identifying an individual's physical abilities and/or limitations following injury or illness. While philosophies and approaches differ, the focus of most FCE systems is to identify the individual's maximum capabilities. This article will discuss the usefulness of the FCE information and how FCEs are impacted by multiple factors including APTA guidelines, machine based testing, therapist expertise, medical legal credibility and court testimony, IMEs and FCE, and return to work/return to function.
Schoville, Benjamin J.; Brown, Kyle S.; Harris, Jacob A.; Wilkins, Jayne
2016-01-01
The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, among the most visible aspects of MSA technologies are unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages—Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape consistent with armature tips at KP1, and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed. PMID:27736886
Exact response functions within the time-dependent Gutzwiller approach
NASA Astrophysics Data System (ADS)
Bünemann, J.; Wasner, S.; Oelsen, E. v.; Seibold, G.
2015-02-01
We investigate the applicability of the two existing versions of a time-dependent Gutzwiller approach (TDGA) beyond the frequently used limit of infinite spatial dimensions. To this end, we study the two-particle response functions of a two-site Hubbard model where we can compare the exact results and those derived from the TDGA. It turns out that only the more recently introduced version of the TDGA can be combined with a diagrammatic approach which allows for the evaluation of Gutzwiller wave functions in finite dimensions. For this TDGA method, we derive the time-dependent Lagrangian for general single-band Hubbard models.
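The two-site Hubbard model used as the benchmark above can be diagonalized exactly. A sketch in the half-filled Sz = 0 sector (basis {|↑↓,0⟩, |0,↑↓⟩, |↑,↓⟩, |↓,↑⟩}; sign conventions chosen so that the standard spectrum is reproduced):

```python
import numpy as np

def two_site_hubbard(t, U):
    """Half-filled two-site Hubbard Hamiltonian in the Sz = 0 sector."""
    return np.array([[U,   0, -t, -t],
                     [0,   U, -t, -t],
                     [-t, -t,  0,  0],
                     [-t, -t,  0,  0]], dtype=float)

t, U = 1.0, 4.0
E = np.linalg.eigvalsh(two_site_hubbard(t, U))
E0_exact = 0.5 * (U - np.sqrt(U**2 + 16.0 * t**2))   # known ground-state energy
print(E[0], E0_exact)   # both ~ -0.828
```

Having the full exact spectrum (here {(U − √(U²+16t²))/2, 0, U, (U + √(U²+16t²))/2}) is what makes this model a clean test bed for approximate response functions such as those of the TDGA.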
Modelling of graphene functionalization.
Pykal, Martin; Jurečka, Petr; Karlický, František; Otyepka, Michal
2016-03-01
Graphene has attracted great interest because of its remarkable properties and numerous potential applications. A comprehensive understanding of its structural and dynamic properties and those of its derivatives will be required to enable the design and optimization of sophisticated new nanodevices. While it is challenging to perform experimental studies on nanoscale systems at the atomistic level, this is the 'native' scale of computational chemistry. Consequently, computational methods are increasingly being used to complement experimental research in many areas of chemistry and nanotechnology. However, it is difficult for non-experts to get to grips with the plethora of computational tools that are available and their areas of application. This perspective briefly describes the available theoretical methods and models for simulating graphene functionalization based on quantum and classical mechanics. The benefits and drawbacks of the individual methods are discussed, and we provide numerous examples showing how computational methods have provided new insights into the physical and chemical features of complex systems including graphene and graphene derivatives. We believe that this overview will help non-expert readers to understand this field and its great potential. PMID:26323438
Modeling of functional brain imaging data
NASA Astrophysics Data System (ADS)
Horwitz, Barry
1999-03-01
The richness and complexity of data sets obtained from functional neuroimaging studies of human cognitive behavior, using techniques such as positron emission tomography and functional magnetic resonance imaging, have until recently not been exploited by computational neural modeling methods. In this article, following a brief introduction to functional neuroimaging methodology, two neural modeling approaches for use with functional brain imaging data are described. One, which uses structural equation modeling, examines the effective functional connections between various brain regions during specific cognitive tasks. The second employs large-scale neural modeling to relate functional neuroimaging signals in multiple, interconnected brain regions to the underlying neurobiological time-varying activities in each region. These two modeling procedures are illustrated using a visual processing paradigm.
ERIC Educational Resources Information Center
Metzger, Jesse A.
2010-01-01
The aims of this research were to 1) examine the qualities for which applicants are selected for entrance into clinical psychology Ph.D. programs, and 2) investigate the prevalence and impact of the mentor-model approach to admissions on multiple domains of programs and the field at large. Fifty Directors of Clinical Training (DCTs) provided data…
Modeling Protein Domain Function
ERIC Educational Resources Information Center
Baker, William P.; Jones, Carleton "Buck"; Hull, Elizabeth
2007-01-01
This simple but effective laboratory exercise helps students understand the concept of protein domain function. They use foam beads, Styrofoam craft balls, and pipe cleaners to explore how domains within protein active sites interact to form a functional protein. The activity allows students to gain content mastery and an understanding of the…
Ricken, T; Werner, D; Holzhütter, H G; König, M; Dahmen, U; Dirsch, O
2015-06-01
This study focuses on a two-scale, continuum multicomponent model for the description of blood perfusion and cell metabolism in the liver. The model accounts for a spatially and temporally resolved hydro-diffusion-advection-reaction description. We consider a solid phase (tissue) containing glycogen and a fluid phase (blood) containing glucose as well as lactate. The five-component model is enhanced by a two-scale approach including a macroscale (sinusoidal level) and a microscale (cell level). The perfusion on the macroscale within the lobules is described by a homogenized multiphasic approach based on the theory of porous media (mixture theory combined with the concept of volume fractions). On the macro level, we recall the basic mixture model, the governing equations as well as the constitutive framework including the solid (tissue) stress, blood pressure and the solutes' chemical potentials. In view of the transport phenomena, we discuss the blood flow including transversely isotropic permeability, as well as the transport of solute concentrations including diffusion and advection. The continuum multicomponent model on the macroscale finally leads to a coupled system of partial differential equations (PDEs). In contrast, the hepatic metabolism on the microscale (cell level) was modeled via a coupled system of ordinary differential equations (ODEs). Again, we recall the constitutive relations at the cell metabolism level. A finite element implementation of this framework is used to provide an illustrative example, describing the spatial and time-dependent perfusion-metabolism processes in liver lobules that integrates perfusion and metabolism of the liver.
Functional genomics approaches in parasitic helminths.
Hagen, J; Lee, E F; Fairlie, W D; Kalinna, B H
2012-01-01
As research on parasitic helminths moves into the post-genomic era, an enormous effort is directed towards deciphering gene function and achieving gene annotation. The sequences that are available in public databases undoubtedly hold information that can be utilized for new interventions and control, but the exploitation of these resources has until recently remained difficult. Only now, with the emergence of methods to genetically manipulate and transform parasitic worms, will it be possible to gain a comprehensive understanding of the molecular mechanisms involved in nutrition, metabolism, developmental switches/maturation and interaction with the host immune system. This review focuses on the functional genomics approaches currently used in parasitic helminths, and highlights potential applications of these technologies in the areas of cell biology, systems biology and immunobiology of parasitic helminths.
Pharmacological approaches to restore mitochondrial function
Andreux, Pénélope A.; Houtkooper, Riekelt H.; Auwerx, Johan
2014-01-01
Mitochondrial dysfunction is not only a hallmark of rare inherited mitochondrial disorders, but is also implicated in age-related diseases, including those that affect the metabolic and nervous system, such as type 2 diabetes and Parkinson’s disease. Numerous pathways maintain and/or restore proper mitochondrial function, including mitochondrial biogenesis, mitochondrial dynamics, mitophagy, and the mitochondrial unfolded protein response. New and powerful phenotypic assays in cell-based models, as well as multicellular organisms, have been developed to explore these different aspects of mitochondrial function. Modulating mitochondrial function has therefore emerged as an attractive therapeutic strategy for a range of diseases, which has spurred active drug discovery efforts in this area. PMID:23666487
Nonlinear aerodynamic modeling using multivariate orthogonal functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
A technique was developed for global modeling of nonlinear aerodynamic coefficients using multivariate orthogonal functions based on the data. Each orthogonal function retained in the model was decomposed into an expansion of ordinary polynomials in the independent variables, so that the final model could be interpreted as selectively retained terms from a multivariable power series expansion. A predicted squared-error metric was used to determine the orthogonal functions to be retained in the model; analytical derivatives were easily computed. The approach was demonstrated on tabular wind tunnel data for the Z-body axis aerodynamic force coefficient (Cz) of an F-18 research vehicle, covering the entire subsonic flight envelope. For a realistic case, the analytical model predicted experimental values of Cz very well. The modeling technique is shown to be capable of generating a compact, global analytical representation of nonlinear aerodynamics. The polynomial model has good predictive capability, global validity, and analytical differentiability.
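As a rough illustrative sketch (not the author's code), the orthogonal-function selection described in this abstract can be prototyped in a few lines: candidate regressors are orthogonalized against the data, and terms are greedily retained while a simplified predicted-squared-error (PSE) metric keeps decreasing. The function names, the classical Gram-Schmidt construction, and the heuristic residual-variance bound `sigma2` are all assumptions made for this example.

```python
import numpy as np

def orthogonalize(X):
    """Classical Gram-Schmidt orthogonalization of candidate regressor columns."""
    Q = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        v = X[:, j].astype(float)
        for i in range(j):
            # subtract the projection of the original column onto earlier q_i
            v = v - (Q[:, i] @ X[:, j]) / (Q[:, i] @ Q[:, i]) * Q[:, i]
        Q[:, j] = v
    return Q

def select_terms(Q, y, max_terms=None):
    """Greedily retain orthogonal functions while a simplified predicted
    squared error (fit error plus a complexity penalty) keeps decreasing."""
    N = len(y)
    sigma2 = np.var(y) / 2.0          # heuristic upper bound on residual variance
    retained, resid = [], y.astype(float).copy()
    best_pse = np.inf
    for _ in range(max_terms or Q.shape[1]):
        cols = [j for j in range(Q.shape[1]) if j not in retained]
        if not cols:
            break
        # orthogonality means each column's fit improvement is independent
        gains = [(resid @ Q[:, j]) ** 2 / (Q[:, j] @ Q[:, j]) for j in cols]
        j_best = cols[int(np.argmax(gains))]
        coef = (resid @ Q[:, j_best]) / (Q[:, j_best] @ Q[:, j_best])
        new_resid = resid - coef * Q[:, j_best]
        pse = new_resid @ new_resid / N + sigma2 * (len(retained) + 1) / N
        if pse >= best_pse:
            break
        best_pse, resid = pse, new_resid
        retained.append(j_best)
    return retained
```

With noiseless data y = 1 + 2x and candidates {1, x, x²}, the selection keeps the constant and linear terms and rejects the quadratic one.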
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
Sadeghi, Mohsen; Emadi Andani, Mehran; Bahrami, Fariba; Parnianpour, Mohamad
2013-08-01
The purpose of this work is to develop a computational model to describe the task of sit to stand (STS). STS is an important movement skill which is frequently performed in human daily activities, but has rarely been studied from the perspective of optimization principles. In this study, we compared the recorded trajectories of STS with the trajectories generated by several conventional optimization-based models (i.e., minimum torque, minimum torque change and kinetic energy cost models) and also with the trajectories produced by a novel multi-phase cost model (MPCM). In the MPCM, we suggested that any complex task, such as STS, is decomposable into successive motion phases, so that each phase requires a distinct strategy to be performed. In this way, we proposed a multi-phase cost function to describe the STS task. The results revealed that the conventional optimization-based models failed to correctly predict the invariable features of STS, such as hip flexion and ankle dorsiflexion movements. However, the MPCM not only predicted the general features of STS with a sufficient accuracy, but also showed a potential flexibility to distinguish between the movement strategies from one subject to the other. According to the results, it seems plausible to hypothesize that the central nervous system might apply different strategies when planning different phases of a complex task. The application areas of the proposed model could be generating optimized trajectories of STS for clinical applications (such as functional electrical stimulation) or providing clinical and engineering insights to develop more efficient rehabilitation devices and protocols. PMID:23807475
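The single- versus multi-phase cost idea can be illustrated with a toy sketch. This is not the authors' MPCM: the closed-form minimum-jerk profile (Flash and Hogan) stands in for a conventional single-phase optimal trajectory, and `jerk_cost`, `effort_cost` and the phase boundary are hypothetical choices made purely for illustration.

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Closed-form minimum-jerk trajectory, a classic single-phase
    optimization-based movement model used here as a baseline."""
    s = np.asarray(t) / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def jerk_cost(traj, dt):
    """Integrated squared jerk (third difference) of a sampled trajectory."""
    jerk = np.diff(traj, n=3) / dt**3
    return float(np.sum(jerk**2) * dt)

def effort_cost(traj, dt):
    """Integrated squared velocity, a crude stand-in for an effort-like cost."""
    vel = np.diff(traj) / dt
    return float(np.sum(vel**2) * dt)

def multi_phase_cost(traj, dt, boundaries, costs):
    """Multi-phase cost in the spirit of the MPCM: the movement is split at
    phase boundaries and each segment is scored by its own cost function."""
    total, start = 0.0, 0
    for end, cost in zip(list(boundaries) + [len(traj)], costs):
        total += cost(traj[start:end], dt)
        start = end
    return total
```

A two-phase movement could then be scored with, say, a jerk-based cost for the momentum-transfer phase and an effort-based cost for the extension phase, which is the kind of phase-specific strategy the abstract hypothesizes.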
HEDR modeling approach: Revision 1
Shipler, D.B.; Napier, B.A.
1994-05-01
This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.
Evolutionary modeling-based approach for model errors correction
NASA Astrophysics Data System (ADS)
Wan, S. Q.; He, W. P.; Wang, L.; Jiang, W.; Zhang, W.
2012-08-01
The inverse problem of using the information of historical data to estimate model errors is a frontier research topic in the sciences. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach to estimating model errors based on EM is thereby proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it realizes, to a certain extent, a combination of statistics and dynamics.
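The twin-experiment setup in this abstract (a periodically forced Lorenz system as "truth", the unforced system as the imperfect prediction model) can be sketched as follows. The forcing amplitude, step size and RK4 integrator are illustrative assumptions; the evolutionary-modeling step that identifies the error term is not reproduced here.

```python
import numpy as np

def lorenz_rhs(state, t, forcing=0.0):
    """Classic Lorenz (1963) system; `forcing` adds a periodic term to the
    first equation, mimicking an unknown structural model error."""
    x, y, z = state
    return np.array([10.0 * (y - x) + forcing * np.sin(t),
                     x * (28.0 - z) - y,
                     x * y - 8.0 / 3.0 * z])

def integrate(state0, dt, nsteps, forcing=0.0):
    """Fourth-order Runge-Kutta integration of the (possibly forced) system."""
    traj = [np.asarray(state0, dtype=float)]
    t = 0.0
    for _ in range(nsteps):
        s = traj[-1]
        k1 = lorenz_rhs(s, t, forcing)
        k2 = lorenz_rhs(s + 0.5 * dt * k1, t + 0.5 * dt, forcing)
        k3 = lorenz_rhs(s + 0.5 * dt * k2, t + 0.5 * dt, forcing)
        k4 = lorenz_rhs(s + dt * k3, t + dt, forcing)
        traj.append(s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
        t += dt
    return np.array(traj)

# "Observations" come from the forced system; the prediction model omits the
# forcing, so the growing discrepancy between the two samples the model error.
obs = integrate([1.0, 1.0, 1.0], 0.01, 100, forcing=2.0)
model = integrate([1.0, 1.0, 1.0], 0.01, 100, forcing=0.0)
one_step_error = obs[1] - model[1]
```

An EM-style procedure would then search for a functional form of the error term that best explains the sequence of such discrepancies.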
Unified approach to partition functions of RNA secondary structures.
Bundschuh, Ralf
2014-11-01
RNA secondary structure formation is a field of considerable biological interest as well as a model system for understanding generic properties of heteropolymer folding. This system is particularly attractive because the partition function and thus all thermodynamic properties of RNA secondary structure ensembles can be calculated numerically in polynomial time for arbitrary sequences and homopolymer models admit analytical solutions. Such solutions for many different aspects of the combinatorics of RNA secondary structure formation share the property that the final solution depends on differences of statistical weights rather than on the weights alone. Here, we present a unified approach to a large class of problems in the field of RNA secondary structure formation. We prove a generic theorem for the calculation of RNA folding partition functions. Then, we show that this approach can be applied to the study of the molten-native transition, denaturation of RNA molecules, as well as to studies of the glass phase of random RNA sequences. PMID:24177391
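For the homopolymer case mentioned above, the partition function over all secondary structures can indeed be computed in polynomial time with a standard dynamic-programming recursion. The sketch below is a minimal illustration (uniform Boltzmann pair weight q, minimum hairpin size m), not the formalism of the paper.

```python
from functools import lru_cache

def partition_function(n, q=1.0, m=3):
    """Homopolymer partition function over all secondary structures on a
    chain of length n: each base pair carries weight q, and a pair (i, k)
    must enclose at least m unpaired bases (hairpin constraint).

    Recursion (base i is either unpaired or paired with some k > i + m):
      Z(i, j) = Z(i+1, j) + sum_k q * Z(i+1, k-1) * Z(k+1, j)
    """
    @lru_cache(maxsize=None)
    def Z(i, j):
        if j - i < m + 1:          # segment too short to hold any pair
            return 1.0
        total = Z(i + 1, j)        # base i unpaired
        for k in range(i + m + 1, j + 1):
            total += q * Z(i + 1, k - 1) * Z(k + 1, j)
        return total
    return Z(0, n - 1)
```

With m = 3, a 5-base chain admits exactly one pair (Z = 2) and a 6-base chain three pairs (Z = 4), matching direct enumeration; setting q = 0 recovers the single unpaired structure.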
Decomposition approach to model smart suspension struts
NASA Astrophysics Data System (ADS)
Song, Xubin
2008-10-01
Modeling and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology to build models for application of smart struts for vehicle suspension control development. The modeling approach is based on decomposition of the testing data. Per the strut functions, the data are dissected according to both control and physical variables. Then the data sets are characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, the model optimization can be facilitated in comparison to a traditional approach that seeks a globally optimal set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture macro-behaviors of these struts.
Distribution function approach to redshift space distortions
Seljak, Uroš; McDonald, Patrick (E-mail: pvmcdonald@lbl.gov)
2011-11-01
We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and the scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter.
A proteomic approach to neuropeptide function elucidation.
Temmerman, L; Bogaerts, A; Meelkop, E; Cardoen, D; Boerjan, B; Janssen, T; Schoofs, L
2012-03-01
Many of the diverse functions of neuropeptides are still elusive. As they are ideally suited to modulate traditional signaling, their added actions are not always detectable under standard laboratory conditions. The search for function assignment to peptide encoding genes can therefore greatly benefit from molecular information. Specific molecular changes resulting from neuropeptide signaling may direct researchers to yet unknown processes or conditions, for which studying these signaling systems may eventually lead to phenotypic confirmation. Here, we applied gel-based proteomics after pdf-1 neuropeptide gene knockout in the model organism Caenorhabditis elegans. It has previously been described that pdf-1 null mutants display a locomotion defect, being slower and making more turns and reversals than wild-type worms. The vertebrate functional homolog of PDF-1, vasoactive intestinal peptide (VIP), is known to influence a plethora of processes, which have so far not been investigated for pdf-1. Because proteins represent the actual effectors inside an organism, proteomic analysis can guide our view to novel pdf-1 actions in the nematode worm. Our data show that knocking out pdf-1 results in alteration of levels of proteins involved in fat metabolism, stress resistance and development. This indicates a possible conservation of VIP-like actions for pdf-1 in C. elegans.
Modeling Approaches in Planetary Seismology
NASA Technical Reports Server (NTRS)
Weber, Renee; Knapmeyer, Martin; Panning, Mark; Schmerr, Nick
2014-01-01
Of the many geophysical means that can be used to probe a planet's interior, seismology remains the most direct. Given that the seismic data gathered on the Moon over 40 years ago revolutionized our understanding of the Moon and are still being used today to produce new insight into the state of the lunar interior, it is no wonder that many future missions, both real and conceptual, plan to take seismometers to other planets. To best facilitate the return of high-quality data from these instruments, as well as to further our understanding of the dynamic processes that modify a planet's interior, various modeling approaches are used to quantify parameters such as the amount and distribution of seismicity, tidal deformation, and seismic structure on and of the terrestrial planets. In addition, recent advances in wavefield modeling have permitted a renewed look at seismic energy transmission and the effects of attenuation and scattering, as well as the presence and effect of a core, on recorded seismograms. In this chapter, we will review these approaches.
Neural modeling and functional neuroimaging.
Horwitz, B; Sporns, O
1994-01-01
Two research areas that so far have had little interaction with one another are functional neuroimaging and computational neuroscience. The application of computational models and techniques to the inherently rich data sets generated by "standard" neurophysiological methods has proven useful for interpreting these data sets and for providing predictions and hypotheses for further experiments. We suggest that both theory- and data-driven computational modeling of neuronal systems can help to interpret data generated by functional neuroimaging methods, especially those used with human subjects. In this article, we point out four sets of questions, addressable by computational neuroscientists, whose answers would be of value and interest to those who perform functional neuroimaging. The first set consists of determining the neurobiological substrate of the signals measured by functional neuroimaging. The second set concerns developing systems-level models of functional neuroimaging data. The third set of questions involves integrating functional neuroimaging data across modalities, with a particular emphasis on relating electromagnetic with hemodynamic data. The last set asks how one can relate systems-level models to those at the neuronal and neural ensemble levels. We feel that there are ample reasons to link functional neuroimaging and neural modeling, and that combining the results from the two disciplines will further our understanding of the central nervous system. © 1994 Wiley-Liss, Inc. This article is a US Government work and, as such, is in the public domain in the United States of America.
Experimental model updating using frequency response functions
NASA Astrophysics Data System (ADS)
Hong, Yu; Liu, Xi; Dong, Xinjun; Wang, Yang; Pu, Qianhui
2016-04-01
In order to obtain a finite element (FE) model that can more accurately describe structural behaviors, experimental data measured from the actual structure can be used to update the FE model. The process is known as FE model updating. In this paper, a frequency response function (FRF)-based model updating approach is presented. The approach attempts to minimize the difference between analytical and experimental FRFs, while the experimental FRFs are calculated using simultaneously measured dynamic excitation and corresponding structural responses. In this study, the FRF-based model updating method is validated through laboratory experiments on a four-story shear-frame structure. To obtain the experimental FRFs, shake table tests and impact hammer tests are performed. The FRF-based model updating method is shown to successfully update the stiffness, mass and damping parameters of the four-story structure, so that the analytical and experimental FRFs match well with each other.
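A common way to obtain experimental FRFs from simultaneously measured excitation and response, as described above, is the Welch-averaged H1 estimator (cross-spectrum over excitation auto-spectrum). The sketch below is a generic illustration, not the paper's implementation; the window length and averaging scheme are arbitrary choices.

```python
import numpy as np

def frf_h1(excitation, response, fs, nfft=256):
    """H1 frequency response function estimate: the excitation-response
    cross-spectrum divided by the excitation auto-spectrum, averaged over
    Hann-windowed segments with 50% overlap."""
    step = nfft // 2
    win = np.hanning(nfft)
    Sxy = np.zeros(nfft // 2 + 1, dtype=complex)
    Sxx = np.zeros(nfft // 2 + 1)
    for start in range(0, len(excitation) - nfft + 1, step):
        X = np.fft.rfft(win * excitation[start:start + nfft])
        Y = np.fft.rfft(win * response[start:start + nfft])
        Sxy += np.conj(X) * Y      # accumulate cross-spectrum
        Sxx += np.abs(X) ** 2      # accumulate auto-spectrum
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, Sxy / Sxx
```

In a model updating loop, the analytical FRFs of the FE model would be compared against such estimates and the stiffness, mass and damping parameters adjusted (e.g. by nonlinear least squares) until the two match.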
Leveraging modeling approaches: reaction networks and rules.
Blinov, Michael L; Moraru, Ion I
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.
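The combinatorial argument made here can be made concrete: a protein with n independent modification sites has 2^n distinct species under explicit (traditional) specification, but only 2n rules under a rule-based one. The site-pattern strings below loosely mimic BioNetGen-style notation and are purely illustrative, not any tool's actual syntax.

```python
from itertools import product

def enumerate_species(n_sites):
    """Explicit (traditional) specification: every phosphoform of a protein
    with n independent sites is a distinct species ('0' = free, 'p' = modified)."""
    return [''.join(s) for s in product('0p', repeat=n_sites)]

def rule_based_spec(n_sites):
    """Rule-based specification: one modification and one demodification rule
    per site, each written as a site pattern that matches any state of the
    remaining sites."""
    rules = []
    for i in range(n_sites):
        rules.append(f"site{i}~0 -> site{i}~p")
        rules.append(f"site{i}~p -> site{i}~0")
    return rules

# 10 sites: 1024 explicit species versus only 20 rules
species = enumerate_species(10)
rules = rule_based_spec(10)
```

The exponential gap between the two lists is exactly the combinatorial complexity that rule-based tools are designed to sidestep.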
The NJL Model for Quark Fragmentation Functions
Ito, T.; Bentz, W.; Cloet, I.; Thomas, A. W.; Yazaki, K.
2009-10-01
A description of fragmentation functions which satisfy the momentum and isospin sum rules is presented in an effective quark theory. Concentrating on the pion fragmentation function, we first explain the reason why the elementary (lowest order) fragmentation process q → qπ is completely inadequate to describe the empirical data, although the “crossed” process π → qq describes the quark distribution functions in the pion reasonably well. Then, taking into account cascade-like processes in a modified jet-model approach, we show that the momentum and isospin sum rules can be satisfied naturally without introducing any ad-hoc parameters. We present numerical results for the Nambu-Jona-Lasinio model in the invariant mass regularization scheme, and compare the results with the empirical parametrizations. We argue that this NJL-jet model provides a very useful framework to calculate the fragmentation functions in an effective chiral quark theory.
Soil eukaryotic functional diversity, a metatranscriptomic approach.
Bailly, Julie; Fraissinet-Tachet, Laurence; Verner, Marie-Christine; Debaud, Jean-Claude; Lemaire, Marc; Wésolowski-Louvel, Micheline; Marmeisse, Roland
2007-11-01
To appreciate the functional diversity of communities of soil eukaryotic micro-organisms we evaluated an experimental approach based on the construction and screening of a cDNA library using polyadenylated mRNA extracted from a forest soil. Such a library contains genes that are expressed by each of the different organisms forming the community and represents its metatranscriptome. The diversity of the organisms that contributed to this library was evaluated by sequencing a portion of the 18S rDNA gene amplified from either soil DNA or reverse-transcribed RNA. More than 70% of the sequences were from fungi and unicellular eukaryotes (protists), while the other most represented group was the metazoa. Calculation of richness estimators suggested that more than 180 species could be present in the soil samples studied. Sequencing of 119 cDNAs identified genes with no homologues in databases (32%) and genes coding for proteins involved in different biochemical and cellular processes. Surprisingly, the taxonomic distributions of the cDNAs and of the 18S rDNA genes did not coincide, with a marked under-representation of the protists among the cDNAs. Specific genes from such an environmental cDNA library could be isolated by expression in a heterologous microbial host, Saccharomyces cerevisiae. This is illustrated by the functional complementation of a histidine auxotrophic yeast mutant by two cDNAs possibly originating from an ascomycete and a basidiomycete fungal species. Study of the metatranscriptome has the potential to uncover adaptations of whole microbial communities to local environmental conditions. It also gives access to an abundant source of genes of biotechnological interest.
The Linearized Kinetic Equation -- A Functional Analytic Approach
NASA Astrophysics Data System (ADS)
Brinkmann, Ralf Peter
2009-10-01
Kinetic models of plasma phenomena are difficult to address for two reasons. They i) are given as systems of nonlinear coupled integro-differential equations, and ii) involve generally six-dimensional distribution functions f(r,v,t). In situations which can be addressed in a linear regime, the first difficulty disappears, but the second one still poses considerable practical problems. This contribution presents an abstract approach to linearized kinetic theory which employs the methods of functional analysis. A kinetic electron equation with elastic electron-neutral interaction is studied in the electrostatic approximation. Under certain boundary conditions, a nonlinear functional, the kinetic free energy, exists which has the properties of a Lyapunov functional. In the linear regime, the functional becomes a quadratic form which motivates the definition of a bilinear scalar product, turning the space of all distribution functions into a Hilbert space. The linearized kinetic equation can then be described in terms of dynamical operators with well-defined properties. Abstract solutions can be constructed which have mathematically plausible properties. As an example, the formalism is applied to the multipole resonance probe (MRP). Under the assumption of a Maxwellian background distribution, the kinetic model of that diagnostic device is compared to a previously investigated fluid model.
Component Modeling Approach Software Tool
2010-08-23
The Component Modeling Approach Software Tool (CMAST) establishes a set of performance libraries of approved components (frames, glass, and spacers) which can be accessed for configuring fenestration products for a project, and obtaining a U-factor, Solar Heat Gain Coefficient (SHGC), and Visible Transmittance (VT) rating for those products, which can then be reflected in a CMA Label Certificate for code compliance. CMAST is web-based as well as client-based. The completed CMA program and software tool will be useful in several ways for a vast array of stakeholders in the industry: generating performance ratings for bidding projects; ascertaining credible and accurate performance data; and obtaining third-party certification of overall product performance for code compliance.
Modelling approaches for evaluating multiscale tendon mechanics.
Fang, Fei; Lake, Spencer P
2016-02-01
Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure-function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments.
Systematic approach for modeling tetrachloroethene biodegradation
Bagley, D.M.
1998-11-01
The anaerobic biodegradation of tetrachloroethene (PCE) is a reasonably well understood process. Specific organisms capable of using PCE as an electron acceptor for growth require the addition of an electron donor to remove PCE from contaminated ground waters. However, competition from other anaerobic microorganisms for the added electron donor will influence the rate and completeness of PCE degradation. The approach developed here allows for the explicit modeling of PCE and byproduct biodegradation as a function of electron donor and byproduct concentrations, and the microbiological ecology of the system. The approach is general and can be easily modified for ready use with in situ ground-water models or ex situ reactor models. Simulations conducted with models developed from this approach show the sensitivity of PCE biodegradation to input parameter values, in particular initial biomass concentrations. Additionally, the dechlorination rate will be strongly influenced by the microbial ecology of the system. Finally, comparison with experimental acclimation results indicates that existing kinetic constants may not be generally applicable. Better techniques for measuring the biomass of specific organism groups in mixed systems are required.
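A minimal sketch of the kind of model this approach yields, assuming Monod kinetics for a dechlorinating population competing with a second population for the same electron donor. All parameter values, variable names and the stoichiometric coupling are hypothetical placeholders for illustration, not values from the paper.

```python
import numpy as np

def rates(y, p):
    """Monod kinetics: a dechlorinating population XD uses electron donor S
    to degrade PCE; a competing population XC consumes the same donor
    without degrading PCE."""
    S, PCE, XD, XC = y
    mu_d = p['kD'] * S / (p['KsD'] + S) * PCE / (p['KsP'] + PCE)
    mu_c = p['kC'] * S / (p['KsC'] + S)
    dS = -mu_d * XD - mu_c * XC              # donor consumed by both groups
    dPCE = -p['Yp'] * mu_d * XD              # dechlorination tied to donor use
    dXD = p['YD'] * mu_d * XD - p['bD'] * XD # growth minus decay
    dXC = p['YC'] * mu_c * XC - p['bC'] * XC
    return np.array([dS, dPCE, dXD, dXC])

def simulate(y0, p, dt=0.01, tmax=10.0):
    """Forward-Euler integration, clipped at zero to keep concentrations physical."""
    y = np.asarray(y0, dtype=float)
    for _ in range(int(tmax / dt)):
        y = np.maximum(y + dt * rates(y, p), 0.0)
    return y

p = dict(kD=2.0, KsD=0.1, KsP=0.05, kC=1.5, KsC=0.2,
         Yp=1.0, YD=0.1, YC=0.1, bD=0.02, bC=0.02)
final = simulate([5.0, 1.0, 0.05, 0.05], p)   # [S, PCE, XD, XC]
```

Varying the initial biomass split between XD and XC in such a sketch reproduces the qualitative sensitivity to initial biomass concentrations that the abstract reports.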
NASA Astrophysics Data System (ADS)
Choubey, Sanjay K.; Mariadasse, Richard; Rajendran, Santhosh; Jeyaraman, Jeyakanthan
2016-12-01
Overexpression of HDAC1, a member of the Class I histone deacetylases, is reported to be implicated in breast cancer. Epigenetic alteration in carcinogenesis has been a thrust of research for a few decades. Increased deacetylation leads to accelerated cell proliferation, cell migration, angiogenesis and invasion. HDAC1 is therefore regarded as a potential drug target for the treatment of breast cancer. In this study, the biochemical potential of 6-aminonicotinamide derivatives was rationalized. A five-point pharmacophore model with one hydrogen-bond acceptor (A3), two hydrogen-bond donors (D5, D6), one ring (R12) and one hydrophobic group (H8) was developed using 6-aminonicotinamide derivatives. The pharmacophore hypothesis yielded a 3D-QSAR model with good correlation coefficients (r2 = 0.977, q2 = 0.801) and was externally validated (r2pred = 0.929, r2cv = 0.850 and r2m = 0.856), indicating a statistically significant model with high predictive power. The model was then employed as a 3D search query for virtual screening against compound libraries (Zinc, Maybridge, Enamine, Asinex, Toslab, LifeChem and Specs) in order to identify novel scaffolds that can be experimentally validated to design future drug molecules. Density Functional Theory (DFT) at the B3LYP/6-31G* level was employed to explore the electronic features of the ligands involved in charge transfer during receptor-ligand interaction. Binding free energies (ΔGbind) were calculated using MM/GBSA, which quantifies the affinity of the ligands towards the receptor.
Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R; Liu, Yong; Yang, Yong; Eickhoff, Simon B; Jiang, Tianzi
2015-08-15
Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a 'core' co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies.
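The abstract above states that a permutation test serves as the significance test for co-activation but does not give the statistic. The following toy sketch (all names and data are hypothetical) illustrates the general idea for a single pair of voxels, permuting one voxel's activation labels across experiments:

```python
import random

def coactivation_pvalue(a, b, n_perm=2000, seed=0):
    """Toy permutation test for co-activation of two voxels.
    a, b: per-experiment binary activation indicators (1 = activated).
    Null hypothesis: b's activations are exchangeable across experiments."""
    rng = random.Random(seed)
    observed = sum(x & y for x, y in zip(a, b))
    b_perm = list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(b_perm)
        if sum(x & y for x, y in zip(a, b_perm)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Strong overlap across ten experiments -> small p-value.
p = coactivation_pvalue([1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                        [1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
```

A whole-brain version would run this per voxel pair and then separate significant pairs into local and long-range co-activations by distance, as the abstract describes.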
Green-function approach to the theory of tunneling ionization
NASA Astrophysics Data System (ADS)
Fabrikant, I. I.; Zhao, L. B.
2015-05-01
We solve the problem of tunneling ionization of a multielectron atom in a static electric field by using the Green's function for the Stark-Coulomb problem. This allows us to incorporate the outgoing-wave boundary conditions at infinity. The interaction of the active electron with the atomic residue is described either by a model potential or by an l-dependent pseudopotential which prevents virtual transitions to orbitals occupied by inner electrons. The method works well over a broad range of electric fields, including the region above the classical ionization threshold (the barrier-suppression region). Calculations of ionization of Ar demonstrate a noticeable difference between the model potential approach and the pseudopotential approach, but both sets of results agree with experimental data.
Simulation of sprays using a Lagrangian filtered density function approach
NASA Astrophysics Data System (ADS)
Liu, Wanjiao; Garrick, Sean
2013-11-01
Sprays and atomization have wide applications in industry, including combustion/engines, pharmaceutics and agricultural spraying. Due to the complexity of the underlying processes, many of the associated phenomena are not fully understood. Numerical simulation may provide ways to investigate atomization and spray dynamics. Large eddy simulation (LES) is a practical approach to flow simulation as it resolves only the large-scale structures while modeling the sub-grid scale (SGS) effects. We combine a filtered density function (FDF) based approach with a Lagrangian volume-of-fluid method to perform LES. The resulting methodology is advantageous in that it has no diffusive or dissipative numerical errors, and the highly non-linear surface tension force appears in closed form, so modeling of the SGS surface tension is not needed when simulating turbulent, multiphase flows. We present the methodology and some results for the simulation of multiphase jets.
dos Santos, Sandra C.; Teixeira, Miguel C.; Dias, Paulo J.; Sá-Correia, Isabel
2014-01-01
Multidrug/Multixenobiotic resistance (MDR/MXR) is a widespread phenomenon with clinical, agricultural and biotechnological implications, where MDR/MXR transporters that are presumably able to catalyze the efflux of multiple cytotoxic compounds play a key role in the acquisition of resistance. However, although these proteins have been traditionally considered drug exporters, the physiological function of MDR/MXR transporters and the exact mechanism of their involvement in resistance to cytotoxic compounds are still open to debate. In fact, the wide range of structurally and functionally unrelated substrates that these transporters are presumably able to export has puzzled researchers for years. The discussion has now shifted toward the possibility of at least some MDR/MXR transporters exerting their effect as the result of a natural physiological role in the cell, rather than through the direct export of cytotoxic compounds, while the hypothesis that MDR/MXR transporters may have evolved in nature for other purposes than conferring chemoprotection has been gaining momentum in recent years. This review focuses on the drug transporters of the Major Facilitator Superfamily (MFS; drug:H+ antiporters) in the model yeast Saccharomyces cerevisiae. New insights into the natural roles of these transporters are described and discussed, focusing on the knowledge obtained or suggested by post-genomic research. The new information reviewed here provides clues into the unexpectedly complex roles of these transporters, including a proposed indirect regulation of the stress response machinery and control of membrane potential and/or internal pH, with a special emphasis on a genome-wide view of the regulation and evolution of MDR/MXR-MFS transporters. PMID:24847282
Introducing Linear Functions: An Alternative Statistical Approach
ERIC Educational Resources Information Center
Nolan, Caroline; Herbert, Sandra
2015-01-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…
Food Protein Functionality--A New Model.
Foegeding, E Allen
2015-12-01
Proteins in foods serve dual roles as nutrients and structural building blocks. The concept of protein functionality has historically been restricted to nonnutritive functions--such as creating emulsions, foams, and gels--but this places sole emphasis on food quality considerations and potentially overlooks modifications that may also alter nutritional quality or allergenicity. A new model is proposed that addresses the function of proteins in foods based on the length scale(s) responsible for the function. Properties such as flavor binding, color, allergenicity, and digestibility are explained based on the structure of individual molecules; placing this functionality at the nano/molecular scale. At the next higher scale, applications in foods involving gelation, emulsification, and foam formation are based on how proteins form secondary structures that are seen at the nano and microlength scales, collectively called the mesoscale. The macroscale structure represents the arrangements of molecules and mesoscale structures in a food. Macroscale properties determine overall product appearance, stability, and texture. The historical approach of comparing among proteins based on forming and stabilizing specific mesoscale structures remains valid but emphasis should be on a common means for structure formation to allow for comparisons across investigations. For applications in food products, protein functionality should start with identification of functional needs across scales. Those needs are then evaluated relative to how processing and other ingredients could alter desired molecular scale properties, or proper formation of mesoscale structures. This allows for a comprehensive approach to achieving the desired function of proteins in foods.
Wu, Jian; Singla, Mithun; Olmi, Claudio; Shieh, Leang S; Song, Gangbing
2010-07-01
In this paper, a scalar sign function-based digital design methodology is developed for modeling and control of a class of analog nonlinear systems that are constrained by absolute value functions. Many real systems are subject to constraints described by non-smooth functions such as the absolute value function, and this non-smooth, nonlinear nature poses significant challenges for modeling and control. To overcome these difficulties, the novel idea proposed in this work is to use a scalar sign function approach to transform the original nonlinear and non-smooth model into a smooth nonlinear rational function model. Based on the resulting smooth model, a systematic digital controller design procedure is established, in which an optimal linearization method, LQR design and digital implementation through an advanced digital redesign technique are applied in sequence. The example of tracking control of a piezoelectric actuator system is used throughout the paper to illustrate the proposed methodology.
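The paper's exact transformation is not reproduced in the abstract, but the core trick can be illustrated: since |x| = x·sgn(x), replacing the scalar sign function with a smooth rational approximation turns the non-smooth absolute value into a smooth rational model (away from x = 0). One classical choice, offered here purely as an illustration and not necessarily the authors' construction, is the Newton iteration for the sign function, whose iterates are rational functions of x:

```python
def sign_approx(x, iters=6):
    """Newton iteration s <- (s + 1/s)/2 converges to sign(x) for any
    real x != 0; each iterate is a rational function of x, so the
    approximation is smooth wherever it is defined (it blows up at x = 0)."""
    s = x
    for _ in range(iters):
        s = 0.5 * (s + 1.0 / s)
    return s

def abs_smooth(x, iters=6):
    # |x| = x * sign(x), with sign(x) replaced by its rational approximation
    return x * sign_approx(x, iters)
```

Six iterations already reproduce |x| to near machine precision for moderate |x|, which is the kind of smooth surrogate a linearization-based controller design can work with.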
Mixture Models for Distance Sampling Detection Functions
Miller, David L.; Thomas, Len
2015-01-01
We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used “key function plus series adjustment” (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set. PMID:25793744
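A half-normal mixture detection function of the kind described above is easy to write down, and the monotonicity property the abstract highlights holds by construction whenever the weights are non-negative and sum to one. A minimal sketch (the weights and scale parameters are hypothetical):

```python
import math

def half_normal(x, sigma):
    """Half-normal key function g(x) = exp(-x^2 / (2 sigma^2))."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def mixture_detection(x, weights, sigmas):
    """Finite mixture of half-normal key functions. With non-negative
    weights summing to 1, g(0) = 1 and g is monotonic non-increasing,
    so no constrained optimization is needed to honour the distance
    sampling assumptions."""
    return sum(w * half_normal(x, s) for w, s in zip(weights, sigmas))

# A narrow component plus a wide one gives a "spiked" shape at small
# distances -- the case where the abstract reports mixtures do well.
g = [mixture_detection(d / 10.0, [0.7, 0.3], [1.0, 5.0]) for d in range(51)]
```

In a real analysis the weights and scales would be fitted by maximum likelihood to the observed perpendicular distances rather than fixed as here.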
A new approach for developing adjoint models
NASA Astrophysics Data System (ADS)
Farrell, P. E.; Funke, S. W.
2011-12-01
Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
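The "model as a sequence of linear solves" abstraction above can be illustrated for a single solve: if the forward model solves A x = b and the misfit is J = cᵀx, the full gradient dJ/db comes from one adjoint solve Aᵀλ = c, with no need to differentiate the solver's source code. A hedged numpy sketch (matrices and the misfit are invented for illustration):

```python
import numpy as np

# Forward model: one linear solve A x = b; misfit functional J = c . x.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
x = np.linalg.solve(A, b)

# Adjoint: a single extra solve with A^T yields the gradient dJ/db = lambda.
lam = np.linalg.solve(A.T, c)

# Cross-check against finite differences of the forward model.
eps = 1e-7
fd = np.array([(c @ np.linalg.solve(A, b + eps * e) - c @ x) / eps
               for e in np.eye(2)])
```

A library built on this abstraction records each solve (the "tape") and replays it transposed and in reverse, which is why only the small nonlinear assembly operators ever need algorithmic differentiation.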
Statistical Approaches to Functional Neuroimaging Data
DuBois Bowman, F; Guo, Ying; Derado, Gordana
2007-01-01
Synopsis: The field of statistics makes valuable contributions to functional neuroimaging research by establishing procedures for the design and conduct of neuroimaging experiments and by providing tools for objectively quantifying and measuring the strength of scientific evidence provided by the data. Two common functional neuroimaging research objectives include detecting brain regions that reveal task-related alterations in measured brain activity (activations) and identifying highly correlated brain regions that exhibit similar patterns of activity over time (functional connectivity). In this article, we highlight various statistical procedures for analyzing data from activation studies and from functional connectivity studies, focusing on functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) data. We also discuss emerging statistical methods for prediction using fMRI and PET data, which stand to increase the translational significance of functional neuroimaging data to clinical practice. PMID:17983962
Functional Error Models to Accelerate Nested Sampling
NASA Astrophysics Data System (ADS)
Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.
2014-12-01
The main challenge in groundwater problems is the reliance on large numbers of unknown parameters with a wide range of associated uncertainties. To translate this uncertainty into quantities of interest (for instance, the concentration of pollutant in a drinking well), a large number of forward flow simulations is required. To make the problem computationally tractable, Josset et al. (2013, 2014) introduced the concept of functional error models. It consists of two elements: a proxy model that is cheaper to evaluate than the full physics flow solver, and an error model to account for the missing physics. The coupling of the proxy model and the error model provides reliable predictions that approximate the full physics model's responses. The error model is tailored to the problem at hand by building it for the question of interest. It follows a typical approach in machine learning: both the full physics and proxy models are evaluated for a training set (a subset of realizations) and the set of responses is used to construct the error model using functional data analysis. Once the error model is devised, a prediction of the full physics response for a new geostatistical realization can be obtained by computing the proxy response and applying the error model. We propose the use of functional error models in a Bayesian inference context by combining them with Nested Sampling (Skilling 2006; Elsheikh et al. 2013, 2014). Nested Sampling offers a means to compute the Bayesian evidence by transforming the multidimensional integral into a 1D integral. The algorithm is simple: starting with an active set of samples, at each iteration, the sample with the lowest likelihood is set aside and replaced by a sample of higher likelihood. The main challenge is to find this sample of higher likelihood. We suggest a new approach: first, the active set is sampled, both proxy and full physics models are run, and the functional error model is built. Then, at each iteration of the Nested
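The nested-sampling loop described above (discard the lowest-likelihood sample, replace it with one of higher likelihood, accumulate evidence from the shrinking prior volume) can be sketched in a few lines. This toy version uses naive rejection sampling for the replacement step and neglects the final live-point correction, so it illustrates the algorithm's shape rather than providing a usable sampler:

```python
import math
import random

def logaddexp(a, b):
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling(loglike, prior_sample, n_live=100, n_iter=600, seed=1):
    """Minimal nested-sampling sketch (Skilling 2006): at each iteration
    the lowest-likelihood live point is removed and replaced by a fresh
    prior draw of higher likelihood (naive rejection; production codes
    replace this step with something far smarter, e.g. a proxy-guided
    search as in the abstract above)."""
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    log_z = -math.inf
    for i in range(n_iter):
        worst = min(live, key=loglike)
        # deterministic prior-volume shrinkage: X_i = exp(-i / n_live)
        log_w = math.log(math.exp(-i / n_live) - math.exp(-(i + 1) / n_live))
        log_z = logaddexp(log_z, loglike(worst) + log_w)
        while True:  # rejection-sample a replacement above the threshold
            cand = prior_sample(rng)
            if loglike(cand) > loglike(worst):
                live[live.index(worst)] = cand
                break
    return log_z  # remaining live-point mass neglected for brevity

# Toy check: uniform prior on [-5, 5], Gaussian log-likelihood -x^2/2.
# True evidence Z = sqrt(2*pi)/10, i.e. log Z is about -1.38.
log_z = nested_sampling(lambda x: -0.5 * x * x,
                        lambda rng: rng.uniform(-5.0, 5.0))
```

The rejection step is exactly where a functional error model pays off: each candidate evaluation can use the cheap proxy plus error model instead of the full physics solver.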
A Functional Analytic Approach to Group Psychotherapy
ERIC Educational Resources Information Center
Vandenberghe, Luc
2009-01-01
This article provides a particular view on the use of Functional Analytical Psychotherapy (FAP) in a group therapy format. This view is based on the author's experiences as a supervisor of Functional Analytical Psychotherapy Groups, including groups for women with depression and groups for chronic pain patients. The contexts in which this approach…
Neuronal models of cognitive functions.
Changeux, J P; Dehaene, S
1989-11-01
Understanding the neural bases of cognition has become a scientifically tractable problem, and neurally plausible models are proposed to establish a causal link between biological structure and cognitive function. To this end, levels of organization have to be defined within the functional architecture of neuronal systems. Transitions from any one of these interacting levels to the next are viewed in an evolutionary perspective. They are assumed to involve: (1) the production of multiple transient variations and (2) the selection of some of them by higher levels via the interaction with the outside world. The time-scale of these "evolutions" is expected to differ from one level to the other. In the course of development and in the adult this internal evolution is epigenetic and does not require alteration of the structure of the genome. A selective stabilization (and elimination) of synaptic connections by spontaneous and/or evoked activity in developing neuronal networks is postulated to contribute to the shaping of the adult connectivity within an envelope of genetically encoded forms. At a higher level, models of mental representations, as states of activity of defined populations of neurons, are discussed in terms of statistical physics, and their storage is viewed as a process of selection among variable and transient pre-representations. Theoretical models illustrate that cognitive functions such as short-term memory and handling of temporal sequences may be constrained by "microscopic" physical parameters. Finally, speculations are offered about plausible neuronal models and selectionist implementations of intentions. PMID:2691185
A rational model of function learning.
Lucas, Christopher G; Griffiths, Thomas L; Williams, Joseph J; Kalish, Michael L
2015-10-01
Theories of how people learn relationships between continuous variables have tended to focus on two possibilities: one, that people are estimating explicit functions; or two, that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, which provide a probabilistic basis for similarity-based function learning, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a rational model of human function learning that combines the strengths of both approaches and accounts for a wide variety of experimental results.
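The equivalence the abstract relies on is easy to state in code: Gaussian process regression with a linear kernel is exactly Bayesian linear regression (explicit rules), while a similarity-based kernel such as the squared exponential gives generalization by similarity. A hedged numpy sketch of the GP posterior mean (kernel choice and data are illustrative only):

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    """Squared-exponential kernel: similarity decays with distance."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6, ell=1.0):
    """Posterior mean of GP regression. Swapping rbf for the linear
    kernel k(x, x') = x * x' recovers Bayesian linear regression --
    the two views of function learning discussed in the abstract."""
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train, ell) @ np.linalg.solve(K, y_train)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mu_train = gp_posterior_mean(x, y, np.array([1.0]))   # at a training point
mu_interp = gp_posterior_mean(x, y, np.array([1.5]))  # between points
```

Predictions at trained points reproduce the data, while predictions between points are smooth interpolations weighted by similarity, which is the behavioural signature the rational model inherits.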
Work Functions for Models of Scandate Surfaces
NASA Technical Reports Server (NTRS)
Mueller, Wolfgang
1997-01-01
The electronic structure, surface dipole properties, and work functions of scandate surfaces have been investigated using the fully relativistic scattered-wave cluster approach. Three different types of model surfaces are considered: (1) a monolayer of Ba-Sc-O on W(100), (2) Ba or BaO adsorbed on Sc2O3 + W, and (3) BaO on Sc2O3 + WO3. Changes in the work function due to Ba or BaO adsorption on the different surfaces are calculated by employing the depolarization model of interacting surface dipoles. The largest work function change and the lowest work function of 1.54 eV are obtained for Ba adsorbed on the Sc-O monolayer on W(100). The adsorption of Ba on Sc2O3 + W does not lead to a low work function, but the adsorption of BaO results in a work function of about 1.6-1.9 eV. BaO adsorbed on Sc2O3 + WO3, or scandium tungstates, may also lead to low work functions.
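The depolarization model of interacting surface dipoles mentioned above is commonly written in Topping-type form, where mutual depolarization reduces the effective dipole moment as coverage grows. A hedged sketch of that standard form (the paper's exact parametrization is not reproduced here, and all parameter values are hypothetical):

```python
def work_function_change(n, mu0=1e-29, alpha=5e-30):
    """Topping-type depolarization model (a standard textbook form, not
    necessarily the paper's exact one). n: dipole surface density (m^-2),
    mu0: isolated dipole moment (C m), alpha: polarizability (m^3).
    Returns the coverage-dependent change in surface potential (V)."""
    eps0 = 8.8541878128e-12
    mu_eff = mu0 / (1.0 + 9.0 * alpha * n ** 1.5)  # mutual depolarization
    return mu_eff * n / eps0

# Doubling the coverage gives less than double the work-function change;
# at high coverage depolarization can even reduce the net change.
low = work_function_change(1e18)
high = work_function_change(2e18)
```

This sublinear saturation is why adsorbed Ba/BaO layers reach a minimum work function at some optimal coverage rather than improving indefinitely.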
Translation: Towards a Critical-Functional Approach
ERIC Educational Resources Information Center
Sadeghi, Sima; Ketabi, Saeed
2010-01-01
The controversy over the place of translation in the teaching of English as a Foreign Language (EFL) is a thriving field of inquiry. Many older language teaching methodologies such as the Direct Method, the Audio-lingual Method, and Natural and Communicative Approaches, tended to either neglect the role of translation, or prohibit it entirely as a…
Functional Approaches to Written Text: Classroom Applications.
ERIC Educational Resources Information Center
Miller, Tom, Ed.
Noting that little in language can be understood without taking into consideration the wider picture of communicative purpose, content, context, and audience, this book addresses practical uses of various approaches to discourse analysis. Several assumptions run through the chapters: knowledge is socially constructed; the manner in which language…
Evaluating face trustworthiness: a model based approach
Baron, Sean G.; Oosterhof, Nikolaas N.
2008-01-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response—as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic—strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102
Evaluating face trustworthiness: a model based approach.
Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N
2008-06-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response-as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic--strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102
Linearized Functional Minimization for Inverse Modeling
Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco
2012-06-21
Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional of the hydraulic conductivity and hydraulic head fields, defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piecewise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
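The three ingredients of the functional above ((i) flow-equation coupling, (ii) data misfit, (iii) smoothness penalty) can be illustrated on a 1D toy problem. For brevity this sketch minimizes the functional by finite-difference gradient descent rather than the paper's local-linearization solver; the grid size, penalty weight and synthetic "truth" are all hypothetical:

```python
import numpy as np

def solve_head(K, h_left=1.0, h_right=0.0):
    """Steady 1D flow -d/dx(K dh/dx) = 0 on a uniform grid of len(K)
    cells with Dirichlet boundary heads; returns heads at all nodes."""
    n = len(K)
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = K[i] + K[i + 1]
        if i > 0:
            A[i, i - 1] = -K[i]
        if i < n - 2:
            A[i, i + 1] = -K[i + 1]
    b[0] += K[0] * h_left
    b[-1] += K[-1] * h_right
    return np.concatenate([[h_left], np.linalg.solve(A, b), [h_right]])

def functional(log_k, obs_i, obs_h, lam=1e-2):
    """(i) flow equation enforced through the forward solve, (ii) misfit
    at head measurements, (iii) penalty on non-smooth conductivity."""
    h = solve_head(np.exp(log_k))
    return np.sum((h[obs_i] - obs_h) ** 2) + lam * np.sum(np.diff(log_k) ** 2)

# Synthetic truth: a low-conductivity facies embedded in a high one.
k_true = np.array([1.0, 1.0, 0.1, 0.1, 1.0])
obs_i = np.array([1, 2, 3, 4])
obs_h = solve_head(k_true)[obs_i]

log_k = np.zeros(5)                      # initial guess: uniform K = 1
J0 = J = functional(log_k, obs_i, obs_h)
for _ in range(50):                      # FD gradient descent, backtracking
    g = np.array([(functional(log_k + 1e-6 * e, obs_i, obs_h) - J) / 1e-6
                  for e in np.eye(5)])
    step = 1.0
    while step > 1e-12 and functional(log_k - step * g, obs_i, obs_h) >= J:
        step /= 2.0
    if step <= 1e-12:
        break
    log_k = log_k - step * g
    J = functional(log_k, obs_i, obs_h)
```

The paper's linearized solver replaces the crude finite-difference gradient with an explicit linearization of the conductivity-to-head map, which is what makes the full 2D/3D problem tractable.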
A Model Approach to Teacher Education.
ERIC Educational Resources Information Center
Monaco, Theresa M.; Chiappetta, Eugene L.
A "model approach" to teacher education specifies the development of a model of an idealized learning environment. One way to create a model as a real entity as opposed to a written document is to operationalize model classrooms that exemplify the type of instruction desired. The model described here goes hand in hand with the university-based and…
Transfer function modeling of damping mechanisms in distributed parameter models
NASA Technical Reports Server (NTRS)
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
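Complex stiffness, as used above, is the standard frequency-domain representation of hysteretic material damping. A single-degree-of-freedom illustration (not the paper's distributed-parameter formulation; the parameter values are hypothetical):

```python
def receptance(omega, m=1.0, k=1.0, eta=0.05):
    """Hysteretic damping via complex stiffness k(1 + i*eta):
    steady-state receptance H(w) = x/F = 1 / (k(1 + i*eta) - m w^2)
    for a single-DOF model. eta is the material loss factor."""
    return 1.0 / (k * (1.0 + 1j * eta) - m * omega ** 2)

peak = abs(receptance(1.0))    # at the undamped natural frequency sqrt(k/m)
static = abs(receptance(0.0))  # quasi-static response
```

At resonance the complex-stiffness term alone bounds the response, |H| = 1/(k·eta), whereas the undamped classical solution would diverge; the transfer-function formulation extends this idea to the boundary value problems of rods, beams and plates.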
Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon
2016-07-27
Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to systematically design stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, types and distribution of functional groups by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the types, number, and distribution of oxygen functional groups. First, the carbonyl group induces higher redox potential than the hydroxyl group. Second, more carbonyl groups would result in higher redox potential. Lastly, the locally concentrated distribution of the carbonyl group is more beneficial to have higher redox potential compared to the uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials. PMID:27412373
Functional quantum computing: An optical approach
NASA Astrophysics Data System (ADS)
Rambo, Timothy M.; Altepeter, Joseph B.; Kumar, Prem; D'Ariano, G. Mauro
2016-05-01
Recent theoretical investigations treat quantum computations as functions, quantum processes which operate on other quantum processes, rather than circuits. Much attention has been given to the N -switch function which takes N black-box quantum operators as input, coherently permutes their ordering, and applies the result to a target quantum state. This is something which cannot be equivalently done using a quantum circuit. Here, we propose an all-optical system design which implements coherent operator permutation for an arbitrary number of input operators.
Numerical approaches to combustion modeling
Oran, E.S.; Boris, J.P.
1991-01-01
This book presents a series of topics ranging from microscopic combustion physics to several aspects of macroscopic reactive-flow modeling. As the reader progresses into the book, the successive chapters generally include a wider range of physical and chemical processes in the mathematical model. Including more processes, however, usually means that they will be represented phenomenologically at a cruder level. In practice the detailed microscopic models and simulations are often used to develop and calibrate the phenomenologies used in the macroscopic models. The book first describes computations of the most microscopic chemical processes, then considers laminar flames and detonation modeling, and ends with computations of complex, multiphase combustion systems.
Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach
ERIC Educational Resources Information Center
Frisby, Craig L.; Parkin, Jason R.
2007-01-01
In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…
Recent molecular approaches to understanding astrocyte function in vivo
Davila, David; Thibault, Karine; Fiacco, Todd A.; Agulhon, Cendra
2013-01-01
Astrocytes are a predominant glial cell type in the nervous system, and are becoming recognized as important mediators of normal brain function as well as neurodevelopmental, neurological, and neurodegenerative brain diseases. Although numerous potential mechanisms have been proposed to explain the role of astrocytes in the normal and diseased brain, research into the physiological relevance of these mechanisms in vivo is just beginning. In this review, we summarize recent developments in innovative and powerful molecular approaches, including knockout mouse models, transgenic mouse models, and astrocyte-targeted gene transfer/expression, which have led to advances in understanding astrocyte biology in vivo that were heretofore inaccessible to experimentation. We examine the recently improved understanding of the roles of astrocytes, with an emphasis on astrocyte signaling, in the context of both the healthy and diseased brain, discuss areas where the role of astrocytes remains debated, and suggest new research directions. PMID:24399932
Enzyme function prediction with interpretable models.
Syed, Umar; Yona, Golan
2009-01-01
Enzymes play central roles in metabolic pathways, and the prediction of metabolic pathways in newly sequenced genomes usually starts with the assignment of genes to enzymatic reactions. However, genes with similar catalytic activity are not necessarily similar in sequence, and therefore the traditional sequence-similarity-based approach often fails to identify the relevant enzymes, thus hindering efforts to map the metabolome of an organism. Here we study the direct relationship between basic protein properties and their function. Our goal is to develop a new tool for functional prediction (e.g., prediction of Enzyme Commission number), which can be used to complement and support other techniques based on sequence or structure information. In order to define this mapping we collected a set of 453 features and properties that characterize proteins and are believed to be related to structural and functional aspects of proteins. We introduce a mixture model of stochastic decision trees to learn the set of potentially complex relationships between features and function. To study these correlations, trees are created and tested on the Pfam classification of proteins, which is based on sequence, and the EC classification, which is based on enzymatic function. The model is very effective in learning highly diverged protein families or families that are not defined on the basis of sequence. The resulting tree structures highlight the properties that are strongly correlated with structural and functional aspects of protein families, and can be used to suggest a concise definition of a protein family.
Quantum thermodynamics: a nonequilibrium Green's function approach.
Esposito, Massimiliano; Ochoa, Maicol A; Galperin, Michael
2015-02-27
We establish the foundations of a nonequilibrium theory of quantum thermodynamics for noninteracting open quantum systems strongly coupled to their reservoirs within the framework of the nonequilibrium Green's functions. The energy of the system and its coupling to the reservoirs are controlled by a slow external time-dependent force treated to first order beyond the quasistatic limit. We derive the four basic laws of thermodynamics and characterize reversible transformations. Stochastic thermodynamics is recovered in the weak coupling limit. PMID:25768745
A moving approach for the Vector Hysteron Model
NASA Astrophysics Data System (ADS)
Cardelli, E.; Faba, A.; Laudani, A.; Quondam Antonio, S.; Riganti Fulginei, F.; Salvini, A.
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both the scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation imposed by the congruency property of the standard VHM approach. Using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, and constitutes a real improvement with respect to the previous approach.
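For intuition, the elementary building block of hysteron models is the bistable relay. A scalar Preisach-style ensemble, sketched below, is a simplified scalar analogue only, not the vector VHM or its moving/accommodation procedure; thresholds are illustrative:

```python
# Hedged sketch: a classical scalar Preisach-type hysteron ensemble, the kind
# of building block vector hysteron models generalize. Illustrative thresholds.

class Relay:
    """Elementary hysteron: switches up when field >= alpha, down when <= beta."""
    def __init__(self, alpha, beta, state=-1):
        assert beta <= alpha
        self.alpha, self.beta, self.state = alpha, beta, state

    def update(self, h):
        if h >= self.alpha:
            self.state = +1
        elif h <= self.beta:
            self.state = -1
        return self.state

def magnetization(relays, h):
    """Normalized magnetization: mean state of the ensemble after applying h."""
    return sum(r.update(h) for r in relays) / len(relays)

# Symmetric grid of switching thresholds in [-0.5, 0.5].
relays = [Relay(a / 10.0, -b / 10.0) for a in range(1, 6) for b in range(1, 6)]
up = [magnetization(relays, h / 10.0) for h in range(-10, 11)]       # ascending sweep
down = [magnetization(relays, h / 10.0) for h in range(10, -11, -1)]  # descending sweep
print(up[10], down[10])  # at h = 0 the two branches disagree: hysteresis
```

The two sweeps give different magnetization at the same field, which is the memory effect the VHM captures; the moving approach additionally lets the threshold distribution depend on the magnetization state.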
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. An implemented example illustrates how this approach can be used to derive models of differing precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
ONION: Functional Approach for Integration of Lipidomics and Transcriptomics Data
Piwowar, Monika; Jurkowski, Wiktor
2015-01-01
To date, the massive quantity of data generated by high-throughput techniques has not yet received the bioinformatics treatment required to make full use of it. This is partially due to a mismatch in experimental and analytical study design, but primarily due to a lack of adequate analytical approaches. When integrating multiple data types, e.g. transcriptomics and metabolomics, multidimensional statistical methods are currently the techniques of choice. Typical statistical approaches, such as canonical correlation analysis (CCA), that are applied to find associations between metabolites and genes fail because the number of observations (e.g. conditions, diets, etc.) is small in comparison to the data size (number of genes and metabolites). Modifications designed to cope with this issue are not ideal: they either require adding simulated data, which precludes p-value computation, or prune variables, thereby losing potentially valid information. Instead, our approach makes use of verified or putative molecular interactions or functional associations to guide analysis. The workflow includes dividing data sets to reach the expected data structure, statistical analysis within groups, and interpretation of results. By applying pathway and network analysis, data obtained by various platforms are grouped with moderate stringency to avoid functional bias. As a consequence, CCA and other multivariate models can be applied to calculate robust statistics and provide easy-to-interpret associations between metabolites and genes, leveraging understanding of the metabolic response. Effective integration of lipidomics and transcriptomics is demonstrated on publicly available murine nutrigenomics data sets. We are able to demonstrate that our approach improves detection of genes related to lipid metabolism, in comparison to applying statistics alone. This is measured by an increased percentage of explained variance (95% vs. 75–80%) and by identifying new metabolite-gene associations related to lipid
Functional genomics approach to hypoxia signaling.
Seta, Karen A; Millhorn, David E
2004-02-01
Mammalian cells require a constant supply of oxygen to maintain energy balance, and sustained hypoxia can result in cell death. It is therefore not surprising that sophisticated adaptive mechanisms have evolved that enhance cell survival during hypoxia. During the past few years, there have been a growing number of reports on hypoxia-induced transcription of specific genes. In this review, we describe a unique experimental approach that utilizes focused cDNA libraries coupled to microarray analyses to identify hypoxia-responsive signal transduction pathways and genes that confer the hypoxia-tolerant phenotype. We have used the subtractive suppression hybridization (SSH) method to create a cDNA library enriched in hypoxia-regulated genes in oxygen-sensing pheochromocytoma cells and have used this library to create microarrays that allow us to examine hundreds of genes at a time. This library contains over 300 genes and expressed sequence tags upregulated by hypoxia, including tyrosine hydroxylase, vascular endothelial growth factor, and junB. Hypoxic regulation of these and other genes in the library has been confirmed by microarray, Northern blot, and real-time PCR analyses. Coupling focused SSH libraries with microarray analyses allows one to specifically study genes relevant to a phenotype of interest while reducing much of the biological noise associated with these types of studies. When used in conjunction with high-throughput, dye-based assays for cell survival and apoptosis, this approach offers a rapid method for discovering validated therapeutic targets for the treatment of cardiovascular disease, stroke, and tumors. PMID:14715686
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
ERIC Educational Resources Information Center
deLannoy, Peter; And Others
1996-01-01
Describes an integrated approach to teaching a biochemistry laboratory focusing on the relationship between the three-dimensional structure of a macromolecule and its function. RNA is chosen as the model system. Discusses curriculum and student assessment. (AIM)
A Unified Approach to Modeling Multidisciplinary Interactions
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Bhatia, Kumar G.
2000-01-01
There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n² - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n² - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
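The interaction-count arithmetic above can be checked directly: pairwise coupling needs an interface for every ordered pair of distinct disciplines, while routing through a shared geometry model needs only one transfer to and one from the hub per discipline.

```python
# Sketch of the interaction-count bookkeeping (discipline names abstracted away).

def pairwise_interactions(n):
    """Every ordered pair of distinct disciplines exchanges data directly."""
    return n * n - n

def hub_interactions(n):
    """Each discipline talks only to a central CAD model: one in, one out."""
    return 2 * n

for n in (3, 5, 10):
    print(n, pairwise_interactions(n), hub_interactions(n))
# For n = 10 disciplines: 90 direct interfaces vs. 20 through the hub.
```

The hub approach wins for n > 2, and the gap grows quadratically with the number of disciplines.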
Sturmian function approach and N̄N bound states
Yan, Y.; Tegen, R.; Gutsche, T.; Faessler, A.
1997-09-01
A suitable numerical approach based on Sturmian functions is employed to solve the N̄N bound state problem for local and nonlocal potentials. The approach accounts for both the strong short-range nuclear potential and the long-range Coulomb force and provides directly the wave function of protonium and N̄N deep bound states with complex eigenvalues E = E_R - i(Γ/2). The spectrum of N̄N bound states has two parts: the atomic states, bound by several keV, and the deep bound states, which are bound by several hundred MeV. The observed very small hyperfine splitting of the 1s level and the 1s and 2p decay widths are reasonably well reproduced by both the Paris and Bonn potentials (supplemented with a microscopically derived quark annihilation potential), although there are differences in magnitude and level ordering. We present further arguments for the identification of the ¹³PF₂ deep bound state with the exotic tensor meson f₂(1520). Both investigated models can accommodate the f₂(1520) but differ greatly in the total number of levels and in their ordering. The model based on the Paris potential predicts the ¹³P₀ level slightly below 1.1 GeV, while the model based on the Bonn potential puts this state below 0.8 GeV. It remains to be seen if this state can be identified with a scalar partner of the f₂(1520). © 1997 The American Physical Society
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Cooper, D. Craig; Palmer, Carl D.; Smith, Robert W.; McLing, Travis L.
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations based on a subset (typically silica, cations, or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several "forward" geochemical models using The Geochemist's Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperatures during exploration and early development.
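One of the classical correlation-based geothermometers contrasted above can be sketched in a few lines. The quartz (no steam loss) form T = 1309/(5.19 - log10(SiO2)) - 273.15, with SiO2 in mg/kg, follows Fournier's widely used correlation; the coefficients are quoted from memory here and should be treated as illustrative, not as a substitute for the multicomponent approach the abstract proposes:

```python
import math

# Hedged sketch: a single-mineral silica geothermometer of the kind the
# multicomponent equilibrium approach is compared against. Coefficients follow
# the commonly quoted quartz (no steam loss) correlation; treat as illustrative.

def quartz_geothermometer_celsius(sio2_mg_per_kg):
    """Estimated reservoir temperature (deg C) from dissolved silica."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

print(round(quartz_geothermometer_celsius(300.0), 1))  # ≈ 209.4
```

Note what the single-constituent formula cannot see: boiling, steam loss, and mixing all shift the silica concentration, which is exactly the error source the forward-modeling approach above is designed to quantify.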
Genest, Alexander; Woiterski, André; Krüger, Sven; Shor, Aleksey M; Rösch, Notker
2006-01-01
To validate the IMOMM (integrated molecular orbitals/molecular mechanics) method for ligand-stabilized transition metal clusters, we compare results of this combined quantum mechanical and molecular mechanical (QM/MM) approach, as implemented in the program ParaGauss (Kerdcharoen, T.; Birkenheuer, U.; Krüger, S.; Woiterski, A.; Rösch, N. Theor. Chem. Acc. 2003, 109, 285), to a full density functional (DF) treatment. For this purpose, we have chosen a model copper ethylthiolate cluster, Cu13(SCH2CH3)8 in D4h symmetry. The evaluation is based on 16 conformers of the cluster which exhibit single and bridging coordination of the ligands at the Cu13 cluster as well as various ligand orientations. For corresponding isomers, we obtained moderate deviations between QM and QM/MM results: 0.01-0.06 Å for pertinent bond lengths and up to ∼15° for bond angles. Ligand binding energies of the two approaches deviated less than 6 kcal/mol. The largest discrepancies between full DF and IMOMM results were found for isomers exhibiting short Cu-H and H-H contacts. We traced this back to the localization of different minima, reflecting the unequal performance of the DF and the force-field methods for nonbonding interactions. Thus, QM/MM results can be considered as more reliable because of the well-known limitations of standard exchange-correlation functionals for the description of nonbonding interactions for this class of systems.
Matrix model approach to cosmology
NASA Astrophysics Data System (ADS)
Chaney, A.; Lu, Lei; Stern, A.
2016-03-01
We perform a systematic search for rotationally invariant cosmological solutions to toy matrix models. These models correspond to the bosonic sector of Lorentzian Ishibashi, Kawai, Kitazawa and Tsuchiya (IKKT)-type matrix models in dimensions d less than ten, specifically d = 3 and d = 5. After taking a continuum (or commutative) limit they yield (d - 1)-dimensional Poisson manifolds. The manifolds have a Lorentzian induced metric which can be associated with closed, open, or static space-times. For d = 3, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a resolution of cosmological singularities, at least within the context of the toy matrix models. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the d = 3 solutions have analogues in higher dimensions. The case of d = 5, in particular, has the potential for yielding realistic four-dimensional cosmologies in the continuum limit. We find four-dimensional de Sitter (dS₄) or anti-de Sitter (AdS₄) solutions when a totally antisymmetric term is included in the matrix action. A nontrivial Poisson structure is attached to these manifolds which represents the lowest order effect of noncommutativity. For the case of AdS₄, we find one particular limit where the lowest order noncommutativity vanishes at the boundary, but not in the interior.
Nonrelativistic approaches derived from point-coupling relativistic models
Lourenco, O.; Dutra, M.; Delfino, A.; Sa Martins, J. S.
2010-03-15
We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small-component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A = B = 0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ³, leads to parametrizations for which σ = 1.
HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION
Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...
Defining and Applying a Functionality Approach to Intellectual Disability
ERIC Educational Resources Information Center
Luckasson, R.; Schalock, R. L.
2013-01-01
Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…
Hardy, Simon; Robillard, Pierre N
2004-12-01
Petri nets are a discrete-event simulation approach developed for system representation, in particular for representing concurrency and synchronization properties. Various extensions to the original theory of Petri nets have been used for modeling molecular biology systems and metabolic networks. These extensions are stochastic, colored, hybrid, and functional. This paper carries out an initial review of the various Petri net-based modeling approaches found in the literature, and of the biological systems that have been successfully modeled with these approaches. Moreover, the modeling goals and possibilities of qualitative analysis and system simulation of each approach are discussed.
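A minimal place/transition net, the base formalism these extensions build on, can be sketched in a few lines. The species and reaction names are made-up illustrations, not from the paper:

```python
# Hedged sketch: a minimal place/transition Petri net executor. No stochastic,
# colored, hybrid, or functional extensions; illustrative toy reaction only.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)  # place -> token count
        self.transitions = {}         # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy metabolic step: 2 substrate + 1 enzyme -> 1 product + 1 enzyme.
net = PetriNet({"substrate": 4, "enzyme": 1, "product": 0})
net.add_transition("react", {"substrate": 2, "enzyme": 1},
                            {"product": 1, "enzyme": 1})
while net.enabled("react"):
    net.fire("react")
print(net.marking)  # substrate exhausted after two firings
```

Tokens model molecule counts and transitions model reactions; the stochastic extension attaches firing rates, and the colored extension attaches data to tokens.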
Current approaches to enhance glutamate transporter function and expression.
Fontana, Andréia C K
2015-09-01
L-glutamate is the predominant excitatory neurotransmitter in the CNS and has a central role in a variety of brain functions. The termination of glutamate neurotransmission by excitatory amino acid transporters (EAATs) is essential to maintain glutamate concentration low in extracellular space and avoid excitotoxicity. EAAT2/GLT-1, being the most abundant subtype of glutamate transporter in the CNS, plays a key role in regulation of glutamate transmission. Dysfunction of EAAT2 has been correlated with various pathologies such as traumatic brain injury, stroke, amyotrophic lateral sclerosis, Alzheimer's disease, among others. Therefore, activators of the function or enhancers of the expression of EAAT2/GLT-1 could serve as a potential therapy for these conditions. Translational activators of EAAT2/GLT-1, such as ceftriaxone and LDN/OSU-0212320, have been described to have significant protective effects in animal models of amyotrophic lateral sclerosis and epilepsy. In addition, pharmacological activators of the activity of EAAT2/GLT-1 have been explored for decades and are currently emerging as promising tools for neuroprotection, having potential advantages over expression activators. This review describes the current status of the search for EAAT2/GLT-1 activators and addresses challenges and limitations that this approach might encounter. Termination of glutamate neurotransmission by glutamate transporter EAAT2 is essential to maintain homeostasis in the brain and to avoid excitotoxicity. Dysfunction of EAAT2 has been correlated with various neurological pathologies. Therefore, activators of the function or enhancers of the expression of EAAT2 could serve as a potential therapy for these conditions. This review describes the current status of the search for EAAT2 activators and addresses challenges and limitations of this approach. PMID:26096891
Challenges in structural approaches to cell modeling.
Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A
2016-07-31
Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. PMID:27255863
Social learning in Models and Cases - an Interdisciplinary Approach
NASA Astrophysics Data System (ADS)
Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger
2016-04-01
Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories of social learning in innovation, diffusion and transition research, and present theoretical understandings of social learning in techno-economic and agent-based modelling. We then elaborate on empirical research on social learning in transition case studies, identifying and synthesizing their key dimensions of social learning. Next, we bridge the more formal, generalizing modelling approaches to social learning processes and the more descriptive, individualizing case-study approaches by interpreting the case-study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then vary, by way of example, functional forms of social learning in integrated assessment models. We conclude by drawing lessons from the interdisciplinary approach, both methodologically and empirically.
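To make "functional forms of social learning" concrete: diffusion models often encode imitation as a logistic growth term, where adoption spreads through contact between adopters and non-adopters. A minimal hedged sketch with illustrative, uncalibrated parameters (not a form taken from the paper):

```python
# Hedged sketch: logistic imitation as one candidate functional form of social
# learning in a diffusion model. Parameter values are illustrative only.

def logistic_adoption(f0, imitation_rate, steps):
    """Discrete-time logistic diffusion: each step, df = r * f * (1 - f)."""
    f = f0
    path = [f]
    for _ in range(steps):
        f += imitation_rate * f * (1.0 - f)
        path.append(f)
    return path

path = logistic_adoption(f0=0.01, imitation_rate=0.5, steps=30)
print(round(path[-1], 3))  # adoption saturates near 1: the familiar S-curve
```

Swapping this functional form (e.g. for threshold-based or network-mediated imitation) is exactly the kind of variation the paper proposes to explore inside integrated assessment models.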
Roth, Jason L.; Capel, Paul D.
2012-01-01
Crop agriculture occupies 13 percent of the conterminous United States. Agricultural management practices, such as crop and tillage types, affect the hydrologic flow paths through the landscape. Some agricultural practices, such as drainage and irrigation, create entirely new hydrologic flow paths upon the landscapes where they are implemented. These hydrologic changes can affect the magnitude and partitioning of water budgets and sediment erosion. Given the wide degree of variability amongst agricultural settings, changes in the magnitudes of hydrologic flow paths and sediment erosion induced by agricultural management practices are commonly difficult to characterize, quantify, and compare using only field observations. The Water Erosion Prediction Project (WEPP) model was used to simulate two landscape characteristics (slope and soil texture) and three agricultural management practices (land cover/crop type, tillage type, and selected agricultural land management practices) to evaluate their effects on the water budgets of and sediment yield from agricultural lands. An array of sixty-eight 60-year simulations was run, each representing a distinct natural or agricultural scenario with various slopes, soil textures, crop or land cover types, tillage types, and select agricultural management practices on an isolated 16.2-hectare field. Simulations were made to represent two common agricultural climate regimes: arid with sprinkler irrigation and humid. These climate regimes were constructed with actual climate and irrigation data. The results of these simulations demonstrate the magnitudes of potential changes in water budgets and sediment yields from lands as a result of the landscape characteristics and the agricultural practices adopted on them. These simulations showed that variations in landscape characteristics, such as slope and soil type, had appreciable effects on water budgets and sediment yields. As slopes increased, sediment yields increased in both the arid and
A Hierarchical Systems Approach to Model Validation
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.
2011-12-01
Existing approaches to the question of how climate models should be evaluated tend to rely either on philosophical arguments about the status of models as scientific tools, or on empirical arguments about how well runs from a given model match observational data. These have led to quantitative approaches expressed in terms of model bias or forecast skill, and ensemble approaches where models are assessed according to the extent to which the ensemble brackets the observational data. Unfortunately, such approaches focus the evaluation on models per se (or more specifically, on the simulation runs they produce) as though the models can be isolated from their context. Such an approach may overlook a number of important aspects of the use of climate models: - the process by which models are selected and configured for a given scientific question. - the process by which model outputs are selected, aggregated and interpreted by a community of expertise in climatology. - the software fidelity of the models (i.e. whether the running code is actually doing what the modellers think it's doing). - the (often convoluted) history that begat a given model, along with the modelling choices long embedded in the code. - variability in the scientific maturity of different model components within a coupled system. These omissions mean that quantitative approaches cannot assess whether a model produces the right results for the wrong reasons, or conversely, the wrong results for the right reasons (where, say, the observational data are problematic, or the model is configured to be unlike the earth system for a specific reason). Hence, we argue that it is a mistake to think that validation is a post-hoc process to be applied to an individual "finished" model, to ensure it meets some criteria for fidelity to the real world. We are therefore developing a framework for model validation that extends current approaches down into the detailed codebase and the processes by which the code is built
Inverse Modeling Via Linearized Functional Minimization
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Wohlberg, B.; Vesselinov, V. V.; Tartakovsky, D. M.
2014-12-01
We present a novel parameter estimation methodology for transient models of geophysical systems with uncertain, spatially distributed, heterogeneous and piece-wise continuous parameters. The methodology employs a Bayesian approach to pose an inverse modeling problem for the spatial configuration of the model parameters. The likelihood of a configuration is formulated using sparse measurements of both model parameters and transient states. We propose using total variation (TV) regularization as the prior, reflecting the heterogeneous, piece-wise continuity assumption on the parameter distribution. The maximum a posteriori (MAP) estimator of the parameter configuration is then computed by minimizing the negative Bayesian log-posterior using a linearized functional minimization approach. The computation of the MAP estimator is a large-dimensional nonlinear minimization problem with two sources of nonlinearity: (1) the TV operator, and (2) the nonlinear relation between states and parameters given by the model's governing equations. We propose a hybrid linearized functional minimization (LFM) algorithm in two stages to treat both sources of nonlinearity efficiently. The relation between states and parameters is linearized, resulting in a linear minimization sub-problem equipped with the TV operator; this sub-problem is then minimized using the Alternating Direction Method of Multipliers (ADMM). The methodology is illustrated with a transient saturated groundwater flow application in a synthetic domain, stimulated by external point-wise loadings representing aquifer pumping, together with an array of discrete measurements of hydraulic conductivity and transient measurements of hydraulic head. We show that our inversion strategy is able to recover the overall large-scale features of the parameter configuration, and that the reconstruction is improved by the addition of transient information about the state variable.
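The ADMM-solved sub-problem described above (a quadratic data-fit term plus a TV penalty) can be sketched in one dimension. This is a generic illustration under assumed settings (identity forward operator, hypothetical `lam` and `rho` values), not the authors' implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_admm(A, y, lam=0.05, rho=1.0, iters=300):
    """Minimize 0.5*||A u - y||^2 + lam*||D u||_1 by scaled ADMM,
    where D is the 1-D forward-difference operator (a TV prior)."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)       # (n-1, n) difference matrix
    z = np.zeros(n - 1)
    w = np.zeros(n - 1)                  # scaled dual variable
    M = A.T @ A + rho * D.T @ D          # normal matrix for the u-update
    for _ in range(iters):
        u = np.linalg.solve(M, A.T @ y + rho * D.T @ (z - w))
        z = soft_threshold(D @ u + w, lam / rho)
        w = w + D @ u - z
    return u

# Denoising a piecewise-constant "parameter field" (A = identity)
u_true = np.array([0.0] * 10 + [1.0] * 10)
u_hat = tv_admm(np.eye(20), u_true)
```

Here the z-update is the soft-thresholding step for the l1 term; in the paper's setting the quadratic term would come from re-linearizing the state-parameter relation at each outer iteration.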
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
Defining mental disorder. Exploring the 'natural function' approach
2011-01-01
Due to several socio-political factors, to many psychiatrists only a strictly objective definition of mental disorder, free of value components, seems really acceptable. In this paper, I will explore a variant of such an objectivist approach to defining mental disorder: natural function objectivism. Proponents of this approach make recourse to the notion of natural function in order to reach a value-free definition of mental disorder. The exploration of Christopher Boorse's 'biostatistical' account of natural function (1) will be followed by an investigation of the 'hybrid naturalism' approach to natural functions by Jerome Wakefield (2). In the third part, I will explore two proposals that call into question the whole attempt to define mental disorder (3). I will conclude that while 'natural function objectivism' accounts fail to provide the backdrop for a reliable definition of mental disorder, there is no compelling reason to conclude that a definition cannot be achieved. PMID:21255405
Combining Formal and Functional Approaches to Topic Structure
ERIC Educational Resources Information Center
Zellers, Margaret; Post, Brechtje
2012-01-01
Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to…
A functional language approach in high-speed digital simulation
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Lu, S.-L.
1983-01-01
A functional programming approach for a multi-microprocessor architecture is presented. The language, based on Backus FP, its intermediate form and the translation process are discussed and illustrated with an example. The approach allows performance analysis to be performed at a high level as an aid in program partitioning.
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
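As a toy illustration of the kind of Markov model that such tools generate (not HARP itself), consider a hypothetical two-unit system with repair; state 2 is the absorbing "system failed" state, and the failure and repair rates are assumed values:

```python
import numpy as np

lam = 1e-3   # per-unit failure rate (assumed)
mu = 1e-1    # repair rate (assumed)
# Generator matrix Q; rows sum to zero; state 2 ("system failed") absorbs.
Q = np.array([[-2 * lam,      2 * lam,  0.0],
              [      mu, -(mu + lam),   lam],
              [     0.0,          0.0,  0.0]])

def transient(Q, p0, t, steps=20000):
    """Integrate dp/dt = p Q with explicit Euler (adequate for tiny models;
    large models need the sparse solvers the paper discusses)."""
    p = p0.copy()
    h = t / steps
    for _ in range(steps):
        p = p + h * (p @ Q)
    return p

p = transient(Q, np.array([1.0, 0.0, 0.0]), t=100.0)
unreliability = p[2]   # probability the system has failed by t = 100
```

For realistic models the state space is far too large for dense arithmetic, which is exactly why the paper combines truncation with sparse matrix methods.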
Function Model for Community Health Service Information
NASA Astrophysics Data System (ADS)
Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong
In order to construct a function model of community health service (CHS) information for the development of a CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard extended from the Structured Analysis and Design Technique (SADT) and now a widely used function modeling method, was used to classify the information from top to bottom. The contents of every level of the model were described and coded. A function model for CHS information, which includes 4 super-classes, 15 classes and 28 sub-classes of business function, 43 business processes and 168 business activities, was then established. This model can facilitate information management system development and workflow refinement.
Shell Model in a First Principles Approach
Navratil, P; Nogga, A; Lloyd, R; Vary, J P; Ormand, W E; Barrett, B R
2004-01-08
We develop and apply an ab-initio approach to nuclear structure. Starting with the NN interaction that fits two-body scattering and bound-state data, and adding a theoretical NNN potential, we evaluate nuclear properties in a no-core approach. For presently feasible no-core model spaces, we evaluate an effective Hamiltonian in a cluster approach which is guaranteed to provide exact answers for sufficiently large model spaces and/or sufficiently large clusters. A number of recent applications are surveyed, including an initial application to exotic multiquark systems.
NASA Astrophysics Data System (ADS)
Cecchet, F.; Lis, D.; Caudano, Y.; Mani, A. A.; Peremans, A.; Champagne, B.; Guthmuller, J.
2012-03-01
The knowledge of the first hyperpolarizability tensor elements of molecular groups is crucial for a quantitative interpretation of the sum frequency generation (SFG) activity of thin organic films at interfaces. Here, the SFG response of the terminal methyl group of a dodecanethiol (DDT) monolayer has been interpreted on the basis of calculations performed at the density functional theory (DFT) level of approximation. In particular, DFT calculations have been carried out on three classes of models for the aliphatic chains. The first class of models consists of aliphatic chains, containing from 3 to 12 carbon atoms, in which only one methyl group can freely vibrate, while the rest of the chain is frozen by a strong overweight of its C and H atoms. This enables us to localize the probed vibrational modes on the methyl group. In the second class, only one methyl group is frozen, while the entire remaining chain is allowed to vibrate. This enables us to analyse the influence of the aliphatic chain on the methyl stretching vibrations. Finally, the dodecanethiol (DDT) molecule is considered, for which the effects of two dielectrics, i.e. n-hexane and n-dodecane, are investigated. Moreover, DDT calculations are also carried out by using different exchange-correlation (XC) functionals in order to assess the DFT approximations. Using the DFT IR vectors and Raman tensors, the SFG spectrum of DDT has been simulated and the orientation of the methyl group has then been deduced and compared with that obtained using an analytical approach based on a bond additivity model. This analysis shows that when using DFT molecular properties, the predicted orientation of the terminal methyl group tends to converge as a function of the alkyl chain length and that the effects of the chain as well as of the dielectric environment are small. Instead, a more significant difference is observed when comparing the DFT-based results with those obtained from the analytical approach, thus indicating
Improving Treatment Integrity through a Functional Approach to Intervention Support
ERIC Educational Resources Information Center
Liaupsin, Carl J.
2015-01-01
A functional approach to intervention planning has been shown to be effective in reducing problem behaviors and promoting appropriate behaviors in children and youth with behavior disorders. When function-based intervention plans are not successful, it is often due to issues of treatment integrity in which teachers omit or do not sufficiently…
Belief Function Model for Information Retrieval.
ERIC Educational Resources Information Center
Silva, Wagner Teixeira da; Milidiu, Ruy Luiz
1993-01-01
Describes the Belief Function Model for automatic indexing and ranking of documents which is based on a controlled vocabulary and on term frequencies in each document. Belief Function Theory is explained, and the Belief Function Model is compared to the Standard Vector Space Model. (17 references) (LRW)
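The abstract compares the Belief Function Model against the Standard Vector Space Model; for reference, a minimal term-frequency vector-space ranker (the baseline, not the belief-function method itself) can be sketched as follows, with a toy corpus as the assumed input:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    """Rank document ids by cosine similarity of term-frequency vectors."""
    q = Counter(query.split())
    scored = [(cosine(q, Counter(text.split())), doc_id)
              for doc_id, text in docs.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True)]

docs = {  # toy corpus standing in for a controlled-vocabulary collection
    "d1": "belief function model ranking",
    "d2": "vector space model ranking",
    "d3": "unrelated cooking recipes",
}
order = rank("belief function", docs)
```

The belief-function alternative replaces these cosine scores with evidence combination over the controlled vocabulary, but the indexing inputs (term frequencies per document) are the same.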
Hybrid approaches to physiologic modeling and prediction
NASA Astrophysics Data System (ADS)
Olengü, Nicholas O.; Reifman, Jaques
2005-05-01
This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
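A minimal version of the hybrid idea (a first-principles prediction corrected by a data-driven AR(1) model of its residuals) can be sketched with synthetic data; the "physics" and "truth" curves below are stand-ins, not the paper's thermoregulatory model or its rectal-temperature data:

```python
import numpy as np

def fit_ar1(resid):
    """Least-squares AR(1) coefficient for r[t+1] = a * r[t]."""
    x, y = resid[:-1], resid[1:]
    return float(x @ y / (x @ x))

t = np.arange(60, dtype=float)
physics = 37.0 + 1.0 * (1 - np.exp(-t / 20))   # first-principles prediction
truth   = 37.0 + 1.5 * (1 - np.exp(-t / 20))   # synthetic "measured" core temp

resid_train = truth[:40] - physics[:40]        # residuals on a training window
a = fit_ar1(resid_train)

# Hybrid one-step-ahead forecast: physics output plus propagated residual.
hybrid = physics[40:] + a * (truth[39:-1] - physics[39:-1])

err_physics = np.mean(np.abs(truth[40:] - physics[40:]))
err_hybrid = np.mean(np.abs(truth[40:] - hybrid))
```

The same structure accommodates the paper's nonlinear variant by swapping the AR(1) fit for a small neural network trained on the residual series.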
A penalized likelihood approach for mixture cure models.
Corbière, Fabien; Commenges, Daniel; Taylor, Jeremy M G; Joly, Pierre
2009-02-01
Cure models have been developed to analyze failure time data with a cured fraction. For such data, standard survival models are usually not appropriate because they do not account for the possibility of cure. Mixture cure models assume that the studied population is a mixture of susceptible individuals, who may experience the event of interest, and non-susceptible individuals that will never experience it. Important issues in mixture cure models are estimation of the baseline survival function for susceptibles and estimation of the variance of the regression parameters. The aim of this paper is to propose a penalized likelihood approach, which allows for flexible modeling of the hazard function for susceptible individuals using M-splines. This approach also permits direct computation of the variance of parameters using the inverse of the Hessian matrix. Properties and limitations of the proposed method are discussed and an illustration from a cancer study is presented.
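The mixture cure structure can be written down directly: the population survival function mixes a cured fraction pi with a susceptible survival curve. The sketch below uses an exponential susceptible model for simplicity; the paper's approach instead models the susceptible hazard flexibly with M-splines:

```python
import math

def mixture_cure_survival(t, pi, rate):
    """S(t) = pi + (1 - pi) * exp(-rate * t): the cured fraction never fails."""
    return pi + (1.0 - pi) * math.exp(-rate * t)

def log_likelihood(times, events, pi, rate):
    """Censored-data log-likelihood for the exponential mixture cure model."""
    ll = 0.0
    for t, d in zip(times, events):
        if d:   # observed event: only susceptibles contribute the density
            ll += math.log((1.0 - pi) * rate * math.exp(-rate * t))
        else:   # censored: the survivor function includes the cured fraction
            ll += math.log(mixture_cure_survival(t, pi, rate))
    return ll

# toy data: two observed events and one censored observation
ll = log_likelihood([0.5, 1.2, 3.0], [1, 1, 0], pi=0.3, rate=0.5)
```

Note the defining property tested below: S(0) = 1, and S(t) plateaus at pi rather than decaying to zero, which is why standard survival models are inappropriate for cured fractions.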
Heterogeneous Factor Analysis Models: A Bayesian Approach.
ERIC Educational Resources Information Center
Ansari, Asim; Jedidi, Kamel; Dube, Laurette
2002-01-01
Developed Markov Chain Monte Carlo procedures to perform Bayesian inference, model checking, and model comparison in heterogeneous factor analysis. Tested the approach with synthetic data and data from a consumption emotion study involving 54 consumers. Results show that traditional psychometric methods cannot fully capture the heterogeneity in…
Modeling diffuse pollution with a distributed approach.
León, L F; Soulis, E D; Kouwen, N; Farquhar, G J
2002-01-01
The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.
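The group response unit idea (compute the response per land cover class, then aggregate by area weight before routing) reduces to a weighted average; a schematic sketch with hypothetical class values, not WATFLOOD's actual data structures:

```python
def gru_aggregate(classes):
    """Area-weighted aggregation of per-land-cover-class unit responses."""
    total_area = sum(c["area_km2"] for c in classes)
    return sum(c["area_km2"] * c["unit_response"] for c in classes) / total_area

# hypothetical grid cell: 75% cropland, 25% forest unit runoff responses
cell = [
    {"area_km2": 3.0, "unit_response": 2.0},   # cropland
    {"area_km2": 1.0, "unit_response": 6.0},   # forest
]
cell_response = gru_aggregate(cell)   # routed downstream afterwards
```

Because parameters attach to land cover classes rather than to the calibrated watershed, the same class parameters can be reused on a new watershed whose GIS-derived class areas differ, which is the portability argument of the paper.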
Selectionist and evolutionary approaches to brain function: a critical appraisal.
Fernando, Chrisantha; Szathmáry, Eörs; Husbands, Phil
2012-01-01
We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman's theory of neuronal group selection, Changeux's theory of synaptic selection and selective stabilization of pre-representations, Seung's Darwinian synapse, Loewenstein's synaptic melioration, Adam's selfish synapse, and Calvin's replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. All of them fit, however, a generalized selectionist framework conforming to the picture of Price's covariance formulation, which deliberately was not specific even to selection in biology, and therefore does not imply an algorithmic picture of biological evolution. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Examples are given of cases where parallel competitive search with information transfer among the units is more efficient than search without information transfer between units. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. Although none of the proposed neuronal replicators include miraculous mechanisms, their identification remains a challenge but also a great promise.
Crossing Hazard Functions in Common Survival Models
Zhang, Jiajia; Peng, Yingwei
2010-01-01
Crossing hazard functions have extensive applications in modeling survival data. However, existing studies in the literature mainly focus on comparing crossed hazard functions and estimating the time at which the hazard functions cross, and there is little theoretical work on conditions under which hazard functions from a model will have a crossing. In this paper, we investigate crossing status of hazard functions from the proportional hazards (PH) model, the accelerated hazard (AH) model, and the accelerated failure time (AFT) model. We provide and prove conditions under which the hazard functions from the AH and the AFT models have no crossings or a single crossing. A few examples are also provided to demonstrate how the conditions can be used to determine crossing status of hazard functions from the three models. PMID:20613974
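The crossing conditions can be checked numerically for concrete hazard families. The sketch below (an illustration, not the paper's proofs) contrasts two Weibull hazards with different shape parameters, which cross exactly once, with a proportional-hazards pair, which never crosses:

```python
import math

def weibull_hazard(t, shape, scale=1.0):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)^(shape-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def count_crossings(h1, h2, grid):
    """Count sign changes of h1 - h2 over a grid of positive time points."""
    signs = [h1(t) - h2(t) > 0 for t in grid]
    return sum(s != signs[i] for i, s in enumerate(signs[1:]))

grid = [0.01 * k for k in range(1, 1001)]        # t in (0, 10]
h_dec = lambda t: weibull_hazard(t, shape=0.8)   # decreasing hazard
h_inc = lambda t: weibull_hazard(t, shape=1.5)   # increasing hazard

crossings = count_crossings(h_dec, h_inc, grid)
# PH pair: one hazard is a constant multiple of the other, so no crossing
ph_crossings = count_crossings(h_inc, lambda t: 2.0 * weibull_hazard(t, 1.5), grid)
```

This mirrors the paper's point: hazards from a PH model with a common baseline cannot cross, while AFT-type families with differing shapes can.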
Measuring Social Returns to Higher Education Investments in Hong Kong: Production Function Approach.
ERIC Educational Resources Information Center
Voon, Jan P.
2001-01-01
Uses a growth model involving an aggregate production function to measure social benefits from human capital improvements due to investments in Hong Kong higher education. Returns calculated using the production-function approach are significantly higher than those derived from the wage-increment method. Returns declined during the past 10 years.…
The McMaster Model of Family Functioning.
ERIC Educational Resources Information Center
Epstein, Nathan B; And Others
1978-01-01
The model of family functioning being presented is the product of over 20 years of research in clinical work with family units. The model uses a general systems theory approach in an attempt to describe the structure, organization, and transactional patterns of the family unit. (Author)
Filtered density function approach for reactive transport in groundwater
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Schüler, Lennart; Attinger, Sabine; Knabner, Peter
2016-04-01
Spatial filtering may be used in coarse-grained simulations (CGS) of reactive transport in groundwater, similar to the large eddy simulations (LES) in turbulence. The filtered density function (FDF), stochastically equivalent to a probability density function (PDF), provides a statistical description of the sub-grid, unresolved, variability of the concentration field. Besides closing the chemical source terms in the transport equation for the mean concentration, like in LES-FDF methods, the CGS-FDF approach aims at quantifying the uncertainty over the whole hierarchy of heterogeneity scales exhibited by natural porous media. Practically, that means estimating concentration PDFs on coarse grids, at affordable computational costs. To cope with the high dimensionality of the problem in case of multi-component reactive transport and to reduce the numerical diffusion, FDF equations are solved by particle methods. But, while trajectories of computational particles are modeled as stochastic processes indexed by time, the concentration's heterogeneity is modeled as a random field, with multi-dimensional, spatio-temporal sets of indices. To overcome this conceptual inconsistency, we consider FDFs/PDFs of random species concentrations weighted by conserved scalars and we show that their evolution equations can be formulated as Fokker-Planck equations describing stochastically equivalent processes in concentration-position spaces. Numerical solutions can then be approximated by the density in the concentration-position space of an ensemble of computational particles governed by the associated Itô equations. Instead of sequential particle methods we use a global random walk (GRW) algorithm, which is stable, free of numerical diffusion, and practically insensitive to the increase of the number of particles. We illustrate the general FDF approach and the GRW numerical solution for a reduced complexity problem consisting of the transport of a single scalar in groundwater
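The deterministic variant of the global random walk can be sketched on a 1-D grid: all particles in a cell are redistributed at once according to the jump probabilities, rather than moved one by one. The jump fraction `r`, grid size, and particle count below are assumed illustration values:

```python
import numpy as np

def grw_step(n, r=0.5):
    """One unbiased GRW step: a fraction r of the n[i] particles in cell i
    jumps, split evenly left/right; the rest stay (deterministic splitting).
    Particles jumping off the grid are lost (absorbing boundaries)."""
    out = (1.0 - r) * n
    out[:-1] += 0.5 * r * n[1:]    # arrivals from the right neighbour
    out[1:] += 0.5 * r * n[:-1]    # arrivals from the left neighbour
    return out

n = np.zeros(101)
n[50] = 1e6                        # point injection at the centre
for _ in range(200):
    n = grw_step(n)

cells = np.arange(101.0)
mass = n.sum()
mean = (cells * n).sum() / mass
var = ((cells - mean) ** 2 * n).sum() / mass   # grows by 0.5 per step
```

Because whole cell populations move together, the cost per step is independent of the number of particles, which is the practical advantage over sequential particle tracking mentioned in the abstract.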
Models of Protocellular Structure, Function and Evolution
NASA Technical Reports Server (NTRS)
New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids, in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are then vastly multiplied using the polymerase chain reaction.
New approach to folding with the Coulomb wave function
Blokhintsev, L. D.; Savin, D. A.; Kadyrov, A. S.; Mukhamedzhanov, A. M.
2015-05-15
Due to the long-range character of the Coulomb interaction, the theoretical description of low-energy nuclear reactions with charged particles still remains a formidable task. One way of dealing with the problem in an integral-equation approach is to employ a screened Coulomb potential. A general approach without screening requires folding of the kernels of the integral equations with the Coulomb wave. A new method of folding a function with the Coulomb partial waves is presented. The partial-wave Coulomb function, both in the configuration and momentum representations, is written in the form of a separable series. Each term of the series is represented as a product of a factor depending only on the Coulomb parameter and a function depending on the spatial variable (in the configuration representation) or the momentum variable (in the momentum representation). Using a trial function, the method is demonstrated to be efficient and reliable.
Thilaga, M; Vijayalakshmi, R; Nadarajan, R; Nandagopal, D
2016-06-01
The complex nature of neuronal interactions of the human brain has posed many challenges to the research community. To explore the underlying mechanisms of neuronal activity of cohesive brain regions during different cognitive activities, many innovative mathematical and computational models are required. This paper presents a novel Common Functional Pattern Mining approach to demonstrate the similar patterns of interactions due to common behavior of certain brain regions. The electrode sites of EEG-based functional brain network are modeled as a set of transactions and node-based complex network measures as itemsets. These itemsets are transformed into a graph data structure called Functional Pattern Graph. By mining this Functional Pattern Graph, the common functional patterns due to specific brain functioning can be identified. The empirical analyses show the efficiency of the proposed approach in identifying the extent to which the electrode sites (transactions) are similar during various cognitive load states. PMID:27401999
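The transaction/itemset framing described above (electrode sites as transactions, discretized node-level network measures as items) can be illustrated with plain support counting. This brute-force sketch, with invented item labels, is a stand-in for the paper's Functional Pattern Graph mining, not a reproduction of it:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=2):
    """Return itemsets (up to max_size items) whose support, i.e. the
    fraction of transactions containing them, meets min_support."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    n = len(transactions)
    return {c: v / n for c, v in counts.items() if v / n >= min_support}

# electrode sites as transactions; discretized measures as items (hypothetical)
transactions = [
    {"high_degree", "high_betweenness", "high_clustering"},   # e.g. one site
    {"high_degree", "high_betweenness"},                      # another site
    {"high_degree", "high_clustering"},                       # a third site
]
patterns = frequent_itemsets(transactions, min_support=0.6)
```

Itemsets that survive the support threshold are the "common functional patterns": combinations of network properties shared by many electrode sites under a given cognitive load.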
Functional volumes modeling: theory and preliminary assessment.
Fox, P T; Lancaster, J L; Parsons, L M; Xiong, J H; Zamarripa, F
1997-01-01
A construct for meta-analytic modeling of the functional organization of the human brain, termed functional volumes modeling (FVM), is presented and preliminarily tested. FVM uses the published literature to model brain functional areas as spatial probability distributions. The FVM statistical model estimates population variance (i.e., among individuals) from the variance observed among group-mean studies, these being the most prevalent type of study in the functional imaging literature. The FVM modeling strategy is tested by: (1) constructing an FVM of the mouth region of primary motor cortex using published, group-mean, functional imaging reports as input, and (2) comparing the confidence bounds predicted by that FVM with those observed in 10 normal subjects performing overt-speech tasks. The FVM model correctly predicted the mean location and spatial distribution of per-subject functional responses. FVM has a wide range of applications, including hypothesis testing for statistical parametric images.
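The variance-scaling step described (inferring among-individual spatial variance from the scatter of group-mean studies) rests on the fact that the variance of a mean over n subjects is the population variance divided by n. A schematic sketch under the simplifying assumption of an equal subject count per study; the published FVM statistical model is more elaborate:

```python
import statistics

def population_variance_from_group_means(group_mean_coords, n_per_study):
    """If each study reports a mean location over n subjects, then
    Var(study means) ~= Var(individuals) / n, so rescale by n.
    (Schematic version of the FVM variance estimate.)"""
    return n_per_study * statistics.variance(group_mean_coords)

# hypothetical x-coordinates (mm) of an activation peak in 5 group studies
peaks_x = [-48.0, -50.0, -46.0, -49.0, -47.0]
var_x = population_variance_from_group_means(peaks_x, n_per_study=10)
```

The resulting per-axis variances define the spatial probability distribution against which per-subject response locations can then be compared.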
Towards a Multiscale Approach to Cybersecurity Modeling
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on the theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
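The single-scale baseline that the all-pairs shortest-path discussion builds on is the classic Floyd-Warshall algorithm; a minimal version is below (the multiscale analog itself is not reproduced here), with hypothetical node roles in the comments:

```python
INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest-path distances; w[i][j] is the edge weight,
    INF if i and j are not adjacent, 0 on the diagonal."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# toy network: node 0 as the attacker's foothold, node 3 as the
# sensitive machine (hypothetical labels for illustration)
w = [[0,   1,   4,   INF],
     [1,   0,   2,   7],
     [4,   2,   0,   3],
     [INF, 7,   3,   0]]
d = floyd_warshall(w)
attack_distance = d[0][3]
```

In the multiscale setting, distances like `attack_distance` would be computed hierarchically, so that coarse-level estimates can stand in for expensive fine-level recomputation as the network changes under attack.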
Post-16 Biology--Some Model Approaches?
ERIC Educational Resources Information Center
Lock, Roger
1997-01-01
Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)
From Equation to Inequality Using a Function-Based Approach
ERIC Educational Resources Information Center
Verikios, Petros; Farmaki, Vassiliki
2010-01-01
This article presents features of a qualitative research study concerning the teaching and learning of school algebra using a function-based approach in a grade 8 class, of 23 students, in 26 lessons, in a state school of Athens, in the school year 2003-2004. In this article, we are interested in the inequality concept and our aim is to…
Questionnaire of Executive Function for Dancers: An Ecological Approach
ERIC Educational Resources Information Center
Wong, Alina; Rodriguez, Mabel; Quevedo, Liliana; de Cossio, Lourdes Fernandez; Borges, Ariel; Reyes, Alicia; Corral, Roberto; Blanco, Florentino; Alvarez, Miguel
2012-01-01
There is a current debate about the ecological validity of executive function (EF) tests. Consistent with the verisimilitude approach, this research proposes the Ballet Executive Scale (BES), a self-rating questionnaire that assimilates idiosyncratic executive behaviors of the classical dance community. The BES was administered to 149 adolescents,…
Beyond Resistance: A Functional Approach to Building a Shared Agenda.
ERIC Educational Resources Information Center
Janas, Monica; Boudreaux, Martine
1997-01-01
Notes full inclusion has become a reality in many schools. Discusses four basic types of resistance to such educational reforms. Suggests that a functional approach can help educators deal with resistance. Suggests learning to recognize resistance and to deal with it effectively facilitates the creation of alternatives for students with special…
A Functional Approach to Composition Offers an Alternative.
ERIC Educational Resources Information Center
Hartnett, Carolyn G.
1997-01-01
When it comes to teaching students how to correct errors in mechanics and usage, English composition teachers have a problem in determining what and how to teach. An approach is developing overseas that comes from a type of linguistics called "functional," because it describes how languages work rather than only their forms. A branch that has…
Bootstrapped models for intrinsic random functions
Campbell, K.
1988-08-01
Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
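The sample-reuse idea described above can be shown in miniature. This toy version bootstraps the standard error of a sample mean rather than functionals of an intrinsic random function, and the data values are illustrative:

```python
# Minimal sketch of the bootstrap: resample the data with replacement and
# re-estimate a statistic to approximate its sampling variability. The
# paper applies the same idea to generalized covariance estimation; this
# toy version only bootstraps the mean of a small 1-D sample.
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=0):
    """Bootstrap standard error of `stat` over `data`."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.stdev(reps)

data = [2.1, 2.9, 3.4, 1.8, 2.6, 3.1, 2.4, 2.8]
se = bootstrap_se(data)
```

For the mean, the bootstrap standard error should land near the analytic s/sqrt(n); the value of the method lies in statistics, such as kriging variances, for which no such closed form exists.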
Bootstrapped models for intrinsic random functions
Campbell, K.
1987-01-01
The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the "bootstrap" in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their "kriging variance," provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.
Tensor renormalization group approach to classical dimer models
NASA Astrophysics Data System (ADS)
Roychowdhury, Krishanu; Huang, Ching-Yu
2015-05-01
We analyze classical dimer models on a square and a triangular lattice using a tensor network representation of the dimers. The correlation functions are numerically calculated using the recently developed "tensor renormalization group" (TRG) technique. The partition function for the dimer problem can be calculated exactly by the Pfaffian method, which is used here as a platform for comparing the numerical results. The TRG approach turns out to be a powerful tool for describing gapped systems with exponentially decaying correlations very efficiently due to its fast convergence. This is the case for the dimer model on the triangular lattice. However, the convergence becomes very slow and unstable in the case of the square lattice where the model has algebraically decaying correlations. We highlight these aspects with numerical simulations and critically appraise the robustness of the TRG approach by contrasting the results for small and large system sizes against the exact calculations. Furthermore, we benchmark our TRG results with the classical Monte Carlo method.
An object-oriented approach to energy-economic modeling
Wise, M.A.; Fox, J.A.; Sands, R.D.
1993-12-01
In this paper, the authors discuss their experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class-hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class-definition stage to programming it in C++, a standard object-oriented programming language, is detailed. The authors then discuss the main differences between writing the object-oriented program and a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach, based on their experience building energy-economic models with procedure-oriented approaches and languages.
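The class-hierarchy idea, a base producer type exposing a production-function interface with concrete technologies as subclasses, can be sketched as follows. The paper's implementation was in C++; this is a Python sketch, and the class names (`Producer`, `CobbDouglas`) and parameter values are hypothetical, not the paper's classes:

```python
# Hypothetical sketch of an object-oriented class hierarchy built around a
# standard microeconomic production function. Names and numbers are
# illustrative, not from the paper's C++ model.
class Producer:
    """Abstract base: any technology that maps inputs to output."""
    def output(self, **inputs):
        raise NotImplementedError

class CobbDouglas(Producer):
    """Q = A * x1^a1 * x2^a2 * ... over named inputs."""
    def __init__(self, scale, exponents):
        self.scale = scale          # total factor productivity A
        self.exponents = exponents  # dict: input name -> exponent

    def output(self, **inputs):
        q = self.scale
        for name, alpha in self.exponents.items():
            q *= inputs[name] ** alpha
        return q

farm = CobbDouglas(scale=2.0, exponents={"labor": 0.5, "land": 0.5})
print(farm.output(labor=4.0, land=9.0))  # 2 * 2 * 3 = 12.0
```

New technologies then slot in as further subclasses without touching the market-clearing code that consumes the `Producer` interface, which is the modularity argument the abstract makes for the object-oriented approach.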
Approaches for functional analysis of flagellar proteins in African trypanosomes.
Oberholzer, Michael; Lopez, Miguel A; Ralston, Katherine S; Hill, Kent L
2009-01-01
The eukaryotic flagellum is a highly conserved organelle serving motility, sensory, and transport functions. Although genetic, genomic, and proteomic studies have led to the identification of hundreds of flagellar and putative flagellar proteins, precisely how these proteins function individually and collectively to drive flagellum motility and other functions remains to be determined. In this chapter we provide an overview of tools and approaches available for studying flagellum protein function in the protozoan parasite Trypanosoma brucei. We begin by outlining techniques for in vitro cultivation of both T. brucei life cycle stages, as well as transfection protocols for the delivery of DNA constructs. We then describe specific assays used to assess flagellum function including flagellum preparation and quantitative motility assays. We conclude the chapter with a description of molecular genetic approaches for manipulating gene function. In summary, the availability of potent molecular tools, as well as the health and economic relevance of T. brucei as a pathogen, combine to make the parasite an attractive and integral experimental system for the functional analysis of flagellar proteins. PMID:20409810
Järvinen, Anna; Ng, Rowena; Bellugi, Ursula
2015-11-01
Williams syndrome (WS) is a neurogenetic disorder that is saliently characterized by a unique social phenotype, most notably associated with a dramatically increased affinity and approachability toward unfamiliar people. Despite a recent proliferation of studies into the social profile of WS, the underpinnings of the pro-social predisposition are poorly understood. To this end, the present study was aimed at elucidating approach behavior of individuals with WS contrasted with typical development (TD) by employing a multidimensional design combining measures of autonomic arousal, social functioning, and two levels of approach evaluations. Given previous evidence suggesting that approach behaviors of individuals with WS are driven by a desire for social closeness, approachability tendencies were probed across two levels of social interaction: talking versus befriending. The main results indicated that while overall level of approachability did not differ between groups, an important qualitative between-group difference emerged across the two social interaction contexts: whereas individuals with WS demonstrated a similar willingness to approach strangers across both experimental conditions, TD individuals were significantly more willing to talk to than to befriend strangers. In WS, high approachability to positive faces across both social interaction levels was further associated with more normal social functioning. A novel finding linked autonomic responses with willingness to befriend negative faces in the WS group: elevated autonomic responsivity was associated with increased affiliation to negative face stimuli, which may represent an autonomic correlate of approach behavior in WS. Implications for underlying organization of the social brain are discussed. PMID:26459097
A hybrid modeling approach for option pricing
NASA Astrophysics Data System (ADS)
Hajizadeh, Ehsan; Seifi, Abbas
2011-11-01
The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a pair of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. We then develop two non-parametric models, based on neural networks and neuro-fuzzy networks, to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
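The volatility input step can be illustrated with the GARCH(1,1) variance recursion, the simplest member of the GARCH family the abstract refers to. The parameter values and return series below are illustrative, not fitted to S&P 500 data, and the abstract does not say which three GARCH variants were used:

```python
# GARCH(1,1) conditional variance recursion:
#   sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]
# Parameters here are illustrative placeholders.
def garch11_variance(returns, omega, alpha, beta):
    """Return the conditional variance series implied by GARCH(1,1)."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at unconditional variance
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

rets = [0.01, -0.02, 0.015, -0.005]
var_series = garch11_variance(rets, omega=1e-6, alpha=0.1, beta=0.85)
```

In a hybrid pipeline of the kind described, a series like `var_series` would then feed the neural or neuro-fuzzy network as a feature alongside moneyness and time to maturity.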
Models of Protocellular Structure, Function and Evolution
NASA Technical Reports Server (NTRS)
New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.
2001-01-01
In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction. A mutagenic approach, in which the sequences of selected molecules are randomly altered, can yield further improvements in performance or alterations of specificities. Unfortunately, the catalytic potential of nucleic acids is rather limited. Proteins are more catalytically capable but cannot be directly amplified. In the new technique, this problem is circumvented by covalently linking each protein of the initial, diverse, pool to the RNA sequence that codes for it. Then, selection is performed on the proteins, but the nucleic acids are replicated. Additional information is contained in the original extended abstract.
A subgrid based approach for morphodynamic modelling
NASA Astrophysics Data System (ADS)
Volp, N. D.; van Prooijen, B. C.; Pietrzak, J. D.; Stelling, G. S.
2016-07-01
To improve the accuracy and the efficiency of morphodynamic simulations, we present a subgrid based approach for a morphodynamic model. This approach is well suited for areas characterized by sub-critical flow, such as estuaries, coastal areas, and lowland rivers. This new method uses a different grid resolution to compute the hydrodynamics and the morphodynamics. The hydrodynamic computations are carried out with a subgrid based, two-dimensional, depth-averaged model. This model uses a coarse computational grid in combination with a subgrid. The subgrid contains high resolution bathymetry and roughness information to compute volumes, friction and advection. The morphodynamic computations are carried out entirely on a high resolution grid, the bed grid. It is key to find a link between the information defined on the different grids in order to guarantee the feedback between the hydrodynamics and the morphodynamics. This link is made by using a new physics-based interpolation method. The method interpolates water levels and velocities from the coarse grid to the high resolution bed grid. The morphodynamic solution improves significantly when using the subgrid based method compared to a full coarse grid approach. The Exner equation is discretised with an upwind method based on the direction of the bed celerity. This ensures a stable solution for the Exner equation. By means of three examples, it is shown that the subgrid based approach offers a significant improvement at a minimal computational cost.
Computational Modeling of Mitochondrial Function
Cortassa, Sonia; Aon, Miguel A.
2012-01-01
The advent of techniques with the ability to scan massive changes in cellular makeup (genomics, proteomics, etc.) has revealed a compelling need for analytical methods to interpret and make sense of those changes. Computational models built on a sound physico-chemical, mechanistic basis are indispensable for integrating, interpreting, and simulating high-throughput experimental data. Another powerful role of computational models is predicting new behavior, provided they are adequately validated. Mitochondrial energy transduction has traditionally been studied with thermodynamic models. More recently, kinetic or thermo-kinetic models have been proposed, leading the path toward an understanding of the control and regulation of mitochondrial energy metabolism and its interaction with cytoplasmic and other compartments. In this work, we outline, step by step, the methods that should be followed to build a computational model of mitochondrial energetics in isolation or integrated into a network of cellular processes. Depending on the question addressed by the modeler, the methodology explained herein can be applied with different levels of detail, from the mitochondrial energy-producing machinery in a network of cellular processes to the dynamics of a single enzyme during its catalytic cycle. PMID:22057575
A Bayesian Shrinkage Approach for AMMI Models
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components; the selected models were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The model chosen by the posterior distribution of the singular values was likewise similar to that produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior
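The multiplicative part of a classical AMMI fit can be sketched as follows: the genotype-by-environment table is double-centered to isolate the interaction, then decomposed by SVD, keeping only the leading components. The Bayesian shrinkage of singular values that is the paper's contribution is not reproduced here, and the small table is synthetic:

```python
# Sketch of the multiplicative (interaction) step of an AMMI fit.
# Double-center the GxE table, then keep a rank-k SVD approximation.
# The Bayesian shrinkage step from the paper is intentionally omitted.
import numpy as np

def ammi_interaction(table, k=1):
    """Rank-k SVD approximation of the double-centered GxE table."""
    y = np.asarray(table, dtype=float)
    # remove genotype and environment main effects (double-centering)
    ge = y - y.mean(0) - y.mean(1)[:, None] + y.mean()
    u, s, vt = np.linalg.svd(ge, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]

table = [[5.0, 6.0, 7.0],   # rows: genotypes, columns: environments
         [6.0, 5.0, 9.0],
         [7.0, 8.0, 6.0]]
approx = ammi_interaction(table, k=1)
```

Choosing `k`, the number of multiplicative terms to keep, is exactly the model-selection question the shrinkage and cross-validation criteria in the abstract address.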
Bioactive Functions of Milk Proteins: a Comparative Genomics Approach.
Sharp, Julie A; Modepalli, Vengama; Enjapoori, Ashwanth Kumar; Bisana, Swathi; Abud, Helen E; Lefevre, Christophe; Nicholas, Kevin R
2014-12-01
The composition of milk includes factors required to provide appropriate nutrition for the growth of the neonate. However, it is now clear that milk has many functions and comprises bioactive molecules that play a central role in regulating developmental processes in the young while providing a protective function for both the suckled young and the mammary gland during the lactation cycle. Identifying these bioactives and their physiological function in eutherians can be difficult and requires extensive screening of milk components that may function to improve well-being and options for prevention and treatment of disease. New animal models with unique reproductive strategies are now becoming increasingly relevant to search for these factors.
Semitransparent one-dimensional potential: a Green's function approach
NASA Astrophysics Data System (ADS)
Maldonado-Villamizar, F. H.
2015-06-01
We study the unstable harmonic oscillator and the unstable linear potential in the presence of the point potential, which is the superposition of the Dirac δ(x) and its derivative δ′(x). Using the physical boundary conditions for the Green's function we derive for both systems the resonance poles and the resonance wave functions. The matching conditions for the resonance wave functions coincide with those obtained by the self-adjoint extensions of the point potentials and also by the modelling of the δ′(x) function. We find that, with our definitions, the pure bδ′(x) barrier is semitransparent, independent of the value of b.
Modeling approach for business process reengineering
NASA Astrophysics Data System (ADS)
Tseng, Mitchell M.; Chen, Yuliu
1995-08-01
The purpose of this paper is to introduce a modeling approach to define, simulate, animate, and control business processes, and to introduce the underlying methodology for building tools to design and manage them. Just as computer-aided design (CAD) tools exist for mechanical parts, CAD tools are needed for designing business processes. The approach emphasizes the dynamic behavior of business processes. The proposed modeling technique consists of a definition of each individual activity, the network of activities, a control mechanism that describes the coordination of these activities, and the events that flow through them. Based on the formalism introduced in this modeling technique, users are able to define a business process with minimum ambiguity, take snapshots of particular events in the process, describe the accountability of participants, and view a replay of event streams in the process flow. This modeling approach, implemented on top of commercial software, has been tested using examples of real-life business processes. The examples and testing helped us identify some of the strengths and weaknesses of the proposed approach.
An Evolutionary Computation Approach to Examine Functional Brain Plasticity
Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.
2016-01-01
One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
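The conventional ROI-level baseline the procedure improves on, correlation between the spatially averaged time series of two regions, can be sketched directly. The voxel-by-time arrays below are synthetic stand-ins for BOLD data, not from the study:

```python
# The averaging-based connectivity measure the abstract critiques:
# correlate the mean time series of two voxel x time ROI arrays.
# Data here are synthetic, built from a shared latent signal.
import numpy as np

def roi_connectivity(roi_a, roi_b):
    """Pearson correlation between the mean time series of two ROIs."""
    ts_a = roi_a.mean(axis=0)   # average over voxels
    ts_b = roi_b.mean(axis=0)
    return np.corrcoef(ts_a, ts_b)[0, 1]

rng = np.random.default_rng(0)
common = rng.standard_normal(100)                     # shared signal
roi_a = common + 0.5 * rng.standard_normal((10, 100)) # 10 voxels
roi_b = common + 0.5 * rng.standard_normal((12, 100)) # 12 voxels
r = roi_connectivity(roi_a, roi_b)
```

When only a small sub-region of each ROI carries the shared signal, this averaged correlation is diluted toward zero, which is the failure mode the voxel-level EC search is designed to avoid.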
Learning local objective functions for robust face model fitting.
Wimmer, Matthias; Stulp, Freek; Pietzsch, Sylvia; Radig, Bernd
2008-08-01
Model-based techniques have proven to be successful in interpreting the large amount of information contained in images. Associated fitting algorithms search for the global optimum of an objective function, which should correspond to the best model fit in a given image. Although fitting algorithms have been the subject of intensive research and evaluation, the objective function is usually designed ad hoc, based on implicit and domain-dependent knowledge. In this article, we address the root of the problem by learning more robust objective functions. First, we formulate a set of desirable properties for objective functions and give a concrete example function that has these properties. Then, we propose a novel approach that learns an objective function from training data generated by manual image annotations and this ideal objective function. In this approach, critical decisions such as feature selection are automated, and the remaining manual steps hardly require domain-dependent knowledge. Furthermore, an extensive empirical evaluation demonstrates that the learned objective functions are more robust: they enable fitting algorithms to determine the best model fit more accurately than designed objective functions do. PMID:18566491
Bayesian non-parametrics and the probabilistic approach to modelling
Ghahramani, Zoubin
2013-01-01
Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609
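One of the tools surveyed above, the Gaussian process prior over unknown functions, can be illustrated by drawing sample functions from it. The squared-exponential kernel and its hyperparameters are standard illustrative choices, not specifics from the article:

```python
# Draws from a Gaussian process prior with a squared-exponential kernel,
# the standard Bayesian non-parametric prior over unknown functions.
# Kernel hyperparameters are illustrative.
import numpy as np

def se_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance matrix between two input sets."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0.0, 5.0, 50)
K = se_kernel(x, x) + 1e-8 * np.eye(50)   # jitter for numerical stability
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(50), K, size=3)  # 3 functions
```

Each row of `samples` is one random smooth function; shrinking the lengthscale makes the draws wigglier, which is how such a prior expresses beliefs about an unknown function before seeing data.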
Quasielastic scattering with the relativistic Green’s function approach
Meucci, Andrea; Giusti, Carlotta
2015-05-15
A relativistic model for quasielastic (QE) lepton-nucleus scattering is presented. The effects of final-state interactions (FSI) between the ejected nucleon and the residual nucleus are described in the relativistic Green’s function (RGF) model where FSI are consistently described with exclusive scattering using a complex optical potential. The results of the model are compared with experimental results of electron and neutrino scattering.
Elements of a function analytic approach to probability.
Ghanem, Roger Georges; Red-Horse, John Robert
2008-02-01
We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.
Mayorga, René V; Carrera, Jonathan
2007-06-01
This paper presents an efficient approach for the fast computation of inverse continuous time-variant functions with the proper use of Radial Basis Function Networks (RBFNs). The approach is based on implementing RBFNs for computing inverse continuous time-variant functions via an overall damped least-squares solution that includes a novel null-space vector for singularity prevention. The singularity-avoidance null-space vector is derived from a sufficiency condition for singularity prevention that leads to some characterizing matrices and an associated performance index.
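The damped least-squares core of this approach is standard; a generic numpy sketch (not the authors' RBFN implementation — `dls_step`, the damping value, and the example matrix are all illustrative) shows how the damped particular solution tracks the task error while a null-space term adjusts the configuration without disturbing the task output:

```python
import numpy as np

def dls_step(J, err, z, damping=0.1):
    """One damped-least-squares update: the particular solution tracks the
    task error; the null-space component of z moves the system (e.g. away
    from singular configurations) without affecting the task output."""
    m = J.shape[0]
    JJt = J @ J.T + damping**2 * np.eye(m)
    particular = J.T @ np.linalg.solve(JJt, err)
    # null-space projector: directions that leave the task output unchanged
    N = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J
    return particular + N @ z

J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])   # 2 outputs, 3 inputs -> 1-dim null space
step = dls_step(J, np.array([0.1, -0.2]), z=np.array([0.0, 1.0, -1.0]))
```

By construction J applied to the null-space component vanishes, so the secondary objective never degrades the primary task.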
Penalized spline estimation for functional coefficient regression models
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z.
2011-01-01
The functional coefficient regression models assume that the regression coefficients vary with some “threshold” variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called “curse of dimensionality” in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using a mixed-model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, enabled by assigning different penalties λ accordingly. We demonstrate the proposed approach with both simulation examples and a real data application. PMID:21516260
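The basic P-spline computation — ridge-type shrinkage applied to a spline basis — can be sketched generically. A truncated-cubic basis with a diagonal penalty is one simple choice used here for illustration; the paper's basis, penalty structure, and λ-selection methods are more elaborate, and all names and values below are assumptions.

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    """Penalized-spline fit with a truncated-cubic basis: the penalty lam
    shrinks the truncated-power coefficients, exactly as in ridge
    regression, giving a global smoother with an explicit expression."""
    X = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                        [np.clip(x - k, 0, None) ** 3 for k in knots])
    # penalize only the truncated-power terms, not the cubic polynomial part
    D = np.diag([0.0] * 4 + [1.0] * len(knots))
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ beta, beta

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)
fitted, _ = pspline_fit(x, y, knots=np.linspace(0.1, 0.9, 8), lam=1e-4)
```

Because the fit is a linear solve, inference for fixed λ reduces to standard fixed-knot ridge theory, which is the point the abstract emphasizes.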
Accuracy of functional surfaces on comparatively modeled protein structures
Zhao, Jieling; Dundas, Joe; Kachalo, Sema; Ouyang, Zheng; Liang, Jie
2012-01-01
Identification and characterization of protein functional surfaces are important for predicting protein function, understanding enzyme mechanisms, and docking small compounds to proteins. As the speed at which protein sequence information accumulates far exceeds that of structures, constructing accurate models of protein functional surfaces and identifying their key elements becomes increasingly important. A promising approach is to build comparative models from sequences using known structural templates such as those obtained from structural genome projects. Here we assess how well this approach works in modeling binding surfaces. By systematically building three-dimensional comparative models of proteins using Modeller, we determine how accurately functional surfaces can be reproduced. We use an alpha-shape-based pocket algorithm to compute all pockets on the modeled structures, and conduct a large-scale computation of similarity measurements (pocket RMSD and fraction of functional atoms captured) for 26,590 modeled enzyme protein structures. Overall, we find that when the sequence fragment of the binding surface has more than 45% identity to that of the template protein, the modeled surfaces have on average an RMSD of 0.5 Å, and contain 48% or more of the binding surface atoms, with nearly all of the important atoms in the signatures of binding pockets captured. PMID:21541664
Neurocomputing approaches to modelling of drying process dynamics
Kaminski, W.; Strumillo, P.; Tomczak, E.
1998-07-01
The application of artificial neural networks to mathematical modelling of drying kinetics, degradation kinetics and smoothing of experimental data is discussed in the paper. A theoretical foundation of drying process description by means of artificial neural networks is presented. Two network types are proposed for drying process modelling, namely the multilayer perceptron network and the radial basis function network. These were validated experimentally for fresh green peas and diced potatoes, which represent diverse food products. Network training procedures based on experimental data are explained. Additionally, the proposed neural network modelling approach is tested on drying experiments of silica gel saturated with ascorbic acid solution.
New approaches to enhance active steering system functionalities: preliminary results
NASA Astrophysics Data System (ADS)
Serarslan, Benan
2014-09-01
An important development in steering systems is active steering, exemplified by active front steering and steer-by-wire systems. In this paper the current functional possibilities in the application of active steering systems are explored. A new approach and additional functionalities are presented that can be implemented in active steering systems without additional hardware such as new sensors and electronic control units. Commercial active steering systems control the steering angle depending on the driving situation only. This paper introduces methods for enhancing active steering system functionalities depending not only on the driving situation but also on vehicle parameters such as vehicle mass and tyre and road conditions. In this regard, adaptation of the steering ratio as a function of the above-mentioned vehicle parameters is presented with examples. For some selected vehicle parameter changes, the reduction of their undesired influences on vehicle dynamics has been demonstrated theoretically with simulations and with real-time driving measurements.
Computational approaches for rational design of proteins with novel functionalities.
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.
Quantum cluster approach to the spinful Haldane-Hubbard model
NASA Astrophysics Data System (ADS)
Wu, Jingxiang; Faye, Jean Paul Latyr; Sénéchal, David; Maciejko, Joseph
2016-02-01
We study the spinful fermionic Haldane-Hubbard model at half-filling using a combination of quantum cluster methods: cluster perturbation theory, the variational cluster approximation, and cluster dynamical mean-field theory. We explore possible zero-temperature phases of the model as a function of onsite repulsive interaction strength and next-nearest-neighbor hopping amplitude and phase. Our approach allows us to access the regime of intermediate interaction strength, where charge fluctuations are significant and effective spin model descriptions may not be justified. Our approach also improves upon mean-field solutions of the Haldane-Hubbard model by retaining local quantum fluctuations and treating them nonperturbatively. We find a correlated topological Chern insulator for weak interactions and a topologically trivial Néel antiferromagnetic insulator for strong interactions. For intermediate interactions, we find that topologically nontrivial Néel antiferromagnetic insulating phases and/or a topologically nontrivial nonmagnetic insulating phase may be stabilized.
Systems Engineering Interfaces: A Model Based Approach
NASA Technical Reports Server (NTRS)
Fosse, Elyse; Delp, Christopher
2013-01-01
Currently: Ops Rev developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation de-couples interfaces and instances of interaction. Future: A Mission MOSE implements the approach and uses the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.
Algebraic operator approach to gas kinetic models
NASA Astrophysics Data System (ADS)
Il'ichov, L. V.
1997-02-01
Some general properties of the linear Boltzmann kinetic equation are used to present it in the form ∂_tϕ = -Â†Âϕ, with the operators Â and Â† possessing some nontrivial algebraic properties. When applied to the Keilson-Storer kinetic model, this method gives an example of a quantum (q-deformed) Lie algebra. This approach also provides a natural generalization of the “kangaroo model”.
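The decay implied by the form ∂_tϕ = -Â†Âϕ can be checked numerically in a toy setting: for any operator Â, the generator Â†Â is Hermitian and positive semidefinite, so the norm of ϕ(t) never grows. The sketch below uses a generic random matrix, not the Keilson-Storer model itself.

```python
import numpy as np

# A generic (here random) matrix stands in for the operator Â; Â†Â is
# Hermitian positive semidefinite, so phi(t) = exp(-Â†Â t) phi0 cannot grow.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = A.conj().T @ A                      # Â†Â, Hermitian PSD

w, V = np.linalg.eigh(H)                # spectral decomposition of Â†Â

def evolve(phi0, t):
    """phi(t) = V exp(-w t) V† phi0 solves d(phi)/dt = -Â†Â phi."""
    return V @ (np.exp(-w * t) * (V.conj().T @ phi0))

phi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
norms = [np.linalg.norm(evolve(phi0, t)) for t in (0.0, 0.5, 1.0)]
# norms is non-increasing: every eigenvalue of Â†Â is non-negative
```

The monotone decay is exactly what makes the factorized form a natural way to encode relaxation toward equilibrium.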
Exact Approach to Inflationary Universe Models
NASA Astrophysics Data System (ADS)
del Campo, Sergio
In this chapter we introduce a study of inflationary universe models that are characterized by a single scalar inflaton field. The study of these models is based on two dynamical equations: one corresponding to the Klein-Gordon equation for the inflaton field and the other to a generalized Friedmann equation. After describing the kinematics and dynamics of the models under the Hamilton-Jacobi scheme, we determine in some detail scalar density perturbations and relic gravitational waves. We also introduce the study of inflation under the hierarchy of the slow-roll parameters together with the flow equations. We apply this approach to the modified Friedmann equation that we call the Friedmann-Chern-Simons equation, characterized by F(H) = H^2 - αH^4, and to the brane-world inflationary models expressed by the modified Friedmann equation.
Multichip reticle approach for OPC model verification
NASA Astrophysics Data System (ADS)
Taravade, Kunal N.; Belova, Nadya; Jost, Andrew M.; Callan, Neal P.
2003-12-01
The complexity of current semiconductor technology due to shrinking feature sizes demands ever more engineering effort and expense to deliver the final product to customers. One of the largest expenses in the entire budget is reticle manufacturing. With the need to perform mask correction to account for optical proximity effects at the wafer level, reticle expenses have become even more critical. For 0.13um technology one cannot avoid an optical proximity correction (OPC) procedure for modifying original designs to comply with design rules as required by Front End (FE) and Back End (BE) processes. Once an OPC model is generated, one needs to confirm and verify it with additional test reticles for every critical layer of the technology. Such a verification procedure would include the most critical layers (two FE layers and four BE layers for the 0.13 technology node). This allows us to evaluate model performance under the real production conditions encountered on customer designs. At LSI we have developed and verified the low volume reticle (LVR) approach for verification of different OPC models. The proposed approach allows performing die-to-die reticle defect inspection in addition to checking the printed image on the wafer. It helps finalize litho and etch process parameters. Processing wafers with overlaying masks for two consecutive BE layers (via and metal2 masks) allowed us to evaluate the robustness of OPC models for a wafer stack against both reticle- and wafer-induced misalignments.
Assessment of Cardiac Function--Basic Principles and Approaches.
Spinale, Francis G
2015-09-20
Increased access and ability to visualize the heart have provided a means to measure a myriad of cardiovascular parameters in real or near real time. However, without fundamental knowledge regarding the basis for cardiac contraction and how to evaluate cardiac function in terms of loading conditions and inotropic state, appropriate interpretation of these cardiovascular parameters can be difficult and can lead to misleading conclusions regarding the functional state of the cardiac muscle. Thus, in this series of Comprehensive Physiology, the basic properties of cardiac muscle function, the cardiac cycle, and determinants of pump function will be reviewed. These basic concepts will then be integrated by presenting approaches in which the effects of preload, afterload, and myocardial contractility can be examined. Moreover, the utility of the pressure-volume relation in terms of assessing both myocardial contractility as well as critical aspects of diastolic performance will be presented. Finally, a generalized approach for the assessment and interpretation of cardiac function within the intact cardiovascular system will be presented.
Green-function approach for scattering quantum walks
Andrade, F. M.; Luz, M. G. E. da
2011-10-15
In this work a Green-function approach for scattering quantum walks is developed. The exact formula has the form of a sum over paths and can always be cast into a closed analytic expression for arbitrary topologies and position-dependent quantum amplitudes. By introducing the step and path operators, it is shown how to extract any information about the system from the Green function. The method's relevant features are demonstrated by discussing in detail an example, a general diamond-shaped graph.
NASA Astrophysics Data System (ADS)
Auger, P. A.; Diaz, F.; Ulses, C.; Estournel, C.; Neveux, J.; Joux, F.; Pujo-Pay, M.; Naudin, J. J.
2010-12-01
Low-salinity water (LSW, Salinity < 37.5) lenses detached from the Rhone River plume under specific wind conditions tend to favour the biological productivity and potentially a transfer of energy to higher trophic levels on the Gulf of Lions (GoL). A field cruise conducted in May 2006 (BIOPRHOFI) followed some LSW lenses by using a lagrangian strategy. A thorough analysis of the available data set enabled us to further improve our understanding of the LSW lenses' functioning and their potential influence on marine ecosystems. Through an innovative 3-D coupled hydrodynamic-biogeochemical modelling approach, a specific calibration dedicated to river plume ecosystems was then proposed and validated on field data. Exploring the role of ecosystems in the particulate organic carbon (POC) export and deposition on the shelf, a sensitivity analysis to the particulate organic matter inputs from the Rhone River was carried out from 1 April to 15 July 2006. Over such a typical end-of-spring period marked by moderate floods, the main deposition area of POC was identified alongshore between 0 and 50 m depth on the GoL, extending the Rhone prodelta to the west towards the exit of the shelf. Moreover, the main deposition area of terrestrial POC was found in the prodelta region, which confirms recent results from sediment data. The average daily deposition of particulate organic carbon over the whole GoL is estimated by the model at between 40 and 80 mgC/m2, which is in the range of previous estimates. The role of ecosystems in the POC export toward sediments or offshore areas was thus highlighted, and feedbacks between ecosystems and particulate organic matter are proposed to explain paradoxical model results in the sensitivity test. In fact, the conversion of organic matter into living organisms would increase the retention of organic matter in the food web, and this matter transfer along the food web could explain the minor quantity of POC of marine origin observed in the
Fuzzy set approach to quality function deployment: An investigation
NASA Technical Reports Server (NTRS)
Masud, Abu S. M.
1992-01-01
The final report of the 1992 NASA/ASEE Summer Faculty Fellowship at the Space Exploration Initiative Office (SEIO) in Langley Research Center is presented. Quality Function Deployment (QFD) is a process focused on facilitating the integration of the customer's voice in the design and development of a product or service. Various inputs, in the form of judgements and evaluations, are required during the QFD analyses. All the input variables in these analyses are treated as numeric variables. The purpose of the research was to investigate how QFD analyses can be performed when some or all of the input variables are treated as linguistic variables with values expressed as fuzzy numbers. The reason for this consideration is that human judgement, perception, and cognition are often ambiguous and are better represented as fuzzy numbers. Two approaches for using fuzzy sets in QFD have been proposed. In both cases, all the input variables are considered as linguistic variables with values indicated as linguistic expressions. These expressions are then converted to fuzzy numbers. The difference between the two approaches is due to how the QFD computations are performed with these fuzzy numbers. In Approach 1, the fuzzy numbers are first converted to their equivalent crisp scores and then the QFD computations are performed using these crisp scores. As a result, the outputs of this approach are crisp numbers, similar to those in traditional QFD. In Approach 2, all the QFD computations are performed with the fuzzy numbers and the outputs are fuzzy numbers as well. Both approaches have been explained with the help of illustrative examples of QFD application. Approach 2 has also been applied in a QFD application exercise in SEIO, involving a 'mini moon rover' design. The mini moon rover is a proposed tele-operated vehicle that will traverse and perform various tasks, including autonomous operations, on the moon surface. The output of the moon rover application exercise is a
Approaches to modelling hydrology and ecosystem interactions
NASA Astrophysics Data System (ADS)
Silberstein, Richard P.
2014-05-01
As the pressures of industry, agriculture and mining on groundwater resources increase, there is a burgeoning unmet need to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of the impacts of water use intensification on water-dependent ecosystems under a changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water-related stresses that may develop around mining and well operations, climate stresses such as rainfall and temperature, biological stresses such as diseases and invasive species, and competition such as encroachment from other competing land uses. An indirect water impact could be, for example, a change in groundwater conditions that affects the stream flow regime, and hence aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology, with a view to developing a conceptual framework linking a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.
Connectotyping: model based fingerprinting of the functional connectome.
Miranda-Dominguez, Oscar; Mills, Brian D; Carpenter, Samuel D; Grant, Kathleen A; Kroenke, Christopher D; Nigg, Joel T; Fair, Damien A
2014-01-01
A better characterization of how an individual's brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called "connectotype", or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model's ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach.
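The underlying linear model — each region's timeseries expressed as a weighted sum of the other regions' timeseries — can be sketched with ordinary least squares. The data below are synthetic, and the study's rs-fcMRI preprocessing and validation pipeline are not reproduced; only the core regression idea is shown.

```python
import numpy as np

def connectotype(ts):
    """Fit, for each region, its activity as a weighted sum of all other
    regions' activity (least squares). Returns a coefficient matrix with a
    zero diagonal: row i holds the weights that predict region i."""
    n_t, n_r = ts.shape
    W = np.zeros((n_r, n_r))
    for i in range(n_r):
        others = [j for j in range(n_r) if j != i]
        coef, *_ = np.linalg.lstsq(ts[:, others], ts[:, i], rcond=None)
        W[i, others] = coef
    return W

rng = np.random.default_rng(1)
ts = rng.standard_normal((200, 5))         # 200 timepoints, 5 regions
ts[:, 0] = 0.5 * ts[:, 1] - 0.25 * ts[:, 2]  # region 0 is a known mixture
W = connectotype(ts)
# row 0 of W recovers the mixing weights for region 0
```

The rows of W form the personalized connectivity matrix; applying W to new frames yields the timeseries predictions that the abstract describes.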
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
A Multi-Level Model of Moral Functioning Revisited
ERIC Educational Resources Information Center
Reed, Don Collins
2009-01-01
The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…
Muñoz-Martínez, Amanda M; Coletti, Juan Pablo
2015-01-01
Functional Analytic Psychotherapy (FAP) is a therapeutic approach developed in
Gauge-invariant Green function dynamics: A unified approach
Swiecicki, Sylvia D.; Sipe, J. E.
2013-11-15
We present a gauge-invariant description of Green function dynamics introduced by means of a generalized Peierls phase involving an arbitrary differentiable path in space–time. Two other approaches to formulating a gauge-invariant description of systems, the Green function treatment of Levanda and Fleurov [M. Levanda, V. Fleurov, J. Phys.: Condens. Matter 6 (1994) 7889] and the usual multipolar expansion for an atom, are shown to arise as special cases of our formalism. We argue that the consideration of paths in the generalized Peierls phase that do not lead to the introduction of an effective gauge-invariant Hamiltonian with polarization and magnetization fields may prove useful for the treatment of the response of materials with short electron correlation lengths. -- Highlights: •Peierls phase for an arbitrary path in space–time established. •Gauge-invariant Green functions and the Power–Zienau–Woolley transformation connected. •Limitations on possible polarization and magnetization fields established.
Merging Digital Surface Models Implementing Bayesian Approaches
NASA Astrophysics Data System (ADS)
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and acquiring many measurements is difficult or very costly; the lack of data can then be compensated by introducing a priori estimations. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and improved some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
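A per-cell Gaussian fusion with a prior height — a much-simplified stand-in for the paper's entropy-based Bayesian model — illustrates how a prior compensates for limited measurements. All heights, variances, and the function name below are made up for illustration.

```python
import numpy as np

def fuse_dsm(z1, var1, z2, var2, prior_mean, prior_var):
    """Per-cell conjugate Gaussian fusion of two DSM height grids with a
    prior height surface; returns posterior mean and posterior variance.
    Each source is weighted by its precision (inverse variance)."""
    prec = 1.0 / var1 + 1.0 / var2 + 1.0 / prior_var
    mean = (z1 / var1 + z2 / var2 + prior_mean / prior_var) / prec
    return mean, 1.0 / prec

z1 = np.array([[10.2, 10.1], [9.9, 10.0]])   # heights from sensor 1 (m)
z2 = np.array([[10.6, 10.3], [10.1, 10.4]])  # heights from sensor 2 (m)
mean, var = fuse_dsm(z1, 1.0, z2, 1.0, prior_mean=10.0, prior_var=4.0)
# posterior variance is below every input variance: fusion always helps
```

A weak prior (large prior_var) leaves the result close to the sensor average; where sensors disagree or are missing, the prior dominates, which is the role the paper's smooth-roof prior plays.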
On an approach for computing the generating functions of the characters of simple Lie algebras
NASA Astrophysics Data System (ADS)
Fernández Núñez, José; García Fuertes, Wifredo; Perelomov, Askold M.
2014-04-01
We describe a general approach to obtain the generating functions of the characters of simple Lie algebras which is based on the theory of the quantum trigonometric Calogero-Sutherland model. We show how the method works in practice by means of a few examples involving some low rank classical algebras.
Data Mining Approaches for Modeling Complex Electronic Circuit Design Activities
Kwon, Yongjin; Omitaomu, Olufemi A; Wang, Gi-Nam
2008-01-01
A printed circuit board (PCB) is an essential part of modern electronic circuits. It is made of a flat panel of insulating material with patterned copper foils that act as electric pathways for various components such as ICs, diodes, capacitors, resistors, and coils. The size of PCBs has been shrinking over the years, while the number of components mounted on these boards has increased considerably. This trend makes the design and fabrication of PCBs ever more difficult. At the beginning of design cycles, it is important to accurately estimate the time required to complete the necessary steps, based on many factors such as the required parts, approximate board size and shape, and a rough sketch of schematics. The current approach uses the multiple linear regression (MLR) technique for time and cost estimation. However, the need for accurate predictive models continues to grow as the technology becomes more advanced. In this paper, we analyze a large volume of historical PCB design data, extract some important variables, and develop predictive models based on the extracted variables using a data mining approach. The data mining approach uses an adaptive support vector regression (ASVR) technique; the benchmark model is the MLR technique currently used in the industry. The strengths of SVR for these data include its ability to represent data in high-dimensional space through kernel functions. The computational results show that the data mining approach is the better prediction technique for these data. Our approach reduces computation time and enhances the practical applications of the SVR technique.
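The advantage of kernel methods over MLR on nonlinear response data can be illustrated with kernel ridge regression, a close relative of SVR that shares its use of kernel functions to represent data in high-dimensional space. This is a stand-in, not the paper's ASVR; data, hyperparameters, and function names are synthetic.

```python
import numpy as np

def mlr_fit_predict(X, y, Xq):
    """Ordinary multiple linear regression (the benchmark model)."""
    Xa = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return np.column_stack([np.ones(len(Xq)), Xq]) @ beta

def krr_fit_predict(X, y, Xq, gamma=1.0, lam=1e-3):
    """Kernel ridge regression with an RBF kernel: like SVR, predictions
    are kernel expansions over the training points."""
    def K(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    alpha = np.linalg.solve(K(X, X) + lam * np.eye(len(X)), y)
    return K(Xq, X) @ alpha

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(X[:, 0] * 2)                      # nonlinear target
mlr_err = np.abs(mlr_fit_predict(X, y, X) - y).mean()
krr_err = np.abs(krr_fit_predict(X, y, X) - y).mean()
# the kernel model fits the nonlinear response far better than MLR
```

The gap between the two errors is the kind of improvement the paper reports when replacing MLR with a kernel-based regressor on complex design data.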
Density functional calculations on model tyrosyl radicals.
Himo, F; Gräslund, A; Eriksson, L A
1997-01-01
A gradient-corrected density functional theory approach (PWP86) has been applied, together with large basis sets (IGLO-III), to investigate the structure and hyperfine properties of model tyrosyl free radicals. In nature, these radicals are observed in, e.g., the charge transfer pathways in photosystem II (PSII) and in ribonucleotide reductases (RNRs). By comparing spin density distributions and proton hyperfine couplings with experimental data, it is confirmed that the tyrosyl radicals present in the proteins are neutral. It is shown that hydrogen bonding to the phenoxyl oxygen atom, when present, causes a reduction in spin density on O and a corresponding increase on C4. Calculated proton hyperfine coupling constants for the beta-protons show that the alpha-carbon is rotated 75-80 degrees out of the plane of the ring in PSII and Salmonella typhimurium RNR, but only 20-30 degrees in, e.g., Escherichia coli, mouse, herpes simplex, and bacteriophage T4-induced RNRs. Furthermore, based on the present calculations, we have revised the empirical parameters used in the experimental determination of the oxygen spin density in the tyrosyl radical in E. coli RNR and of the ring carbon spin densities, from measured hyperfine coupling constants. PMID:9083661
Bioactive Functions of Milk Proteins: a Comparative Genomics Approach.
Sharp, Julie A; Modepalli, Vengama; Enjapoori, Ashwanth Kumar; Bisana, Swathi; Abud, Helen E; Lefevre, Christophe; Nicholas, Kevin R
2014-12-01
The composition of milk includes factors required to provide appropriate nutrition for the growth of the neonate. However, it is now clear that milk has many functions and comprises bioactive molecules that play a central role in regulating developmental processes in the young while providing a protective function for both the suckled young and the mammary gland during the lactation cycle. Identifying these bioactives and their physiological function in eutherians can be difficult and requires extensive screening of milk components that may function to improve well-being and options for prevention and treatment of disease. New animal models with unique reproductive strategies are now becoming increasingly relevant to search for these factors. PMID:26115887
Two Approaches to Modelling and Forecast
NASA Astrophysics Data System (ADS)
Bezruchko, Boris P.; Smirnov, Dmitry A.
Before creating a model, one should specify one's intentions with respect to its predictive ability, since this choice determines which mathematical tools are appropriate. If one does not aim at a precise and unique forecast of future states, then a probabilistic approach is traditionally used. Some quantities describing the object under investigation are then declared random, i.e. fundamentally unpredictable, stochastic. Such a "verdict" may be based on different reasoning (Sect. 2.2), but once it is accepted, one uses the machinery of probability theory and mathematical statistics.
The impact of legislation on divorce: a hazard function approach.
Kidd, M P
1995-01-01
"The paper examines the impact of the introduction of no-fault divorce legislation in Australia. The approach used is rather novel, a hazard model of the divorce rate is estimated with the role of legislation captured via a time-varying covariate. The paper concludes that contrary to U.S. empirical evidence, no-fault divorce legislation appears to have had a positive impact upon the divorce rate in Australia."
A functional approach to movement analysis and error identification in sports and physical education
Hossner, Ernst-Joachim; Schiebl, Frank; Göhner, Ulrich
2015-01-01
In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is not anymore the “ideal” movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognizing constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioral shaping, to cognitive models of modular effect-related motor control as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognizing functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors. PMID:26441717
Statistical modeling approach for detecting generalized synchronization
NASA Astrophysics Data System (ADS)
Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon
2012-05-01
Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex.
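The modeling pipeline described above can be sketched in miniature. The sketch below replaces the paper's spline-based Volterra kernels and chaotic test systems with raw lagged monomials and a simple synthetic quadratic coupling; it only illustrates the core idea that l1-regularized regression on truncated Volterra features yields a sparse interaction model.

```python
# Toy sketch (assumed setup, not the paper's): fit a truncated second-order
# Volterra expansion by l1-regularized (Lasso) regression so that irrelevant
# kernel coefficients are driven to zero.
import numpy as np
from itertools import combinations_with_replacement
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
x = rng.normal(size=500)
# Response generated by a memory-2, second-order Volterra system plus noise.
y = 0.8 * x[1:-1] + 0.5 * x[:-2] * x[1:-1] + 0.05 * rng.normal(size=498)

# First-order features: x[t-1], x[t-2]; second-order: all their products.
lags = np.column_stack([x[1:-1], x[:-2]])
quad = np.column_stack([lags[:, i] * lags[:, j]
                        for i, j in combinations_with_replacement(range(2), 2)])
features = np.hstack([lags, quad])   # columns: x1, x2, x1^2, x1*x2, x2^2

model = Lasso(alpha=0.01).fit(features, y)
print(model.coef_)  # only the two true kernel coefficients survive
```

The l1 penalty performs the feature selection the abstract mentions: the estimated coefficients concentrate on the linear `x[t-1]` term and the `x[t-1]*x[t-2]` cross-kernel actually present in the generating system.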
A Facile Approach to Functionalize Cell Membrane-Coated Nanoparticles
Zhou, Hao; Fan, Zhiyuan; Lemons, Pelin K.; Cheng, Hao
2016-01-01
Convenient strategies to provide cell membrane-coated nanoparticles (CM-NPs) with multiple functionalities beyond the natural function of cell membranes would dramatically expand the applications of this emerging class of nanomaterials. We have developed a facile approach to functionalize CM-NPs by chemically modifying live cell membranes prior to CM-NP fabrication using a bifunctional linker, succinimidyl-[(N-maleimidopropionamido)-polyethyleneglycol] ester (NHS-PEG-Maleimide). This method is particularly suitable for conjugating large bioactive molecules such as proteins on cell membranes, as it establishes a strong anchorage and enables control of the linker length, a critical parameter for maximizing the function of anchored proteins. As a proof of concept, we show the conjugation of human recombinant hyaluronidase PH20 (rHuPH20) on red blood cell (RBC) membranes and demonstrate that a long linker (MW: 3400) is superior to a short linker (MW: 425) for maintaining enzyme activity while minimizing changes to the cell membranes. When the modified membranes were fabricated into RBC membrane-coated nanoparticles (RBCM-NPs), the conjugated rHuPH20 assisted NP diffusion more efficiently than free rHuPH20 in matrix-mimicking gels and in the pericellular hyaluronic acid matrix of PC3 prostate cancer cells. After quenching the unreacted chemical groups with polyethylene glycol, we demonstrated that the rHuPH20 modification does not reduce the ultra-long blood circulation time of RBCM-NPs. Therefore, this surface engineering approach provides a platform to functionalize CM-NPs without sacrificing the natural function of cell membranes. PMID:27217834
A function-based approach to cockpit procedure aids
NASA Technical Reports Server (NTRS)
Phatak, Anil V.; Jain, Parveen; Palmer, Everett
1990-01-01
The objective of this research is to develop and test a cockpit procedural aid that can compose and present procedures that are appropriate for the given flight situation. The procedure would indicate the status of the aircraft engineering systems, and the environmental conditions. Prescribed procedures already exist for normal as well as for a number of non-normal and emergency situations, and can be presented to the crew using an interactive cockpit display. However, no procedures are prescribed or recommended for a host of plausible flight situations involving multiple malfunctions compounded by adverse environmental conditions. Under these circumstances, the cockpit procedural aid must review the prescribed procedures for the individual malfunction (when available), evaluate the alternatives or options, and present one or more composite procedures (prioritized or unprioritized) in response to the given situation. A top-down function-based conceptual approach towards composing and presenting cockpit procedures is being investigated. This approach is based upon the thought process that an operating crew must go through while attempting to meet the flight objectives given the current flight situation. In order to accomplish the flight objectives, certain critical functions must be maintained during each phase of the flight, using the appropriate procedures or success paths. The viability of these procedures depends upon the availability of required resources. If resources available are not sufficient to meet the requirements, alternative procedures (success paths) using the available resources must be constructed to maintain the critical functions and the corresponding objectives. If no success path exists that can satisfy the critical functions/objectives, then the next level of critical functions/objectives must be selected and the process repeated. Information is given in viewgraph form.
Garcia-Aldea, David; Alvarellos, J. E.
2008-02-15
We propose a kinetic energy density functional scheme with nonlocal terms based on the von Weizsaecker functional, instead of the more traditional approach where the nonlocal terms have the structure of the Thomas-Fermi functional. The proposed functionals recover the exact kinetic energy and reproduce the linear response function of homogeneous electron systems. In order to assess their quality, we have tested the total kinetic energies as well as the kinetic energy densities for atoms. The results show that these nonlocal functionals give results as good as those of the most sophisticated functionals in the literature. The proposed scheme for constructing the functionals represents a step forward in the field of fully nonlocal kinetic energy functionals, because they are capable of better local behavior than the semilocal functionals while yielding accurate total kinetic energies. Moreover, the functionals can be evaluated as a single integral in momentum space if an adequate reference density is defined, so that quasilinear scaling of the computational cost can be achieved.
Das, Sayoni; Lee, David; Sillitoe, Ian; Dawson, Natalie L.; Lees, Jonathan G.; Orengo, Christine A.
2015-01-01
Motivation: Computational approaches that can predict protein functions are essential to bridge the widening function annotation gap especially since <1.0% of all proteins in UniProtKB have been experimentally characterized. We present a domain-based method for protein function classification and prediction of functional sites that exploits functional sub-classification of CATH superfamilies. The superfamilies are sub-classified into functional families (FunFams) using a hierarchical clustering algorithm supervised by a new classification method, FunFHMMer. Results: FunFHMMer generates more functionally coherent groupings of protein sequences than other domain-based protein classifications. This has been validated using known functional information. The conserved positions predicted by the FunFams are also found to be enriched in known functional residues. Moreover, the functional annotations provided by the FunFams are found to be more precise than other domain-based resources. FunFHMMer currently identifies 110 439 FunFams in 2735 superfamilies which can be used to functionally annotate > 16 million domain sequences. Availability and implementation: All FunFam annotation data are made available through the CATH webpages (http://www.cathdb.info). The FunFHMMer webserver (http://www.cathdb.info/search/by_funfhmmer) allows users to submit query sequences for assignment to a CATH FunFam. Contact: sayoni.das.12@ucl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26139634
Franck, Christopher T; Koffarnus, Mikhail N; House, Leanna L; Bickel, Warren K
2015-01-01
The study of delay discounting, or valuation of future rewards as a function of delay, has contributed to understanding the behavioral economics of addiction. Accurate characterization of discounting can be furthered by statistical model selection given that many functions have been proposed to measure future valuation of rewards. The present study provides a convenient Bayesian model selection algorithm that selects the most probable discounting model among a set of candidate models chosen by the researcher. The approach assigns the most probable model for each individual subject. Importantly, effective delay 50 (ED50) functions as a suitable unifying measure that is computable for and comparable between a number of popular functions, including both one- and two-parameter models. The combined model selection/ED50 approach is illustrated using empirical discounting data collected from a sample of 111 undergraduate students with models proposed by Laibson (1997); Mazur (1987); Myerson & Green (1995); Rachlin (2006); and Samuelson (1937). Computer simulation suggests that the proposed Bayesian model selection approach outperforms the single model approach when data truly arise from multiple models. When a single model underlies all participant data, the simulation suggests that the proposed approach fares no worse than the single model approach.
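The role of ED50 as a unifying measure across discounting models can be made concrete. The sketch below is our own minimal illustration, not the authors' algorithm: for Mazur's hyperbolic model V = A/(1 + kD) the ED50 is 1/k analytically, and for Samuelson's exponential model V = A·exp(-kD) it is ln(2)/k; a generic bisection solver recovers both, which is what makes ED50 computable for models without closed forms as well.

```python
# Minimal sketch (assumed code, illustrating the ED50 concept only):
# ED50 is the delay at which a normalized discounting function crosses 0.5.
import math

def mazur(D, k):        # hyperbolic (Mazur, 1987): V = A / (1 + k D), A = 1
    return 1.0 / (1.0 + k * D)

def samuelson(D, k):    # exponential (Samuelson, 1937): V = A * exp(-k D)
    return math.exp(-k * D)

def ed50(v, lo=0.0, hi=1e6, iters=200):
    """Bisection for the delay where the decreasing function v crosses 0.5."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if v(mid) > 0.5:
            lo = mid        # still above half value: root lies further out
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = 0.25
print(ed50(lambda D: mazur(D, k)))      # analytic ED50 = 1/k = 4.0
print(ed50(lambda D: samuelson(D, k)))  # analytic ED50 = ln(2)/k ≈ 2.77
```

Because both one- and two-parameter models reduce to a single delay value this way, fits from different candidate models become directly comparable, as the abstract describes.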
Nuclear level density: Shell-model approach
NASA Astrophysics Data System (ADS)
Sen'kov, Roman; Zelevinsky, Vladimir
2016-06-01
Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.
Functional analytic psychotherapy: a behavioral relational approach to treatment.
Tsai, Mavis; Yard, Samantha; Kohlenberg, Robert J
2014-09-01
Functional analytic psychotherapy (FAP) is a relational approach to psychotherapy that is behavioral, yet involves an intensive, emotional, and in-depth therapy experience. FAP is approachable by therapists of diverse theoretical backgrounds owing to the minimal use of behavioral jargon, and can be used as an addition or complement to other interventions. The methods described in this article-being aware of clients' clinically relevant behaviors, being courageous in evoking clinically relevant behaviors, reinforcing improvements with therapeutic love, using behavioral interpretations to help clients generalize changes to daily life, and providing intensive and personal experiential training of FAP practitioners-maximize the impact of the therapeutic relationship to promote change and personal growth for both clients and therapists.
An ecological approach to language development: an alternative functionalism.
Dent, C H
1990-11-01
I argue for a new functionalist approach to language development, an ecological approach. A realist orientation is used that locates the causes of language development neither in the child nor in the language environment but in the functioning of perceptual systems that detect language-world relationships and use them to guide attention and action. The theory requires no concept of innateness, thus avoiding problems inherent in either the innate ideas or the genes-as-causal-programs explanations of the source of structure in language. An ecological explanation of language is discussed in relation to concepts and language, language as representation, problems in early word learning, metaphor, and syntactic development. Finally, problems incurred in using the idea of innateness are summarized: History prior to the chosen beginning point is ignored, data on organism-environment mutuality are not collected, and the explanation claims no effect of learning, which cannot be tested empirically.
Sensorimotor integration for functional recovery and the Bobath approach.
Levin, Mindy F; Panturin, Elia
2011-04-01
Bobath therapy is used to treat patients with neurological disorders. Bobath practitioners use hands-on approaches to elicit and reestablish typical movement patterns through therapist-controlled sensorimotor experiences within the context of task accomplishment. One aspect of Bobath practice, the recovery of sensorimotor function, is reviewed within the framework of current motor control theories. We focus on the role of sensory information in movement production, the relationship between posture and movement and concepts related to motor recovery and compensation with respect to this therapeutic approach. We suggest that a major barrier to the evaluation of the therapeutic effectiveness of the Bobath concept is the lack of a unified framework for both experimental identification and treatment of neurological motor deficits. More conclusive analysis of therapeutic effectiveness requires the development of specific outcomes that measure movement quality. PMID:21628730
Krabbe's leukodystrophy: Approaches and models in vitro.
Avola, Rosanna; Graziano, Adriana Carol Eleonora; Pannuzzo, Giovanna; Alvares, Elisa; Cardile, Venera
2016-11-01
This review describes some in vitro approaches used to investigate the mechanisms involved in Krabbe's disease, with particular regard to the cellular systems employed to study processes of inflammation, apoptosis, and angiogenesis. The aim was to update the knowledge on the results obtained from in vitro models of this neurodegenerative disorder and provide stimuli for future research. For a long time, the nonavailability of established neural cells limited the understanding of neuropathogenic mechanisms in Krabbe's leukodystrophy. More recently, the development of new Krabbe's disease cell models has allowed the identification of neurologically relevant pathogenic cascades, including the major role of elevated psychosine levels. Thus, the direct and/or indirect roles of psychosine in the release of cytokines, reactive oxygen species, and nitric oxide, and in the activation of kinases, caspases, and angiogenic factors, should now be clearer. In parallel, it is now understood that the appearance of globoid cells precedes oligodendrocyte apoptosis and demyelination. The information described here will help to continue the research on Krabbe's leukodystrophy and on potential new therapeutic approaches for this disease, which even today, despite numerous attempts, remains without a cure. © 2016 Wiley Periodicals, Inc. PMID:27638610
The fruits of a functional approach for psychological science.
Stewart, Ian
2016-02-01
The current paper introduces relational frame theory (RFT) as a functional contextual approach to complex human behaviour and examines how this theory has contributed to our understanding of several key phenomena in psychological science. I will first briefly outline the philosophical foundation of RFT and then examine its conceptual basis and core concepts. Thereafter, I provide an overview of the empirical findings and applications that RFT has stimulated in a number of key domains such as language development, linguistic generativity, rule-following, analogical reasoning, intelligence, theory of mind, psychopathology and implicit cognition. PMID:26103949
Current Approaches on Viral Infection: Proteomics and Functional Validations
Zheng, Jie; Tan, Boon Huan; Sugrue, Richard; Tang, Kai
2012-01-01
Viruses can manipulate cellular machinery to ensure their continuous survival and thus become parasites of living organisms. Delineating the sophisticated host responses upon virus infection is a challenging task. It lies in identifying the repertoire of host factors actively involved in the viral infectious cycle and characterizing host responses qualitatively and quantitatively during viral pathogenesis. Mass spectrometry-based proteomics can be used to efficiently study pathogen-host interactions and virus-hijacked cellular signaling pathways. Moreover, direct host and viral responses upon infection can be further investigated by activity-based functional validation studies. These approaches involve drug inhibition of the secretory pathway, immunofluorescence staining, dominant negative mutants of protein targets, real-time PCR, small interfering RNA (siRNA)-mediated knockdown, and molecular cloning studies. In this way, functional validation can yield novel insights into high-content proteomic datasets in an unbiased and comprehensive way. PMID:23162545
Direction-dependent learning approach for radial basis function networks.
Singla, Puneet; Subbarao, Kamesh; Junkins, John L
2007-01-01
Direction-dependent scaling, shaping, and rotation of Gaussian basis functions are introduced for maximal trend sensing with minimal parameter representations for input-output approximation. It is shown that shaping and rotation of the radial basis functions help reduce the total number of function units required to approximate any given input-output data, while improving accuracy. Several alternative formulations that enforce minimal parameterization of the most general radial basis functions are presented. A novel "directed graph" based algorithm is introduced to facilitate intelligent direction-based learning and adaptation of the parameters appearing in the radial basis function network. Further, a parameter estimation algorithm is incorporated to establish starting estimates for the model parameters using multiple windows of the input-output data. The efficacy of direction-dependent shaping and rotation in function approximation is evaluated by modifying the minimal resource allocating network and considering different test examples. The examples are drawn from recent literature to benchmark the new algorithm against existing methods.
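The geometric idea behind shaping and rotating a basis unit can be shown in a few lines. This is a generic sketch, not the paper's parameterization: a 2-D Gaussian unit whose precision matrix is built from per-axis widths and a rotation angle, so that its level sets are scaled and rotated ellipses instead of circles. All names here are ours.

```python
# Sketch (assumed names/parameterization): an anisotropic, rotated Gaussian
# basis unit defined by a symmetric positive-definite precision matrix.
import numpy as np

def rotated_gaussian(x, center, widths, angle):
    """2-D Gaussian unit with per-axis widths, rotated by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])                       # rotation matrix
    P = R @ np.diag(1.0 / np.asarray(widths) ** 2) @ R.T  # precision matrix
    d = np.asarray(x, dtype=float) - np.asarray(center, dtype=float)
    return np.exp(-0.5 * d @ P @ d)

# An elongated unit rotated 45 degrees responds anisotropically:
g = lambda x: rotated_gaussian(x, center=[0, 0], widths=[2.0, 0.2],
                               angle=np.pi / 4)
print(g([1.0, 1.0]), g([1.0, -1.0]))  # strong along the rotated long axis
```

A network of such units can track a ridge-like trend in the data with one elongated unit where many isotropic units would be needed, which is the parameter saving the abstract refers to.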
Functional model of biological neural networks
2010-01-01
A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations exhibit many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks. PMID:22132040
On extended thermonuclear functions through pathway model
NASA Astrophysics Data System (ADS)
Kumar, Dilip
when α → 1. The beauty of the result is that these three different families of functional forms are covered through the pathway parameter α. In a physical setup, if f (x) in (3) is the stable or limiting form, the Maxwell-Boltzmann approach to thermonuclear functions, then f (x) in (1) and (2) will contain a large variety of unstable or chaotic situations which all tend to (3) in the limit. Thus we get a clear picture of all the stable and unstable situations around the Maxwell-Boltzmann approach: the current theory is given a mathematical extension, and physical interpretations can be found for the situations in (1) and (2). Incidentally, Tsallis statistics is a special case of (1) for γ = 0, a = 1, δ = 1, η = 1. The Beck-Cohen superstatistics discussed in the current statistical mechanics literature is a special case of (2) for a = 1, η = 1, α > 1. The main purpose of the present paper is to investigate in more mathematical detail the extended thermonuclear functions for Maxwell-Boltzmann statistics and in the cut-off case. The extended thermonuclear functions are evaluated in closed form for all convenient values of the parameters by means of residue calculus. A comparison of the standard thermonuclear functions with the extended thermonuclear functions is also given. The results and derivations in this paper are new and will be of interest to physicists, mathematicians, probabilists, and statisticians.
Green's function approach for quantum graphs: An overview
NASA Astrophysics Data System (ADS)
Andrade, Fabiano M.; Schmidt, A. G. M.; Vicentini, E.; Cheng, B. K.; da Luz, M. G. E.
2016-08-01
Here we review the many aspects and distinct phenomena associated with quantum dynamics on general graph structures. To do so, we discuss this class of systems within the energy-domain Green's function (G) framework. This approach is particularly interesting because G can be written as a sum over classical-like paths, where local quantum effects are taken into account through the scattering matrix elements (basically, transmission and reflection amplitudes) defined on each of the graph vertices. Hence, the exact G has the functional form of a generalized semiclassical formula, which through different calculation techniques (addressed in detail here) can always be cast into a closed analytic expression. This allows one to solve arbitrarily large (although finite) graphs exactly, in a recursive and fast way. Using the Green's function method, we survey many properties of open and closed quantum graphs: scattering solutions for the former and eigenspectra and eigenstates for the latter, also considering quasi-bound states. Concrete examples, such as cubes, binary trees and Sierpiński-like topologies, are presented. Throughout the work, possible distinct applications of the Green's function methods for quantum graphs are outlined.
Vertebrate Membrane Proteins: Structure, Function, and Insights from Biophysical Approaches
MÜLLER, DANIEL J.; WU, NAN; PALCZEWSKI, KRZYSZTOF
2008-01-01
Membrane proteins are key targets for pharmacological intervention because they are vital for cellular function. Here, we analyze recent progress made in the understanding of the structure and function of membrane proteins with a focus on rhodopsin and development of atomic force microscopy techniques to study biological membranes. Membrane proteins are compartmentalized to carry out extra- and intracellular processes. Biological membranes are densely populated with membrane proteins that occupy approximately 50% of their volume. In most cases membranes contain lipid rafts, protein patches, or paracrystalline formations that lack the higher-order symmetry that would allow them to be characterized by diffraction methods. Despite many technical difficulties, several crystal structures of membrane proteins that illustrate their internal structural organization have been determined. Moreover, high-resolution atomic force microscopy, near-field scanning optical microscopy, and other lower resolution techniques have been used to investigate these structures. Single-molecule force spectroscopy tracks interactions that stabilize membrane proteins and those that switch their functional state; this spectroscopy can be applied to locate a ligand-binding site. Recent development of this technique also reveals the energy landscape of a membrane protein, defining its folding, reaction pathways, and kinetics. Future development and application of novel approaches during the coming years should provide even greater insights to the understanding of biological membrane organization and function. PMID:18321962
Takahashi, Kou; Kong, Qiongman; Lin, Yuchen; Stouffer, Nathan; Schulte, Delanie A; Lai, Liching; Liu, Qibing; Chang, Ling-Chu; Dominguez, Sky; Xing, Xuechao; Cuny, Gregory D; Hodgetts, Kevin J; Glicksman, Marcie A; Lin, Chien-Liang Glenn
2015-03-01
Glutamatergic systems play a critical role in cognitive functions and are known to be defective in Alzheimer's disease (AD) patients. Previous literature has indicated that glial glutamate transporter EAAT2 plays an essential role in cognitive functions and that loss of EAAT2 protein is a common phenomenon observed in AD patients and animal models. In the current study, we investigated whether restored EAAT2 protein and function could benefit cognitive functions and pathology in APPSw,Ind mice, an animal model of AD. A transgenic mouse approach via crossing EAAT2 transgenic mice with APPSw,Ind. mice and a pharmacological approach using a novel EAAT2 translational activator, LDN/OSU-0212320, were conducted. Findings from both approaches demonstrated that restored EAAT2 protein function significantly improved cognitive functions, restored synaptic integrity, and reduced amyloid plaques. Importantly, the observed benefits were sustained one month after compound treatment cessation, suggesting that EAAT2 is a potential disease modifier with therapeutic potential for AD. PMID:25711212
A relaxation-based approach to damage modeling
NASA Astrophysics Data System (ADS)
Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus
2016-10-01
Material models, including softening effects due to, for example, damage and localizations, share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple local behavior described, for example, by internal variables, at a spatial level. This can take account of the gradient of the internal variable to yield mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: Appropriate modifications of the relaxed (condensed) energy hold the same advantage as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that prove empirically how the new approach works.
A Functional Approach to Deconvolve Dynamic Neuroimaging Data
Jiang, Ci-Ren; Aston, John A. D.; Wang, Jane-Ling
2016-01-01
Positron emission tomography (PET) is an imaging technique which can be used to investigate chemical changes in human biological processes such as cancer development or neurochemical reactions. Most dynamic PET scans are currently analyzed based on the assumption that linear first-order kinetics can be used to adequately describe the system under observation. However, there has recently been strong evidence that this is not the case. To provide an analysis of PET data which is free from this compartmental assumption, we propose a nonparametric deconvolution and analysis model for dynamic PET data based on functional principal component analysis. This yields flexibility in the possible deconvolved functions while still performing well when a linear compartmental model setup is the true data generating mechanism. As the deconvolution needs to be performed on only a relatively small number of basis functions rather than voxel by voxel in the entire three-dimensional volume, the methodology is both robust to typical brain-imaging noise levels and computationally efficient. The new methodology is investigated through simulations on both one-dimensional functions and two-dimensional images, and is also applied to a neuroimaging study whose goal is the quantification of opioid receptor concentration in the brain. PMID:27226673
Stochastic model updating utilizing Bayesian approach and Gaussian process model
NASA Astrophysics Data System (ADS)
Wan, Hua-Ping; Ren, Wei-Xin
2016-03-01
Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Solving the inverse problem by optimization usually brings about issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the problem in a straightforward manner, which enables it to avoid the issues above. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce the computational cost in two ways. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, an advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from the calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
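The surrogate-plus-MCMC workflow described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a one-parameter analytic function stands in for the expensive FEM, a numpy-only RBF Gaussian process serves as the fast surrogate, and a plain Metropolis sampler (rather than DRAM) draws from the posterior. All function names and numerical values are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive high-resolution FEM: one stiffness-like parameter
# theta maps to one measured response (monotone, so the posterior is unimodal).
def fem_response(theta):
    return theta + 0.5 * np.sin(theta)

# --- Gaussian process surrogate (RBF kernel) trained on a handful of runs ---
X = np.linspace(0.0, 3.0, 12)            # design points = expensive model runs
y = fem_response(X)

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

alpha = np.linalg.solve(rbf(X, X) + 1e-10 * np.eye(len(X)), y)

def surrogate(theta):                    # GP posterior mean, cheap to evaluate
    return float(rbf(np.atleast_1d(theta), X) @ alpha)

# --- Metropolis sampler on the cheap surrogate (stand-in for DRAM) ---
theta_true, sigma = 1.3, 0.05
y_obs = fem_response(theta_true)         # one observation of the real system

def log_post(theta):                     # uniform prior on (0, 3)
    if not 0.0 < theta < 3.0:
        return -np.inf
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2

theta, lp, chain = 1.0, None, []
lp = log_post(theta)
for _ in range(4000):
    prop = theta + 0.2 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
posterior = np.array(chain[1000:])       # discard burn-in
```

Every posterior evaluation hits only the surrogate, so the FEM is run just 12 times regardless of chain length, which is the cost saving the abstract describes.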
Simple models of human brain functional networks.
Vértes, Petra E; Alexander-Bloch, Aaron F; Gogtay, Nitin; Giedd, Jay N; Rapoport, Judith L; Bullmore, Edward T
2012-04-10
Human brain functional networks are embedded in anatomical space and have topological properties--small-worldness, modularity, fat-tailed degree distributions--that are comparable to many other complex networks. Although a sophisticated set of measures is available to describe the topology of brain networks, the selection pressures that drive their formation remain largely unknown. Here we consider generative models for the probability of a functional connection (an edge) between two cortical regions (nodes) separated by some Euclidean distance in anatomical space. In particular, we propose a model in which the embedded topology of brain networks emerges from two competing factors: a distance penalty based on the cost of maintaining long-range connections; and a topological term that favors links between regions sharing similar input. We show that, together, these two biologically plausible factors are sufficient to capture an impressive range of topological properties of functional brain networks. Model parameters estimated in one set of functional MRI (fMRI) data on normal volunteers provided a good fit to networks estimated in a second independent sample of fMRI data. Furthermore, slightly detuned model parameters also generated a reasonable simulation of the abnormal properties of brain functional networks in people with schizophrenia. We therefore anticipate that many aspects of brain network organization, in health and disease, may be parsimoniously explained by an economical clustering rule for the probability of functional connectivity between different brain areas.
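The two-factor edge rule described above can be illustrated with a toy generative simulation. The sketch below is a hedged simplification, not the authors' exact model: it grows a network by repeatedly sampling an edge with probability proportional to a power of the number of shared neighbours (topological reward) divided by a power of Euclidean distance (wiring-cost penalty); the exponents and sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n_nodes, n_edges = 40, 150
pos = rng.uniform(0.0, 1.0, size=(n_nodes, 2))        # "anatomical" positions
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)

eta, gamma = 2.0, 3.0    # distance-penalty and clustering-reward exponents
A = np.zeros((n_nodes, n_nodes))

for _ in range(n_edges):
    shared = A @ A                                    # common neighbours of (i, j)
    score = (shared + 1.0) ** gamma / (dist + np.eye(n_nodes)) ** eta
    score[np.triu_indices(n_nodes)] = 0.0             # use the i > j half only
    score[A > 0] = 0.0                                # forbid multi-edges
    p = (score / score.sum()).ravel()
    i, j = divmod(rng.choice(n_nodes * n_nodes, p=p), n_nodes)
    A[i, j] = A[j, i] = 1.0                           # add the sampled edge
```

Sweeping the two exponents trades wiring cost against clustering, which is the competition the abstract argues shapes real functional networks.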
Understanding human functioning using graphical models
2010-01-01
Background Functioning and disability are universal human experiences. However, our current understanding of functioning from a comprehensive perspective is limited. The development of the International Classification of Functioning, Disability and Health (ICF) on the one hand and recent developments in graphical modeling on the other might be combined and open the door to a more comprehensive understanding of human functioning. The objective of our paper therefore is to explore how graphical models can be used in the study of ICF data for a range of applications. Methods We show the applicability of graphical models to ICF data for different tasks: visualization of the dependence structure of the data set, dimension reduction and comparison of subpopulations. Moreover, we further developed and applied recent findings in causal inference using graphical models to estimate bounds on intervention effects in an observational study with many variables and without knowing the underlying causal structure. Results In each field, graphical models could be applied, yielding results of high face validity. In particular, graphical models could be used for visualization of functioning in patients with spinal cord injury. The resulting graph consisted of several connected components which can be used for dimension reduction. Moreover, we found that the differences in the dependence structures between subpopulations were relevant and could be systematically analyzed using graphical models. Finally, when estimating bounds on causal effects of ICF categories on general health perceptions among patients with chronic health conditions, we found that the five ICF categories that showed the strongest effect were plausible. Conclusions Graphical models are a flexible tool and lend themselves to a wide range of applications. In particular, studies involving ICF data seem well suited to analysis using graphical models. PMID:20149230
Internal wave signal processing: A model-based approach
Candy, J.V.; Chambers, D.H.
1995-02-22
A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile, etc.) developed from the solution of the associated boundary value problem as well as the horizontal velocity components. These models are then generalized to the stochastic case where an approximate Gauss-Markov theory applies. The resulting Gauss-Markov representation, in principle, allows the inclusion of stochastic phenomena such as noise and modeling errors in a consistent manner. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves. In particular, a processor is designed that allows in situ recursive estimation of the required velocity functions. Finally, it is shown that the associated residual or so-called innovation sequence that ensues from the recursive nature of this formulation can be employed to monitor the model's fit to the data.
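The recursive estimator and its innovation sequence can be illustrated with a scalar Gauss-Markov model. This is a generic Kalman-filter sketch, not the paper's ocean-acoustic processor: the residual e[k] = y[k] - c*x_pred is the innovation, and a zero-mean, uncorrelated innovation sequence indicates the model fits the data, which is the monitoring idea the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar Gauss-Markov model: x[k+1] = a x[k] + w,   y[k] = c x[k] + v
a, c, Q, R = 0.95, 1.0, 0.01, 0.1
n = 2000

# Simulate the true state and noisy measurements.
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + np.sqrt(Q) * rng.normal()
y = c * x + np.sqrt(R) * rng.normal(size=n)

# Recursive (Kalman) estimator; innov[k] is the innovation sequence.
xhat, P = 0.0, 1.0
innov = np.zeros(n)
for k in range(n):
    xhat, P = a * xhat, a * a * P + Q      # predict
    innov[k] = y[k] - c * xhat             # innovation (residual)
    S = c * c * P + R                      # predicted innovation variance
    K = P * c / S                          # Kalman gain
    xhat += K * innov[k]                   # update
    P *= 1.0 - K * c
```

If the assumed model matches the data, the innovations should be (approximately) zero-mean and white; a drift or correlation in them flags model mismatch.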
FINDSITE: a combined evolution/structure-based approach to protein function prediction
Brylinski, Michal
2009-01-01
A key challenge of the post-genomic era is the identification of the function(s) of all the molecules in a given organism. Here, we review the status of sequence and structure-based approaches to protein function inference and ligand screening that can provide functional insights for a significant fraction of the ∼50% of ORFs of unassigned function in an average proteome. We then describe FINDSITE, a recently developed algorithm for ligand binding site prediction, ligand screening and molecular function prediction, which is based on binding site conservation across evolutionary distant proteins identified by threading. Importantly, FINDSITE gives comparable results when high-resolution experimental structures as well as predicted protein models are used. PMID:19324930
A comprehensive approach to age-dependent dosimetric modeling
Leggett, R.W.; Cristy, M.; Eckerman, K.F.
1986-01-01
In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.
Rational transfer function models for biofilm reactors
Wik, T.; Breitholtz, C.
1998-12-01
Design of controllers and optimization of plants using biofilm reactors often require dynamic models and efficient simulation methods. Standard model assumptions were used to derive nonrational transfer functions describing the fast dynamics of stirred-tank reactors with zero- or first-order reactions inside the biofilm. A method based on the location of singularities was used to derive rational transfer functions that approximate nonrational ones. These transfer functions can be used in efficient simulation routines and in standard methods of controller design. The order of the transfer functions can be chosen in a natural way, and changes in physical parameters may directly be related to changes in the transfer functions. Further, the mass balances used and, hence, the transfer functions, are applicable to catalytic reactors with porous catalysts as well. By applying the methods to a nitrifying trickling filter, reactor parameters are estimated from residence-time distributions and low-order rational transfer functions are achieved. Simulated effluent dynamics, using these transfer functions, agree closely with measurements.
Optimization approaches to nonlinear model predictive control
Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)
1991-01-01
With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here, several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
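The basic repeated-NLP control loop described above can be sketched with an off-the-shelf solver. The plant, horizon length and cost weights below are invented for illustration, and process constraints (central to the paper's discussion) are omitted for brevity: at each sample the controller minimizes a finite-horizon cost, applies only the first control move, and re-solves from the new state.

```python
import numpy as np
from scipy.optimize import minimize

dt, N, x_ref = 0.1, 10, 1.0           # sample time, horizon, setpoint

def step(x, u):                        # toy nonlinear plant: dx/dt = -x^3 + u
    return x + dt * (-x**3 + u)

def horizon_cost(u_seq, x0):           # NLP objective over the prediction horizon
    x, cost = x0, 0.0
    for u in u_seq:
        x = step(x, u)
        cost += (x - x_ref) ** 2 + 0.01 * u**2
    return cost

x, u_guess = 0.0, np.zeros(N)
for _ in range(30):                    # receding-horizon (MPC) loop
    res = minimize(horizon_cost, u_guess, args=(x,))
    x = step(x, res.x[0])              # apply only the first control move
    u_guess = np.roll(res.x, -1)       # warm-start the next solve
```

Shifting the previous solution as the next initial guess (warm starting) is what keeps the repeated NLP solves cheap enough for on-line use.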
Green's function approach to edge states in transition metal dichalcogenides
NASA Astrophysics Data System (ADS)
Farmanbar, Mojtaba; Amlaki, Taher; Brocks, Geert
2016-05-01
The semiconducting two-dimensional transition metal dichalcogenides MX2 show an abundance of one-dimensional metallic edges and grain boundaries. Standard techniques for calculating edge states typically model nanoribbons, and require the use of supercells. In this paper, we formulate a Green's function technique for calculating edge states of (semi-)infinite two-dimensional systems with a single well-defined edge or grain boundary. We express Green's functions in terms of Bloch matrices, constructed from the solutions of a quadratic eigenvalue equation. The technique can be applied to any localized basis representation of the Hamiltonian. Here, we use it to calculate edge states of MX2 monolayers by means of tight-binding models. Aside from the basic zigzag and armchair edges, we study edges with a more general orientation, structurally modified edges, and grain boundaries. A simple three-band model captures an important part of the edge electronic structures. An 11-band model comprising all valence orbitals of the M and X atoms is required to obtain all edge states with energies in the MX2 band gap. Here, states of odd symmetry with respect to a mirror plane through the layer of M atoms have a dangling-bond character, and tend to pin the Fermi level.
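A quadratic eigenvalue equation of the kind mentioned above is commonly solved by companion linearization. The following is a generic numpy sketch, not the authors' code: it rewrites (lam^2 A + lam B + C) v = 0 as an ordinary eigenproblem of twice the size, assuming A is invertible, and checks it on a scalar case with known roots.

```python
import numpy as np

def quad_eig(A, B, C):
    """Solve (lam^2 A + lam B + C) v = 0 via companion linearization."""
    n = A.shape[0]
    Ainv = np.linalg.inv(A)                       # assumes A is invertible
    top = np.hstack([np.zeros((n, n)), np.eye(n)])
    bot = np.hstack([-Ainv @ C, -Ainv @ B])
    lam, W = np.linalg.eig(np.vstack([top, bot])) # 2n ordinary eigenpairs
    return lam, W[:n]                             # eigenvalues, v-part of vectors

# Scalar check: lam^2 - 3 lam + 2 = 0  has roots lam = 1, 2.
lam, _ = quad_eig(np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]]))
```

In the edge-state context, A, B, C would be the tight-binding coupling blocks between adjacent principal layers, and the eigenvalues lam play the role of Bloch factors.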
Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach
Li, Jun; Zhao, Patrick X.
2016-01-01
Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/. PMID:27446133
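The idea of coupling PageRank across two network layers can be sketched in a few lines. The version below is a minimal multiplex-PageRank-style illustration, not the mPageRank implementation: ranks computed on one layer bias the teleport vector of the power iteration on the second layer; the coupling scheme and the toy adjacency matrices are assumptions for the sketch.

```python
import numpy as np

def pagerank(adj, d=0.85, weights=None, tol=1e-12):
    """Power iteration; `weights` biases the teleport vector (layer coupling)."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    P = np.divide(adj, out[:, None], out=np.zeros_like(adj),
                  where=out[:, None] > 0)         # row-stochastic transition matrix
    P[out == 0] = 1.0 / n                         # dangling nodes teleport uniformly
    v = np.ones(n) / n if weights is None else weights / weights.sum()
    x = np.ones(n) / n
    while True:
        x_new = d * (P.T @ x) + (1 - d) * v
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

# Toy layer 1: gene-gene association; toy layer 2: protein-protein interaction.
A1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
A2 = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)

x1 = pagerank(A1)                 # ranks from the first layer
x2 = pagerank(A2, weights=x1)     # second layer biased by the first
```

Nodes central in both layers end up with high x2, which is the sense in which a multiplex ranking integrates heterogeneous evidence.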
Incorporating covariates in skewed functional data models.
Li, Meng; Staicu, Ana-Maria; Bondell, Howard D
2015-07-01
We introduce a class of covariate-adjusted skewed functional models (cSFM) designed for functional data exhibiting location-dependent marginal distributions. We propose a semi-parametric copula model for the pointwise marginal distributions, which are allowed to depend on covariates, and the functional dependence, which is assumed covariate invariant. The proposed cSFM framework provides a unifying platform for pointwise quantile estimation and trajectory prediction. We consider a computationally feasible procedure that handles densely as well as sparsely observed functional data. The methods are examined numerically using simulations and applied to a new tractography study of multiple sclerosis. Furthermore, the methodology is implemented in the R package cSFM, which is publicly available on CRAN.
Direct and Evolutionary Approaches for Optimal Receiver Function Inversion
NASA Astrophysics Data System (ADS)
Dugda, Mulugeta Tuji
Receiver functions are time series obtained by deconvolving vertical-component seismograms from radial-component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface or discontinuity that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions has been formulated to determine crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique makes use of the evolutionary Genetic Algorithm (GA) optimization technique. The second combines the direct Generalized Pattern Search (GPS) and evolutionary Fitness Proportionate Niching (FPN) techniques, exploiting their respective strengths. In a previous study, a Monte Carlo technique was utilized to determine variable weights in the H-kappa stacking of receiver functions. Compared to that variable-weights approach, the present GA and GPS-FPN techniques save considerable time and are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights for stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
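A toy version of the GA-based H-kappa search can be sketched as follows. The synthetic "receiver function" is three Gaussian pulses at the zero-slowness Moho phase times tPs = H(k-1)/Vp, tPpPs = H(k+1)/Vp and tPpSs+PsPs = 2Hk/Vp, and a simple elitist GA maximizes the weighted stack. All numbers (Vp, pulse widths, weights, GA settings) are illustrative assumptions, not the dissertation's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

Vp, H_true, k_true = 6.3, 35.0, 1.75       # km/s, km, Vp/Vs
w = (0.5, 0.3, 0.2)                        # stacking weights (sum to 1)

def phase_times(H, k):                     # zero-slowness Moho phase times
    return H * (k - 1) / Vp, H * (k + 1) / Vp, 2 * H * k / Vp

def r(t):                                  # synthetic receiver function:
    t1, t2, t3 = phase_times(H_true, k_true)   # +, +, - Gaussian pulses
    g = lambda t0: np.exp(-0.5 * ((t - t0) / 0.5) ** 2)
    return g(t1) + g(t2) - g(t3)

def stack(H, k):                           # H-kappa stacking objective
    t1, t2, t3 = phase_times(H, k)
    return w[0] * r(t1) + w[1] * r(t2) - w[2] * r(t3)

# Elitist GA over (H, kappa) in [25, 45] x [1.6, 1.9]
lo, hi = np.array([25.0, 1.6]), np.array([45.0, 1.9])
pop = lo + (hi - lo) * rng.uniform(size=(80, 2))
for _ in range(80):
    fit = np.array([stack(H, k) for H, k in pop])
    elite = pop[np.argsort(fit)[-20:]]                   # keep the best quarter
    parents = elite[rng.integers(0, 20, size=(60, 2))]   # random parent pairs
    children = parents.mean(axis=1)                      # blend crossover
    children += rng.normal(scale=[0.5, 0.01], size=children.shape)  # mutation
    pop = np.clip(np.vstack([elite, children]), lo, hi)
fit = np.array([stack(H, k) for H, k in pop])
H_best, k_best = pop[np.argmax(fit)]
```

The stack peaks only where all three candidate phase times line up with pulses of the correct polarity, which is why the joint (H, kappa) search has a well-defined maximum.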
A Wigner Monte Carlo approach to density functional theory
Sellier, J.M. Dimov, I.
2014-08-01
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems: a lithium atom, a boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed in the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.
Functional Analysis of Jasmonates in Rice through Mutant Approaches
Dhakarey, Rohit; Kodackattumannil Peethambaran, Preshobha; Riemann, Michael
2016-01-01
Jasmonic acid, one of the major plant hormones, is, unlike other hormones, a lipid-derived compound that is synthesized from the fatty acid linolenic acid. It has been studied intensively in many plant species including Arabidopsis thaliana, in which most of the enzymes participating in its biosynthesis were characterized. In the past 15 years, mutants and transgenic plants affected in the jasmonate pathway have become available in rice and facilitate studies on the functions of this hormone in an important crop. Those functions are partially conserved compared to other plant species, and include roles in fertility, response to mechanical wounding and defense against herbivores. However, new and surprising functions have also been uncovered by mutant approaches, such as a close link between light perception and the jasmonate pathway. This not only revealed a phenomenon unique to rice but also helped to establish this role in plant species where such links are less obvious. This review aims to provide an overview of currently available rice mutants and transgenic plants in the jasmonate pathway and highlights some selected roles of jasmonate in this species, such as photomorphogenesis, and abiotic and biotic stress. PMID:27135235
A Mixed Approach for Modeling Blood Flow in Brain Microcirculation
NASA Astrophysics Data System (ADS)
Lorthois, Sylvie; Peyrounette, Myriam; Davit, Yohan; Quintard, Michel; Groupe d'Etude sur les Milieux Poreux Team
2015-11-01
Consistent with its distribution and exchange functions, the vascular system of the human brain cortex is a superposition of two components: at small scale, a homogeneous and space-filling mesh-like capillary network; at large scale, quasi-fractal branched veins and arteries. From a modeling perspective, this is the superposition of (a) a continuum model resulting from the homogenization of slow transport in the small-scale capillary network, and (b) a discrete network approach describing fast transport in the arteries and veins, which cannot be homogenized because of their fractal nature. This problem is analogous to that of fast-conducting wells embedded in a reservoir rock in petroleum engineering. An efficient method to reduce the computational cost is to use relatively large grid blocks for the continuum model. This makes it difficult to accurately couple both components. We solve this issue by adapting the ``well model'' concept used in petroleum engineering to brain-specific 3D situations. We obtain a unique linear system describing the discrete network, the continuum and the well model. Results are presented for realistic arterial and venous geometries. The mixed approach is compared with full network models including various idealized capillary networks of known permeability. ERC BrainMicroFlow GA615102.
Generalized exponential function and discrete growth models
NASA Astrophysics Data System (ADS)
Souto Martinez, Alexandre; Silva González, Rodrigo; Lauri Espíndola, Aquino
2009-07-01
Here we show that a particular one-parameter generalization of the exponential function is suitable to unify most of the popular one-species discrete population dynamic models into a simple formula. A physical interpretation is given to this newly introduced parameter in the context of the continuous Richards model, and it remains valid for the discrete case. From the discretization of the continuous Richards model (a generalization of the Gompertz and Verhulst models), one obtains a generalized logistic map, and we briefly study its properties. Note that the physical interpretation of the introduced parameter persists in the discrete case. Next, we generalize the (scramble competition) θ-Ricker discrete model and analytically calculate the fixed points as well as their stabilities. In contrast to previous generalizations, from the generalized θ-Ricker model one is able to retrieve either scramble or contest models.
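One widely used one-parameter generalization of the exponential is the q-exponential; the exact form in the paper may differ, so the sketch below is an assumption for illustration. It recovers exp(x) as q approaches 1, and a Ricker-type map built on it gives a minimal example of a generalized discrete growth model with fixed point x* = 1.

```python
import numpy as np

def exp_q(x, q):
    """q-exponential: [1 + (1-q) x]_+^(1/(1-q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base, 0.0) ** (1.0 / (1.0 - q))

def gen_ricker(x, r, q):
    """Ricker-type map built on exp_q: x' = x * exp_q(r (1 - x))."""
    return x * exp_q(r * (1.0 - x), q)
```

Since exp_q(0) = 1 for every q, x* = 1 is a fixed point of the map for all (r, q); its stability is governed by the multiplier 1 - r near x* = 1, just as for the ordinary Ricker map.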
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful as computing architectures become increasingly parallel. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches. PMID:25969148
Chromatin organization in pluripotent cells: emerging approaches to study and disrupt function
Lopes Novo, Clara
2016-01-01
Translating the vast amounts of genomic and epigenomic information accumulated on the linear genome into three-dimensional models of nuclear organization is a current major challenge. In response to this challenge, recent technological innovations based on chromosome conformation capture methods in combination with increasingly powerful functional approaches have revealed exciting insights into key aspects of genome regulation. These findings have led to an emerging model where the genome is folded and compartmentalized into highly conserved topological domains that are further divided into functional subdomains containing physical loops that bring cis-regulatory elements to close proximity. Targeted functional experiments, largely based on designable DNA-binding proteins, have begun to define the major architectural proteins required to establish and maintain appropriate genome regulation. Here, we focus on the accessible and well-characterized system of pluripotent cells to review the functional role of chromatin organization in regulating pluripotency, differentiation and reprogramming. PMID:26206085
Promoting return of function in multiple sclerosis: An integrated approach
Gacias, Mar; Casaccia, Patrizia
2013-01-01
Multiple sclerosis is a disease characterized by inflammatory demyelination, axonal degeneration and progressive brain atrophy. Most of the currently available disease-modifying agents are very effective in managing the relapse rate; however, progressive neuronal damage continues to occur and leads to the progressive accumulation of irreversible disability. For this reason, any therapeutic strategy aimed at restoration of function must take into account not only immunomodulation, but also axonal protection and new myelin formation. We further highlight the importance of a holistic approach, which considers the variability of therapeutic responsiveness as the result of the interplay between genetic differences and the epigenome, which is in turn affected by gender, age and differences in lifestyle, including diet, exercise, smoking and social interaction. PMID:24363985
Electron Systems Out of Equilibrium: Nonequilibrium Green's Function Approach
NASA Astrophysics Data System (ADS)
Špička, Václav; Velický, Bedřich; Kalvová, Anděla
2015-10-01
This review deals with the state of the art and perspectives of the description of non-equilibrium many-body systems using the non-equilibrium Green's function (NGF) method. The basic aim is to describe the time evolution of the many-body system from its initial state over its transient dynamics to its long-time asymptotic evolution. First, we discuss the basic aims of transport theories to motivate the introduction of the NGF techniques. Second, this article summarizes the present view on the construction of the electron transport equations formulated within the NGF approach to non-equilibrium. We discuss the incorporation of complex initial conditions into the NGF formalism, and the NGF reconstruction theorem, which serves as a tool to derive simplified kinetic equations. Three stages of the non-equilibrium evolution, the first described by the full NGF description, the second by a non-Markovian generalized master equation and the third by a Markovian master equation, will be related to each other.
NASA Astrophysics Data System (ADS)
Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.
2016-04-01
In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in the absence of an external longitudinal magnetic field. We employ the two-time Green function method, avoiding the Anderson-Callen decoupling of spin operators at the same sites, which is of doubtful accuracy. Following the original Devlin procedure we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2)-vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter, and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated, providing a global phase diagram and a quantum criticality very similar to the conventional ones.
Approach to combined-function magnets via symplectic slicing
NASA Astrophysics Data System (ADS)
Titze, M.
2016-05-01
In this article we describe how to obtain symplectic "slice" maps for combined-function magnets, by using a method of generating functions. A feature of this method is that one can use an unexpanded and unsplit Hamiltonian. From such a slice map we obtain a first-order map which is symplectic at the closed orbit. We also obtain a symplectic kick map. Both results were implemented into the widely used program MAD-X to regain, in particular, the twiss parameters for the sliced model of the Proton Synchrotron at CERN. In addition, we obtain recursion equations for symplectic maps of general time-dependent Hamiltonians, which might be useful even beyond the scope of accelerator physics.
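As a minimal illustration of the slicing idea (not the paper's generating-function construction; the quadratic focusing potential and slice count below are illustrative assumptions), a drift-kick composition is symplectic slice by slice:

```python
def kick_drift(x, p, k, L, slices):
    """Compose thin symplectic slices for H = p^2/2 + k x^2/2:
    each slice is a kick (momentum update from the potential)
    followed by a drift (position update); each factor has unit
    Jacobian determinant, so the composed map is symplectic."""
    dL = L / slices
    for _ in range(slices):
        p = p - k * x * dL   # kick
        x = x + p * dL       # drift
    return x, p

# For this quadratic Hamiltonian the map is linear, so the Jacobian
# can be read off by propagating unit initial conditions.
x1, p1 = kick_drift(1.0, 0.0, k=1.0, L=1.0, slices=10)
x2, p2 = kick_drift(0.0, 1.0, k=1.0, L=1.0, slices=10)
jacobian_det = x1 * p2 - x2 * p1   # should be 1 up to round-off
```

The same symplecticity check (unit determinant at the closed orbit) carries over to the first-order maps discussed above.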
A Network Approach to Rare Disease Modeling
NASA Astrophysics Data System (ADS)
Ghiassian, Susan; Rabello, Sabrina; Sharma, Amitabh; Wiest, Olaf; Barabasi, Albert-Laszlo
2011-03-01
Network approaches have been widely used to better understand different areas of the natural and social sciences. Network science has had a particularly great impact on the study of biological systems. In this project, using biological networks, candidate drugs were identified as potential treatments of rare diseases. Developing new drugs for each of the more than 2000 rare diseases (as defined by ORPHANET) is prohibitively expensive. Disease proteins do not function in isolation but in cooperation with other interacting proteins. Research on FDA-approved drugs has shown that most drugs do not target the disease protein itself but a protein which is 2 or 3 steps away from the disease protein in the protein-protein interaction (PPI) network. We identified the already known drug targets in the disease gene's PPI subnetwork (up to the 3rd neighborhood); among them, those in the same subcellular compartment and with a higher coexpression coefficient with the disease gene are expected to be stronger candidates. Out of 2177 rare diseases, 1092 were found not to have any drug target. Using the above method, we found the strongest candidates among the rest for further experimental validation.
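The neighborhood search described above amounts to a breadth-first traversal of the PPI network up to depth 3. A sketch under that reading, with a hypothetical toy network (protein names invented):

```python
from collections import deque

def neighborhood(ppi, source, max_depth=3):
    """Breadth-first search collecting every protein within
    max_depth steps of `source` in an undirected PPI network
    given as an adjacency dict. Returns {protein: distance}."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if seen[node] == max_depth:
            continue
        for nbr in ppi.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    del seen[source]
    return seen

# Hypothetical toy network: disease protein "D"; "T1" is a known
# drug target three steps away, "T2" is disconnected.
ppi = {"D": ["A"], "A": ["D", "B"], "B": ["A", "T1"],
       "T1": ["B"], "T2": []}
dists = neighborhood(ppi, "D")
candidates = set(dists)   # everything within the 3rd neighborhood
```

Known drug targets found in `candidates` would then be ranked by subcellular compartment and coexpression, as the abstract describes.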
Distribution function approach to irreversible adsorption of interacting colloidal particles
NASA Astrophysics Data System (ADS)
Faraudo, Jordi; Bafaluy, Javier
2000-01-01
A statistical-mechanical description of the irreversible adsorption of interacting colloidal particles is developed. Our approach describes in a consistent way the interaction of particles from the bulk with adsorbed particles during the transport process towards the adsorbing surface. The macroscopic physical quantities corresponding to the actual process are expressed as averages over simpler auxiliary processes which proceed in the presence of a fixed number n of adsorbed particles. The adsorption rate obeys a generalized Langmuir equation, in which the kinetic resistance (the inverse of the kinetic coefficient) is expressed as the sum of a diffusional resistance and a resistance due to interaction with adsorbed particles during the transport process (blocking effect). Contrary to previous approaches, the blocking effect is not due to geometrical exclusion; instead, it measures how the transport from the bulk is affected by the adsorbed particles. From the general expressions obtained, we have derived coverage expansions for the adsorption rate and the surface correlation function. The theory is applied to the case of colloidal particles interacting through DLVO potentials. This form of the kinetic coefficient is shown to be in agreement with recent experimental results in which RSA fails.
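Schematically, the resistance structure described above can be written as (notation here is illustrative, not the authors'):

```latex
\frac{d\theta}{dt} = k(\theta)\, c_b, \qquad
\frac{1}{k(\theta)} = R_{\mathrm{diff}} + R_{\mathrm{block}}(\theta),
```

where θ is the surface coverage, c_b the bulk particle concentration, R_diff the diffusional resistance, and R_block(θ) the resistance arising from interactions with already adsorbed particles during transport.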
Predicting activity approach based on new atoms similarity kernel function.
Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella
2015-07-01
Drug design is a high-cost, long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The field of chemoinformatics applies informational techniques and computer science, such as machine learning and graph theory, to discover properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). There is thus an increasing need for algorithms to analyze and classify graph data in order to predict the activity of molecules. Kernel methods provide a powerful framework which combines machine learning with graph-theoretic techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, we encode each atom according to its neighbors; we then use these codes to establish relationships between atoms, and from those relationships compute the similarity between chemical compounds. The proposed approach was compared with many other classification methods and the results show competitive accuracy with these methods.
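The encoding step can be sketched as follows: each atom's code is its own element plus the multiset of its neighbours' elements, and compound similarity is a normalized overlap of the resulting code histograms. This is a simplified reading, not the authors' exact kernel; the molecule representation and toy molecules are assumptions:

```python
from collections import Counter
import math

def atom_codes(mol):
    """mol: {atom_id: (element, [neighbor atom_ids])}.
    Code = own element plus sorted multiset of neighbour elements."""
    return Counter(
        (elem, tuple(sorted(mol[n][0] for n in nbrs)))
        for elem, nbrs in mol.values()
    )

def kernel(mol_a, mol_b):
    """Cosine-normalised overlap of atom-code histograms, in [0, 1]."""
    a, b = atom_codes(mol_a), atom_codes(mol_b)
    dot = sum(a[c] * b[c] for c in a)
    return dot / math.sqrt(sum(v * v for v in a.values()) *
                           sum(v * v for v in b.values()))

# Toy molecules: identical molecules give similarity 1,
# molecules sharing no atom environments give 0.
water = {0: ("O", [1, 2]), 1: ("H", [0]), 2: ("H", [0])}
methane = {0: ("C", [1, 2, 3, 4]), 1: ("H", [0]), 2: ("H", [0]),
           3: ("H", [0]), 4: ("H", [0])}
```

A kernel of this form is positive semidefinite by construction (it is an inner product of histograms), which is what an SVM-style classifier requires.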
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms at fine-tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time-consuming aspect of the controller design. A single change in the membership functions can significantly alter the performance of the controller. Membership function definition can be accomplished by a trial-and-error technique of altering the membership functions to create a highly tuned controller. This approach can be time-consuming and requires a great deal of knowledge from human experts. In order to shorten development time, an iterative procedure was developed for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity vector approach and station-keeping maneuvers. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
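A minimal sketch of the genetic-algorithm tuning loop, with a stand-in cost function replacing the fuel-use simulation (the target centers, population size, and mutation settings are illustrative assumptions, not the paper's controller):

```python
import random

def cost(centers):
    """Stand-in for the fuel-use simulation: distance from a
    hypothetical optimal set of membership-function centers."""
    target = [-1.0, 0.0, 1.0]
    return sum((c - t) ** 2 for c, t in zip(sorted(centers), target))

def evolve(pop_size=30, gens=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]                # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            if rng.random() < 0.3:                            # mutation
                i = rng.randrange(3)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
```

In the actual application, `cost` would run the fuzzy controller through a simulated maneuver and return the fuel consumed.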
Atom and Bond Fukui Functions and Matrices: A Hirshfeld-I Atoms-in-Molecule Approach.
Oña, Ofelia B; De Clercq, Olivier; Alcoba, Diego R; Torre, Alicia; Lain, Luis; Van Neck, Dimitri; Bultinck, Patrick
2016-09-19
The Fukui function is often used in its atom-condensed form by isolating it from the molecular Fukui function using a chosen weight function for the atom in the molecule. Recently, Fukui functions and matrices for both atoms and bonds separately were introduced for semiempirical and ab initio levels of theory using Hückel and Mulliken atoms-in-molecule models. In this work, a double partitioning method of the Fukui matrix is proposed within the Hirshfeld-I atoms-in-molecule framework. Diagonalizing the resulting atomic and bond matrices gives eigenvalues and eigenvectors (Fukui orbitals) describing the reactivity of atoms and bonds. The Fukui function is the diagonal element of the Fukui matrix and may be resolved in atom and bond contributions. The extra information contained in the atom and bond resolution of the Fukui matrices and functions is highlighted. The effect of the choice of weight function arising from the Hirshfeld-I approach to obtain atom- and bond-condensed Fukui functions is studied. A comparison of the results with those generated by using the Mulliken atoms-in-molecule approach shows low correlation between the two partitioning schemes. PMID:27381271
Decision making in bipolar disorder: a cognitive modeling approach.
Yechiam, Eldad; Hayden, Elizabeth P; Bodkins, Misty; O'Donnell, Brian F; Hetrick, William P
2008-11-30
A formal modeling approach was used to characterize decision-making processes in bipolar disorder. Decision making was examined in 28 bipolar patients (14 acute and 14 remitted) and 25 controls using the Iowa Gambling Task (Bechara et al., 1994), a decision-making task used for assessing cognitive impulsivity. To disentangle motivational and cognitive aspects of decision-making processes, we applied a formal cognitive model to performance on the Iowa Gambling Task. The model has three parameters: the relative impact of rewards and punishments on evaluations, the impact of recent versus past payoffs, and the degree of choice consistency. The results indicated that acute bipolar patients were characterized by low choice consistency, or a tendency to make erratic choices. Low choice consistency improved the prediction of acute bipolar disorder beyond that provided by cognitive functioning and self-report measures of personality and temperament. PMID:18848361
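The three parameters described correspond to the expectancy-valence family of cognitive models for the Iowa Gambling Task; a minimal sketch under that assumption (function and parameter names here are hypothetical, not the authors' notation):

```python
import math

def ev_choice_probs(expectancies, consistency):
    """Softmax choice rule: higher consistency makes choices more
    deterministic; consistency near 0 gives erratic, near-uniform
    choices (the marker of the acute bipolar group above)."""
    m = max(e * consistency for e in expectancies)
    exps = [math.exp(e * consistency - m) for e in expectancies]
    z = sum(exps)
    return [x / z for x in exps]

def update_expectancy(expectancy, payoff, w_loss, recency):
    """Valence blends wins and losses via the attention weight
    w_loss; the deck expectancy is a recency-weighted average."""
    gain, loss = max(payoff, 0), min(payoff, 0)
    valence = (1 - w_loss) * gain + w_loss * loss
    return expectancy + recency * (valence - expectancy)

# One learning step on a hypothetical deck paying +100
e1 = update_expectancy(0.0, 100, w_loss=0.5, recency=0.1)
probs = ev_choice_probs([e1, 0.0, 0.0, 0.0], consistency=0.5)
```

Fitting `w_loss`, `recency`, and `consistency` to each participant's trial-by-trial choices is what allows the motivational and consistency components to be separated.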
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem of the calibration of hydrological models is the equifinality of different parameter sets derived from the calibration of models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim to identify structural model deficiencies, assess the internal process representation and tackle equifinality. We developed a model-dependent (MDA) approach calibrating the model runoff components against the FSD components, and a model-independent (MIA) approach comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA lead to a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest when applying MDA and shows only minor reductions for MIA.
Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina
2016-01-01
In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model, and a comparison of the models' performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second one, on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation-oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies drinking water to a population of 450,000. Numerical results show that phytoplankton morpho-functional groups more closely represent each species' growth requirements within the group. Each model's performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for seasonal dynamics of the phytoplankton community and main biogeochemical variables for a one-year time horizon are presented and compared for both models, showing the functional group model's enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon.
Advanced Ginzburg-Landau theory of freezing: A density-functional approach
NASA Astrophysics Data System (ADS)
Tóth, Gyula I.; Provatas, Nikolas
2014-09-01
This paper revisits the weakly fourth-order anisotropic Ginzburg-Landau (GL) theory of freezing (also known as the Landau-Brazowskii model or theory of weak crystallization) by comparing it to a recent density functional approach, the phase-field crystal (PFC) model. First we study the critical behavior of a generalized PFC model and show that (i) the so-called one-mode approximation is exact in the leading order, and (ii) the direct correlation function has no contribution to the phase diagram near the critical point. Next, we calculate the anisotropy of the crystal-liquid interfacial free energy in the phase-field crystal (PFC) model analytically. For comparison, we also determine the anisotropy numerically and show that no range of parameters can be found for which the phase-field crystal equation can quantitatively model anisotropy for metallic materials. Finally, we derive the leading order PFC amplitude model and show that it coincides with the weakly fourth-order anisotropic GL theory, as a consequence of the assumptions of the GL theory being inherent in the PFC model. We also propose a way to calibrate the anisotropy in the Ginzburg-Landau theory via a generalized gradient operator emerging from the direct correlation function appearing in the generating PFC free energy functional.
Fast approach to infrared image restoration based on shrinkage functions calibration
NASA Astrophysics Data System (ADS)
Zhang, Chengshuo; Shi, Zelin; Xu, Baoshu; Feng, Bin
2016-05-01
High-quality image restoration in real time is a challenge for infrared imaging systems. We present a fast approach to infrared image restoration based on shrinkage function calibration. Rather than modeling the prior of sharp images to obtain the shrinkage functions, we calibrate them directly using sharp and blurred image pairs acquired from the same infrared imaging system. The calibration minimizes the sum of squared errors between the sharp images and the images restored from the blurred ones. Our restoration algorithm is noniterative and its shrinkage functions are stored in look-up tables, so a pipelined architecture can work in real time. We demonstrate the effectiveness of our approach by testing its quantitative performance in simulation experiments and its qualitative performance on a developed wavefront coding infrared imaging system.
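The calibration idea can be sketched in one dimension: for each bin of blurred coefficient values, the output minimizing the sum of squared errors against the sharp values is the per-bin mean, and restoration is then a pure table lookup. The bin layout and the synthetic "deblurring law" below are illustrative assumptions, not the paper's transform domain:

```python
import numpy as np

def calibrate_lut(blurred, sharp, n_bins=64, lo=-1.0, hi=1.0):
    """Per-bin least squares: for each bin of blurred values, the
    SSE-minimising output is the mean of the paired sharp values."""
    bins = np.linspace(lo, hi, n_bins + 1)
    idx = np.clip(np.digitize(blurred, bins) - 1, 0, n_bins - 1)
    lut = np.zeros(n_bins)
    for b in range(n_bins):
        mask = idx == b
        lut[b] = sharp[mask].mean() if mask.any() else 0.5 * (bins[b] + bins[b + 1])
    return bins, lut

def apply_lut(blurred, bins, lut):
    """Restoration is a table lookup only: no iteration at run time."""
    idx = np.clip(np.digitize(blurred, bins) - 1, 0, len(lut) - 1)
    return lut[idx]

# Synthetic training pairs: sharp = 2 * blurred (a trivial law)
rng = np.random.default_rng(0)
blurred = rng.uniform(-1, 1, 10_000)
sharp = 2.0 * blurred
bins, lut = calibrate_lut(blurred, sharp)
restored = apply_lut(blurred, bins, lut)
```

The lookup-only structure of `apply_lut` is what makes a pipelined, real-time implementation straightforward.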
Enhancements to the SSME transfer function modeling code
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.
1995-01-01
This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes which enhance the code functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction of ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low pass filtered prior to the modeling process in an
Modeling uncertainty in reservoir loss functions using fuzzy sets
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Simonovic, Slobodan P.
1999-09-01
Imprecision involved in the definition of reservoir loss functions is addressed using fuzzy set theory concepts. A reservoir operation problem is solved using the concepts of fuzzy mathematical programming. Membership functions from fuzzy set theory are used to represent the decision maker's preferences in the definition of shape of loss curves. These functions are assumed to be known and are used to model the uncertainties. Linear and nonlinear optimization models are developed under fuzzy environment. A new approach is presented that involves development of compromise reservoir operating policies based on the rules from the traditional optimization models and their fuzzy equivalents while considering the preferences of the decision maker. The imprecision associated with the definition of penalty and storage zones and uncertainty in the penalty coefficients are the main issues addressed through this study. The models developed are applied to the Green Reservoir, Kentucky. Simulations are performed to evaluate the operating rules generated by the models considering the uncertainties in the loss functions. Results indicate that the reservoir operating policies are sensitive to change in the shapes of loss functions.
A Model-Fitting Approach to Characterizing Polymer Decomposition Kinetics
Burnham, A K; Weese, R K
2004-07-20
The use of isoconversional, sometimes called model-free, kinetic analysis methods has recently gained favor in the thermal analysis community. Although these methods are very useful and instructive, the conclusion that model fitting is a poor approach is largely due to improper use of model fitting, such as fitting each heating rate separately. The current paper shows the ability of model fitting to correlate reaction data over very wide time-temperature regimes, including simultaneous fitting of isothermal and constant-heating-rate data. Recently published data on cellulose pyrolysis by Capart et al. (TCA, 2004), fitted with a combination of an autocatalytic primary reaction and an nth-order char pyrolysis reaction, are given as one example. Fits for the thermal decomposition of Estane, Viton-A, and Kel-F over very wide ranges of heating rates are also presented. The Kel-F required two parallel reactions--one describing a small, early decomposition process, and a second autocatalytic reaction describing the bulk of pyrolysis. Viton-A and Estane also required two parallel reactions for primary pyrolysis, with the first Viton-A reaction also being a minor, early process. In addition, the yield of residue from these two polymers depends on the heating rate. This is an example of a competitive reaction between volatilization and char formation, which violates the basic tenet of the isoconversional approach and illustrates why it has limitations. Although more complicated models have been used in the literature for this type of process, we described our data well with a simple addition to the standard model in which the char yield is a function of the logarithm of the heating rate.
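As an illustration of the kind of autocatalytic primary reaction referred to above, one common form (an extended Prout-Tompkins rate law; all parameter values here are invented, not fits from the paper) can be integrated at a constant heating rate:

```python
import numpy as np

def alpha_curve(A, Ea, beta, z=0.01, T0=300.0, T1=800.0, n=20000):
    """Autocatalytic conversion dalpha/dT = (A/beta) exp(-Ea/RT)
    (1 - alpha)(alpha + z), integrated by forward Euler at a
    constant heating rate beta (K/s). z seeds the autocatalysis."""
    R = 8.314  # J/(mol K)
    T = np.linspace(T0, T1, n)
    dT = T[1] - T[0]
    a = np.empty(n)
    a[0] = 0.0
    for i in range(1, n):
        rate = (A / beta) * np.exp(-Ea / (R * T[i - 1])) \
               * (1.0 - a[i - 1]) * (a[i - 1] + z)
        a[i] = min(a[i - 1] + rate * dT, 1.0)  # clamp at full conversion
    return T, a

# Illustrative Arrhenius parameters (not fitted values)
T, a = alpha_curve(A=1e13, Ea=180e3, beta=0.1)
```

Sweeping `beta` over several decades and fitting all curves simultaneously is the kind of multi-heating-rate model fitting the abstract advocates.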
A textural approach based on Gabor functions for texture edge detection in ultrasound images.
Chen, C M; Lu, H H; Han, K C
2001-04-01
Edge detection is an important, but difficult, step in quantitative ultrasound (US) image analysis. In this paper, we present a new textural approach for detecting a class of edges in US images; namely, the texture edges with a weak regional mean gray-level difference (RMGD) between adjacent regions. The proposed approach comprises a vision model-based texture edge detector using Gabor functions and a new texture-enhancement scheme. The experimental results on the synthetic edge images have shown that the performances of the four tested textural and nontextural edge detectors are about 20%-95% worse than that of the proposed approach. Moreover, the texture enhancement may improve the performance of the proposed texture edge detector by as much as 40%. The experiments on 20 clinical US images have shown that the proposed approach can find reasonable edges for real objects of interest with the performance of 0.4 +/- 0.08 in terms of the Pratt's figure.
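A real-valued Gabor function is a cosine carrier under a Gaussian envelope; filtering two textures of different spatial frequency with a kernel tuned to one of them separates their energies, so the texture boundary appears as an energy edge. This is only a generic sketch of Gabor texture filtering, not the paper's vision-model detector; all sizes and frequencies below are illustrative:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor function: cosine carrier of the given
    wavelength under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def texture_energy(image, kernel):
    """Squared filter response via FFT-based (circular) convolution."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(kernel, image.shape)))
    return resp ** 2

# Two vertical-stripe textures side by side: fine (period 4) on the
# left half, coarse (period 16) on the right half.
cols = np.arange(64)
row = np.concatenate([np.cos(2 * np.pi * cols / 4),
                      np.cos(2 * np.pi * cols / 16)])
img = np.tile(row, (64, 1))
k = gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0)
energy = texture_energy(img, k)
```

The kernel tuned to the fine texture produces high energy on the left half and near-zero energy on the right, even though the two regions can share the same mean gray level, which is exactly the weak-RMGD situation the abstract targets.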
Maximum entropy models of ecosystem functioning
NASA Astrophysics Data System (ADS)
Bertram, Jason
2014-12-01
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
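The MaxEnt algorithm itself is simple to sketch for a single discrete constraint: maximizing entropy subject to a fixed mean yields exponential-family probabilities p_i proportional to exp(-lam * v_i), with the multiplier fixed by the constraint. The bisection solver below, applied to Jaynes' dice example, is an illustrative implementation:

```python
import math

def maxent_dist(values, mean_target, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over `values` subject to a
    fixed mean: p_i ∝ exp(-lam * v_i). The multiplier lam is found
    by bisection; mean_for(lam) decreases monotonically in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_target:
            lo = mid      # mean too high: need a larger multiplier
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' dice: faces 1..6 constrained to an average of 4.5
p = maxent_dist([1, 2, 3, 4, 5, 6], 4.5)
```

In the ecological setting, `values` would be organism-level traits and the constraints community-level averages, with the resulting distribution read as sample frequencies.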
Using computational models to relate structural and functional brain connectivity
Hlinka, Jaroslav; Coombes, Stephen
2012-01-01
Modern imaging methods allow a non-invasive assessment of both structural and functional brain connectivity. This has led to the identification of disease-related alterations affecting functional connectivity. The mechanism of how such alterations in functional connectivity arise in a structured network of interacting neural populations is as yet poorly understood. Here we use a modeling approach to explore the way in which this can arise and to highlight the important role that local population dynamics can have in shaping emergent spatial functional connectivity patterns. The local dynamics for a neural population is taken to be of the Wilson–Cowan type, whilst the structural connectivity patterns used, describing long-range anatomical connections, cover both realistic scenarios (from the CoComac database) and idealized ones that allow for more detailed theoretical study. We have calculated graph-theoretic measures of functional network topology from numerical simulations of model networks. The effect of the form of local dynamics on the observed network state is quantified by examining the correlation between structural and functional connectivity. We document a profound and systematic dependence of the simulated functional connectivity patterns on the parameters controlling the dynamics. Importantly, we show that a weakly coupled oscillator theory explaining these correlations and their variation across parameter space can be developed. This theoretical development provides a novel way to characterize the mechanisms for the breakdown of functional connectivity in diseases through changes in local dynamics. PMID:22805059
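The pipeline described above, structural connectivity driving Wilson–Cowan-style local rate dynamics, with functional connectivity read off as correlations between simulated time series, can be sketched as follows. The network size, coupling strengths and the random structural matrix are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 8, 4000, 0.05

# Symmetric structural connectivity with random (hypothetical) weights
S = rng.random((N, N)); S = (S + S.T) / 2; np.fill_diagonal(S, 0)

f = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoidal firing-rate function
E = rng.random(N)                         # excitatory population activity
X = np.empty((T, N))
for t in range(T):
    drive = -2.0 + 1.5 * E + 0.8 * (S @ E) + 0.3 * rng.standard_normal(N)
    E = E + dt * (-E + f(drive))          # Wilson-Cowan-style rate update
    X[t] = E

FC = np.corrcoef(X[T // 2:].T)            # "functional connectivity"
iu = np.triu_indices(N, 1)
sc_fc_corr = np.corrcoef(S[iu], FC[iu])[0, 1]
print(round(sc_fc_corr, 3))
```

The final scalar is the structure–function correlation used in the abstract to quantify how local dynamics shape the emergent functional network.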
Agents: An approach for dynamic process modelling
NASA Astrophysics Data System (ADS)
Grohmann, Axel; Kopetzky, Roland; Lurk, Alexander
1999-03-01
With the growing amount of distributed and heterogeneous information and services, conventional information systems have come to their limits. This gave rise to the development of a Multi-Agent System (the "Logical Client") which can be used in complex information systems as well as in other advanced software systems. Computer agents are proactive, reactive and social. They form a community of independent software components that can communicate and co-operate in order to accomplish complex tasks. Thus the agent-oriented paradigm provides a new and powerful approach to programming distributed systems. The communication framework developed is based on standards like CORBA, KQML and KIF. It provides an embedded rule-based system to find adequate reactions to incoming messages. The macro-architecture of the Logical Client consists of independent agents and uses artificial intelligence to cope with complex patterns of communication and actions. A set of system agents is also provided, including the Strategy Service as a core component for modelling processes at runtime, the Computer Supported Cooperative Work (CSCW) Component for supporting remote co-operation between human users and the Repository for managing and hiding the file-based data flow in heterogeneous networks. This architecture seems to be capable of managing complexity in information systems. It is also being implemented in a complex simulation system that monitors and simulates the environmental radioactivity in the German state of Baden-Württemberg.
Gene function hypotheses for the Campylobacter jejuni glycome generated by a logic-based approach.
Sternberg, Michael J E; Tamaddoni-Nezhad, Alireza; Lesk, Victor I; Kay, Emily; Hitchen, Paul G; Cootes, Adrian; van Alphen, Lieke B; Lamoureux, Marc P; Jarrell, Harold C; Rawlings, Christopher J; Soo, Evelyn C; Szymanski, Christine M; Dell, Anne; Wren, Brendan W; Muggleton, Stephen H
2013-01-01
Increasingly, experimental data on biological systems are obtained from several sources and computational approaches are required to integrate this information and derive models for the function of the system. Here, we demonstrate the power of a logic-based machine learning approach to propose hypotheses for gene function integrating information from two diverse experimental approaches. Specifically, we use inductive logic programming that automatically proposes hypotheses explaining the empirical data with respect to logically encoded background knowledge. We study the capsular polysaccharide biosynthetic pathway of the major human gastrointestinal pathogen Campylobacter jejuni. We consider several key steps in the formation of capsular polysaccharide consisting of 15 genes of which 8 have assigned function, and we explore the extent to which functions can be hypothesised for the remaining 7. Two sources of experimental data provide the information for learning: the results of knockout experiments on the genes involved in capsule formation and the absence/presence of capsule genes in a multitude of strains of different serotypes. The machine learning uses the pathway structure as background knowledge. We propose assignments of specific genes to five previously unassigned reaction steps. For four of these steps, there was an unambiguous optimal assignment of gene to reaction, and to the fifth, there were three candidate genes. Several of these assignments were consistent with additional experimental results. We therefore show that the logic-based methodology provides a robust strategy to integrate results from different experimental approaches and propose hypotheses for the behaviour of a biological system.
High temporal resolution functional MRI with partial separability model.
Ngo, Giang-Chau; Holtrop, Joseph L; Fu, Maojing; Lam, Fan; Sutton, Bradley P
2015-01-01
Even though the hemodynamic response is a slow phenomenon, high temporal resolution in functional MRI (fMRI) can enable better differentiation between the signal of interest and physiological noise or increase the statistical power of functional studies. To increase the temporal resolution, several methods have been developed to decrease the repetition time, TR, such as simultaneous multi-slice imaging and MR encephalography approaches. In this work, a method using a fast acquisition and a partial separability model is presented to achieve a multi-slice fMRI protocol at a temporal resolution of 75 ms. The method is demonstrated on a visual block task. PMID:26738022
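The partial separability idea, treating the space–time fMRI signal as a low-rank product of spatial maps and a few temporal basis functions, can be illustrated with a truncated SVD on synthetic data. Dimensions, frequencies and noise level below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_frames, rank = 200, 300, 2

# Partial separability assumption: the space-time matrix is low rank,
# X = U @ V, with spatial maps U and temporal basis functions V.
U = rng.standard_normal((n_vox, rank))
t = np.linspace(0, 30, n_frames)
V = np.vstack([np.sin(2 * np.pi * 0.1 * t), np.cos(2 * np.pi * 0.05 * t)])
X = U @ V + 0.01 * rng.standard_normal((n_vox, n_frames))

# Truncated SVD = reconstruction from a few temporal basis functions
u, s, vt = np.linalg.svd(X, full_matrices=False)
X_lr = u[:, :rank] * s[:rank] @ vt[:rank]

rel_err = np.linalg.norm(X - X_lr) / np.linalg.norm(X)
print(round(rel_err, 3))
```

In the actual acquisition scheme the temporal basis is estimated from rapidly sampled navigator data, so each frame only needs enough samples to fit the low-rank coefficients; the SVD here simply demonstrates why so few degrees of freedom suffice.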
A novel approach to modeling spacecraft spectral reflectance
NASA Astrophysics Data System (ADS)
Willison, Alexander; Bédard, Donald
2016-10-01
Simulated spectrometric observations of unresolved resident space objects are required for the interpretation of quantities measured by optical telescopes. This allows for their characterization as part of regular space surveillance activity. A peer-reviewed spacecraft reflectance model is necessary to help improve the understanding of characterization measurements. With this objective in mind, a novel approach to model spacecraft spectral reflectance as an overall spectral bidirectional reflectance distribution function (sBRDF) is presented. A spacecraft's overall sBRDF is determined using its triangular-faceted computer-aided design (CAD) model and the empirical sBRDF of its homogeneous materials. The CAD model is used to determine the proportional contribution of each homogeneous material to the overall reflectance. Each empirical sBRDF is contained in look-up tables developed from measurements made over a range of illumination and reflection geometries using simple interpolation and extrapolation techniques. A demonstration of the spacecraft reflectance model is provided through simulation of an optical ground truth characterization using the Canadian Advanced Nanospace eXperiment-1 Engineering Model nanosatellite as the subject. Validation of the reflectance model is achieved through a qualitative comparison of simulated and measured quantities.
A secured e-tendering modeling using misuse case approach
NASA Astrophysics Data System (ADS)
Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman
2016-08-01
Major risk factors relating to electronic transactions may lead to destructive impacts on trust and transparency in the process of tendering. Currently, electronic tendering (e-tendering) systems still remain uncertain in issues relating to legal and security compliance and, most importantly, have an unclear security framework. In particular, the available systems fail to address integrity, confidentiality, authentication, and non-repudiation in e-tendering requirements. Thus, one of the challenges in developing an e-tendering system is to ensure that the system requirements include functions for a secure and trusted environment. Therefore, this paper aims to model a secured e-tendering system using the misuse case approach. The modeling process begins with identifying the e-tendering process, which is based on the Australian Standard Code of Tendering (AS 4120-1994). It is followed by identifying security threats and their countermeasures. Then, the e-tendering process was modelled using the misuse case approach. The model can be of use to e-tendering developers and also to other researchers or experts in the e-tendering domain.
Modeling continuous covariates with a "spike" at zero: Bivariate approaches.
Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi
2016-07-01
In epidemiology and clinical research, predictors often take value zero for a large amount of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part are assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation in combination with the bivariate approaches is proposed. PMID:27072783
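A minimal numerical sketch of the spike-at-zero idea: a binary "any exposure" indicator plus a function of the positive part, fit here by ordinary least squares on simulated data. The exposure distribution, the true coefficients and the single FP1 term log(x) are illustrative assumptions, not the FP-spike selection procedure itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical exposure with a spike at zero (e.g. cigarettes/day):
# 40% exact zeros, continuous gamma-distributed values otherwise
x = np.where(rng.random(n) < 0.4, 0.0, rng.gamma(2.0, 5.0, n))
z = (x > 0).astype(float)                 # binary "any exposure" indicator

# Spike-at-zero design: intercept + indicator + FP1 term log(x) on x > 0
fp = np.where(x > 0, np.log(x + 1e-12), 0.0)
X = np.column_stack([np.ones(n), z, fp])

# Simulated outcome: a jump at zero plus a log-linear dose effect
y = 1.0 + 0.8 * z + 0.5 * fp + 0.1 * rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # should roughly recover [1.0, 0.8, 0.5]
```

The bivariate strategies of the abstract (Bi-Sep, Bi-D3, Bi-D1, Bi-Sub) extend this design to two such variables by choosing how the two binary indicators enter jointly.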
A variational free-energy functional approach to the Schrödinger-Poisson theory
NASA Astrophysics Data System (ADS)
Solis, Francisco J.; Jadhao, Vikram; Mitra, Kaushik; Olvera de La Cruz, Monica
2015-03-01
In the numerical simulation of model electronic device systems, where electrons are typically under confinement, a key obstacle is the need to iteratively solve the coupled Schrödinger-Poisson (SP) equation in order to obtain the electronic potential. We show that it is possible to bypass this obstacle by adopting a variational approach and obtaining the solution of the SP equation by minimizing a functional. We construct the required functional and establish some of its properties. We apply this formulation to the case of narrow-channel quantum wells, where the local density approximation yields accurate results.
A Dynamic Density Functional Theory Approach to Diffusion in White Dwarfs and Neutron Star Envelopes
NASA Astrophysics Data System (ADS)
Diaw, A.; Murillo, M. S.
2016-09-01
We develop a multicomponent hydrodynamic model based on moments of the Born-Bogolyubov-Green-Kirkwood-Yvon hierarchy equations for physical conditions relevant to astrophysical plasmas. These equations incorporate strong correlations through a density functional theory closure, while transport enters through a relaxation approximation. This approach enables the introduction of Coulomb coupling correction terms into the standard Burgers equations. The diffusive currents for these strongly coupled plasmas are self-consistently derived. The settling of impurities and its impact on cooling can be greatly affected by strong Coulomb coupling, which we show can be quantified using the direct correlation function.
Modelling the ecological niche from functional traits
Kearney, Michael; Simpson, Stephen J.; Raubenheimer, David; Helmuth, Brian
2010-01-01
The niche concept is central to ecology but is often depicted descriptively through observing associations between organisms and habitats. Here, we argue for the importance of mechanistically modelling niches based on functional traits of organisms and explore the possibilities for achieving this through the integration of three theoretical frameworks: biophysical ecology (BE), the geometric framework for nutrition (GF) and dynamic energy budget (DEB) models. These three frameworks are fundamentally based on the conservation laws of thermodynamics, describing energy and mass balance at the level of the individual and capturing the prodigious predictive power of the concepts of ‘homeostasis’ and ‘evolutionary fitness’. BE and the GF provide mechanistic multi-dimensional depictions of climatic and nutritional niches, respectively, providing a foundation for linking organismal traits (morphology, physiology, behaviour) with habitat characteristics. In turn, they provide driving inputs and cost functions for mass/energy allocation within the individual as determined by DEB models. We show how integration of the three frameworks permits calculation of activity constraints, vital rates (survival, development, growth, reproduction) and ultimately population growth rates and species distributions. When integrated with contemporary niche theory, functional trait niche models hold great promise for tackling major questions in ecology and evolutionary biology. PMID:20921046
Ryll, A; Bucher, J; Bonin, A; Bongard, S; Gonçalves, E; Saez-Rodriguez, J; Niklas, J; Klamt, S
2014-10-01
Systems biology has to increasingly cope with large- and multi-scale biological systems. Many successful in silico representations and simulations of various cellular modules proved mathematical modelling to be an important tool in gaining a solid understanding of biological phenomena. However, models spanning different functional layers (e.g. metabolism, signalling and gene regulation) are still scarce. Consequently, model integration methods capable of fusing different types of biological networks and various model formalisms become a key methodology to increase the scope of cellular processes covered by mathematical models. Here we propose a new integration approach to couple logical models of signalling and/or gene-regulatory networks with kinetic models of metabolic processes. The procedure yields an integrated dynamic model of both layers relying on differential equations. The feasibility of the approach is shown in an illustrative case study integrating a kinetic model of central metabolic pathways in hepatocytes with a Boolean logical network depicting the hormonally induced signal transduction and gene regulation events involved. In silico simulations demonstrate the integrated model to qualitatively describe the physiological switch-like behaviour of hepatocytes in response to nutritionally regulated changes in extracellular glucagon and insulin levels. A simulated failure mode scenario addressing insulin resistance furthermore illustrates the pharmacological potential of a model covering interactions between signalling, gene regulation and metabolism. PMID:25063553
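The coupling of a logical signalling layer to a kinetic metabolic layer can be caricatured as follows: a one-rule Boolean network (insulin present → transporter active) switches a parameter of a Michaelis-Menten uptake ODE. The species, rule and parameter values are invented for illustration and are far simpler than the hepatocyte model of the abstract:

```python
def simulate(insulin_high, t_end=50.0, dt=0.01):
    """Toy hybrid model: a Boolean signalling layer (insulin present or
    absent) switches the Vmax of a Michaelis-Menten glucose-uptake ODE."""
    # Logical layer: insulin -> GLUT4 active (a one-rule Boolean network)
    glut4_active = bool(insulin_high)
    vmax = 2.0 if glut4_active else 0.2   # illustrative kinetic parameters
    km = 5.0
    g = 10.0                              # extracellular glucose (a.u.)
    for _ in range(int(t_end / dt)):
        g += dt * (-vmax * g / (km + g))  # forward-Euler integration
    return g

print(simulate(True), simulate(False))   # high insulin depletes glucose faster
```

The published approach generalises this pattern: the logical layer is evaluated to steady state, its outputs are mapped onto kinetic parameters, and the combined system is integrated as one set of differential equations.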
Lightning Modelling: From 3D to Circuit Approach
NASA Astrophysics Data System (ADS)
Moussa, H.; Abdi, M.; Issac, F.; Prost, D.
2012-05-01
The topic of this study is the electromagnetic environment and electromagnetic interference (EMI) effects, specifically the modelling of lightning indirect effects [1] on aircraft electrical systems present on remote and highly exposed equipment, such as the nose landing gear (NLG) and nacelle, through a circuit approach. The main goal of the presented work, funded by the French national project PREFACE, is to propose a simple equivalent electrical circuit to represent a geometrical structure, taking into account the mutual inductances, self-inductances, and resistances, which play a fundamental role in the lightning current distribution. This model is then intended to be coupled to a functional one describing a power-train chain composed of a converter, a shielded power harness, and a motor or a set of resistors used as a load for the converter. The novelty here is to provide a qualitative pre-sizing approach that allows integration options to be explored in the pre-design phases. The tool is intended to offer a user-friendly way to reply rapidly to calls for tender while taking lightning constraints into account. Two cases are analysed: first, an NLG composed of tubular pieces that can be easily approximated by equivalent straight cylindrical conductors, so that the passive R, L, M elements of the structure can be extracted through analytical engineering formulas such as those implemented in the partial element equivalent circuit (PEEC) [2] technique; second, the same approach is intended to be applied to an electrical de-icing nacelle sub-system.
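The kind of analytical engineering formula used to extract PEEC-style elements can be illustrated with the classic partial self-inductance of a straight cylindrical conductor, L = μ₀l/(2π)(ln(2l/r) − 3/4). The geometry below is a hypothetical strut, not a dimension from the study:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def self_inductance_cylinder(length_m, radius_m):
    """Partial self-inductance of a straight cylindrical conductor
    (classic formula used in PEEC-type extractions):
    L = mu0 * l / (2*pi) * (ln(2*l/r) - 3/4)."""
    return MU0 * length_m / (2 * np.pi) * (
        np.log(2 * length_m / radius_m) - 0.75)

# Hypothetical landing-gear strut: a 1 m tube of 3 cm radius
L = self_inductance_cylinder(1.0, 0.03)
print(f"{L * 1e9:.0f} nH")  # on the order of hundreds of nH
```

A full PEEC extraction would add the mutual-inductance terms between every pair of conductor segments; this single-element formula is only the diagonal of that matrix.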
Executive function and food approach behavior in middle childhood
Groppe, Karoline; Elsner, Birgit
2014-01-01
Executive function (EF) has long been considered to be a unitary, domain-general cognitive ability. However, recent research suggests differentiating “hot” affective and “cool” cognitive aspects of EF. Yet, findings regarding this two-factor construct are still inconsistent. In particular, the development of this factor structure remains unclear and data on school-aged children are lacking. Furthermore, studies linking EF and overweight or obesity suggest that EF contributes to the regulation of eating behavior. So far, however, the links between EF and eating behavior have rarely been investigated in children and non-clinical populations. First, we examined whether EF can be divided into hot and cool factors or whether they actually correspond to a unitary construct in middle childhood. Second, we examined how hot and cool EF are associated with different eating styles that put children at risk of becoming overweight during development. Hot and cool EF were assessed experimentally in a non-clinical population of 1657 elementary-school children (aged 6–11 years). The “food approach” behavior was rated mainly via parent questionnaires. Findings indicate that hot EF is distinguishable from cool EF. However, only cool EF seems to represent a coherent functional entity, whereas hot EF does not seem to be a homogeneous construct. This was true for a younger and an older subgroup of children. Furthermore, different EF components were correlated with eating styles, such as responsiveness to food, desire to drink, and restrained eating in girls but not in boys. This shows that lower levels of EF are not only seen in clinical populations of obese patients but are already associated with food approach styles in a normal population of elementary school-aged girls. Although the direction of effect still has to be clarified, results point to the possibility that EF constitutes a risk factor for eating styles contributing to the development of overweight in the long run.
Polymicrobial Multi-functional Approach for Enhancement of Crop Productivity.
Reddy, Chilekampalli A; Saravanan, Ramu S
2013-01-01
There is an increasing global need for enhancing the food production to meet the needs of the fast-growing human population. Traditional approach to increasing agricultural productivity through high inputs of chemical nitrogen and phosphate fertilizers and pesticides is not sustainable because of high costs and concerns about global warming, environmental pollution, and safety concerns. Therefore, the use of naturally occurring soil microbes for increasing productivity of food crops is an attractive eco-friendly, cost-effective, and sustainable alternative to the use of chemical fertilizers and pesticides. There is a vast body of published literature on microbial symbiotic and nonsymbiotic nitrogen fixation, multiple beneficial mechanisms used by plant growth-promoting rhizobacteria (PGPR), the nature and significance of mycorrhiza-plant symbiosis, and the growing technology on production of efficacious microbial inoculants. These areas are briefly reviewed here. The construction of an inoculant with a consortium of microbes with multiple beneficial functions such as N2 fixation, biocontrol, phosphate solubilization, and other plant growth-promoting properties is a positive new development in this area in that a single inoculant can be used effectively for increasing the productivity of a broad spectrum of crops including legumes, cereals, vegetables, and grasses. Such a polymicrobial inoculant containing several microorganisms for each major function involved in promoting the plant growth and productivity gives it greater stability and wider applications for a range of major crops. Intensifying research in this area leading to further advances in our understanding of biochemical/molecular mechanisms involved in plant-microbe-soil interactions coupled with rapid advances in the genomics-proteomics of beneficial microbes should lead to the design and development of inoculants with greater efficacy for increasing the productivity of a wide range of crops.
Lithium battery aging model based on Dakin's degradation approach
NASA Astrophysics Data System (ADS)
Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel
2016-09-01
This paper proposes and validates a calendar and power cycling aging model for two different lithium battery technologies. The model development is based on data from the previous SIMCAL and SIMSTOCK projects, in which the effect of the battery state of charge, temperature and current magnitude on aging was studied on a large panel of different battery chemistries. In this work, the data are analyzed using Dakin's degradation approach: the logarithms of the battery capacity fade and of the resistance increase evolve linearly with aging. The slopes identified from these straight lines correspond to battery aging rates. Thus, an expression for the battery aging rate as a function of the aging factors was deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its expansion using Taylor series was consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of the current magnitude and temperature on aging was simulated. Interestingly, the aging rate increases strongly with decreasing temperature in the range −5 °C to 25 °C and with increasing temperature in the range 25 °C to 60 °C.
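Dakin's approach as described, the log of a degradation measure evolving linearly in time with an Eyring/Arrhenius-type rate, can be sketched on synthetic data. The pre-factor A and activation energy Ea are illustrative values, not parameters fitted in the paper:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def aging_rate(T_kelvin, A=1e6, Ea=50e3):
    """Eyring/Arrhenius-type aging rate (A and Ea are illustrative)."""
    return A * np.exp(-Ea / (R * T_kelvin))

# Dakin's approach: log of the degradation measure is linear in time,
# so synthetic capacity-fade data is fit with a straight line on a log scale.
t = np.linspace(0, 1000, 50)                 # hours
k_true = aging_rate(298.15)                  # rate at 25 degC
fade = np.exp(np.log(0.02) + k_true * t)     # log(fade) = log(f0) + k*t

slope, intercept = np.polyfit(t, np.log(fade), 1)
print(slope, k_true)  # the fitted slope recovers the aging rate
```

Repeating the fit across temperatures and currents gives the family of slopes from which the Eyring-law dependence of the rate is identified.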
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the
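One quantity mentioned above, the intraclass correlation from a model with random subject effects, can be estimated on simulated repeated-measures data with the classic one-way ANOVA estimator, a deliberate simplification of the full LME machinery. Sample sizes and variance components are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_obs = 40, 10
sigma_b, sigma_w = 1.0, 0.5   # between-subject and residual SDs

# Simulated repeated measures: y_ij = subject effect + noise
subj_eff = sigma_b * rng.standard_normal(n_subj)
y = subj_eff[:, None] + sigma_w * rng.standard_normal((n_subj, n_obs))

# One-way random-effects ICC(1) via ANOVA mean squares
grand = y.mean()
msb = n_obs * np.sum((y.mean(axis=1) - grand) ** 2) / (n_subj - 1)
msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_subj * (n_obs - 1))
icc = (msb - msw) / (msb + (n_obs - 1) * msw)
print(round(icc, 2))  # true value is sigma_b^2/(sigma_b^2+sigma_w^2) = 0.8
```

An LME fit (e.g. restricted maximum likelihood on the variance components) generalises this estimator to unbalanced designs, crossed random effects, and confounding fixed effects, which is the case the abstract emphasises.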
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
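The core FFM computation, propagating failure effects along directed edges from failure modes to observation points, reduces to graph reachability. The toy component graph below is invented for illustration and is not from the AGSM models:

```python
from collections import deque

# Hypothetical component-level FFM: failure effects propagate along
# directed edges from failure modes toward observation points.
edges = {
    "valve_stuck":   ["low_flow"],
    "pump_degraded": ["low_flow", "high_current"],
    "low_flow":      ["low_pressure_sensor"],
    "high_current":  ["current_sensor"],
}

def reachable_observations(failure_mode, observations):
    """Return the observation points a given failure mode can affect,
    via breadth-first traversal of the propagation graph."""
    seen, queue = {failure_mode}, deque([failure_mode])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen & observations)

obs = {"low_pressure_sensor", "current_sensor"}
print(reachable_observations("pump_degraded", obs))
# → ['current_sensor', 'low_pressure_sensor']
```

Diagnosis inverts this map: given the set of triggered observations, the candidate failure modes are those whose reachable sets are consistent with the evidence, which is why standardised component graphs yield standardised diagnostic performance.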
Predicting plants -modeling traits as a function of environment
NASA Astrophysics Data System (ADS)
Franklin, Oskar
2016-04-01
A central problem in understanding and modeling vegetation dynamics is how to represent the variation in plant properties and function across different environments. Addressing this problem there is a strong trend towards trait-based approaches, where vegetation properties are functions of the distributions of functional traits rather than of species. Recently there has been enormous progress in quantifying trait variability and its drivers and effects (Van Bodegom et al. 2012; Adler et al. 2014; Kunstler et al. 2015) based on wide ranging datasets on a small number of easily measured traits, such as specific leaf area (SLA), wood density and maximum plant height. However, plant function depends on many other traits and while the commonly measured trait data are valuable, they are not sufficient for driving predictive and mechanistic models of vegetation dynamics -especially under novel climate or management conditions. For this purpose we need a model to predict functional traits, also those not easily measured, and how they depend on the plants' environment. Here I present such a mechanistic model based on fitness concepts and focused on traits related to water and light limitation of trees, including: wood density, drought response, allocation to defense, and leaf traits. The model is able to predict observed patterns of variability in these traits in relation to growth and mortality, and their responses to a gradient of water limitation. The results demonstrate that it is possible to mechanistically predict plant traits as a function of the environment based on an eco-physiological model of plant fitness. References Adler, P.B., Salguero-Gómez, R., Compagnoni, A., Hsu, J.S., Ray-Mukherjee, J., Mbeau-Ache, C. et al. (2014). Functional traits explain variation in plant life-history strategies. Proc. Natl. Acad. Sci. U. S. A., 111, 740-745. Kunstler, G., Falster, D., Coomes, D.A., Hui, F., Kooyman, R.M., Laughlin, D.C. et al. (2015). Plant functional traits
Modeling of functionally graded piezoelectric ultrasonic transducers.
Rubio, Wilfredo Montealegre; Buiochi, Flávio; Adamowski, Julio Cezar; Silva, Emílio Carlos Nelli
2009-05-01
The application of functionally graded material (FGM) concept to piezoelectric transducers allows the design of composite transducers without interfaces, due to the continuous change of property values. Thus, large improvements can be achieved, as reduction of stress concentration, increasing of bonding strength, and bandwidth. This work proposes to design and to model FGM piezoelectric transducers and to compare their performance with non-FGM ones. Analytical and finite element (FE) modeling of FGM piezoelectric transducers radiating a plane pressure wave in fluid medium are developed and their results are compared. The ANSYS software is used for the FE modeling. The analytical model is based on FGM-equivalent acoustic transmission-line model, which is implemented using MATLAB software. Two cases are considered: (i) the transducer emits a pressure wave in water and it is composed of a graded piezoceramic disk, and backing and matching layers made of homogeneous materials; (ii) the transducer has no backing and matching layer; in this case, no external load is simulated. Time and frequency pressure responses are obtained through a transient analysis. The material properties are graded along thickness direction. Linear and exponential gradation functions are implemented to illustrate the influence of gradation on the transducer pressure response, electrical impedance, and resonance frequencies.
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital based code.
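The Lorentzian broadening that motivates the Green's-function reformulation can be demonstrated directly: evaluating G at E + iη turns sharp levels into Lorentzians of half-width η whose integrated weight still counts the states. The level energies and η below are arbitrary illustrative values:

```python
import numpy as np

# Sharp bound-state energies (hypothetical, in arbitrary units)
levels = np.array([-1.5, -0.4, -0.05])
eta = 0.01  # Lorentzian half-width chosen for numerical convenience

E = np.linspace(-2.0, 0.5, 20000)
# Density of states from the Green's function evaluated off the real axis:
# DOS(E) = -(1/pi) * Im sum_n 1/(E + i*eta - eps_n)
G = np.sum(1.0 / (E[:, None] + 1j * eta - levels[None, :]), axis=1)
dos = -np.imag(G) / np.pi

# Integrating the broadened DOS recovers the number of states
print(dos.sum() * (E[1] - E[0]))  # ≈ 3, up to small Lorentzian tails
```

Because the broadened DOS is smooth, a coarse energy grid suffices and no root-tracking of individual resonances is needed, which is the source of the reported speed-up.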
A Functional Genomic Approach to Chlorinated Ethenes Bioremediation
NASA Astrophysics Data System (ADS)
Lee, P. K.; Brodie, E. L.; MacBeth, T. W.; Deeb, R. A.; Sorenson, K. S.; Andersen, G. L.; Alvarez-Cohen, L.
2007-12-01
With the recent advances in genomic sciences, a knowledge-based approach can now be taken to optimize the bioremediation of trichloroethene (TCE). During the bioremediation of a heterogeneous subsurface, it is vital to identify and quantify the functionally important microorganisms present, to characterize the microbial community, and to measure their physiological activity. In our field experiments, quantitative PCR (qPCR) was coupled with reverse-transcription (RT) to analyze both copy numbers and transcripts expressed by the 16S rRNA gene and three reductive dehalogenase (RDase) genes as biomarkers of Dehalococcoides spp. in the groundwater of a TCE-DNAPL site at Ft. Lewis (WA) that was serially subjected to biostimulation and bioaugmentation. The Dehalococcoides genus was targeted because its members are the only known organisms that can completely dechlorinate TCE to the innocuous product ethene. Biomarker quantification revealed an overall increase of more than three orders of magnitude in the total Dehalococcoides population, and quantification of the more labile and stringently regulated mRNAs confirmed that Dehalococcoides spp. were active. In parallel with our field experiments, laboratory studies were conducted to explore the physiology of Dehalococcoides isolates in order to develop relevant biomarkers that are indicative of the metabolic state of cells. Recently, we verified the function of the nitrogenase operon in Dehalococcoides sp. strain 195, and nitrogenase-encoding genes are ideal biomarker targets to assess cellular nitrogen requirement. To characterize the microbial community, we applied a high-density phylogenetic microarray (16S PhyloChip) that simultaneously monitors over 8,700 unique taxa to track the bacterial and archaeal populations through different phases of treatment. As a measure of species richness, 1,300 to 1,520 taxa were detected in groundwater samples extracted during different stages of treatment as well as in the bioaugmentation culture. We
Versatile approach for the fabrication of functional wrinkled polymer surfaces.
Palacios-Cuesta, Marta; Liras, Marta; del Campo, Adolfo; García, Olga; Rodríguez-Hernández, Juan
2014-11-11
A simple and versatile approach to obtaining patterned surfaces via wrinkle formation with variable dimensions and functionality is described. The method consists of the simultaneous heating and irradiation with UV light of a photosensitive monomer solution confined between two substrates with variable spacer thicknesses. Under these conditions, the system is photo-cross-linked, producing a rapid volume contraction while capillary forces attempt to maintain the contact between the monomer mixture and the cover. As a result of these two interacting forces, surface wrinkles were formed. Several parameters play a key role in the formation and final characteristics (amplitude and period) of the wrinkles generated, including the formulation of the photosensitive solution (e.g., the composition of the monomer mixture) and preparation conditions (e.g., temperature employed, irradiation time, and film thickness). Finally, in addition, the possibility of modifying the surface chemical composition of these wrinkled surfaces was investigated. For this purpose, either hydrophilic or hydrophobic comonomers were included in the photosensitive mixture. The resulting surface chemical composition could be finely tuned as was demonstrated by significant variations in the wettability of the structured surfaces, between 56° and 104°, when hydrophilic and hydrophobic monomers were incorporated, respectively. PMID:25316583
A simplified approach to calculate atomic partition functions in plasmas
D'Ammando, Giuliano; Colonna, Gianpiero
2013-03-15
A simplified method to calculate the electronic partition functions and the corresponding thermodynamic properties of atomic species is presented and applied to C(I) up to C(VI) ions. The method consists in reducing the complex structure of an atom to three lumped levels. The ground level of the lumped model describes the ground term of the real atom, while the second lumped level represents the low lying states and the last one groups all the other atomic levels. It is also shown that for the purpose of thermodynamic function calculation, the energy and the statistical weight of the upper lumped level, describing high-lying excited atomic states, can be satisfactorily approximated by an analytic hydrogenlike formula. The results of the simplified method are in good agreement with those obtained by direct summation over a complete set (i.e., including all possible terms and configurations below a given cutoff energy) of atomic energy levels. The method can be generalized to include more lumped levels in order to improve the accuracy.
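The lumped-level reduction described above can be sketched directly: the electronic partition function collapses to a three-term Boltzmann sum over the lumped levels. The weights and energies below are illustrative placeholders, not the paper's fitted values for the carbon ions:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def lumped_partition_function(T, levels):
    """Electronic partition function of the lumped model:
    Z(T) = sum_i g_i * exp(-E_i / kT), where `levels` is a list of
    (statistical weight g_i, energy E_i in eV above the ground term)."""
    return sum(g * math.exp(-E / (K_B * T)) for g, E in levels)

# Illustrative three-level reduction (made-up numbers for the sketch):
carbon_like = [(9, 0.0),     # lumped ground term
               (5, 1.26),    # low-lying states
               (300, 9.5)]   # all high-lying states, hydrogen-like lump
```

At room temperature only the ground-term weight survives, while at plasma temperatures the upper lumps contribute, which is exactly the behaviour the lumped model is meant to capture cheaply.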
ERIC Educational Resources Information Center
Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.
2011-01-01
Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…
Structure, function, and behaviour of computational models in systems biology
2013-01-01
Background Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and is only described in natural language. Results We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Second, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.
Model Adequacy and the Macroevolution of Angiosperm Functional Traits.
Pennell, Matthew W; FitzJohn, Richard G; Cornwell, William K; Harmon, Luke J
2015-08-01
Making meaningful inferences from phylogenetic comparative data requires a meaningful model of trait evolution. It is thus important to determine whether the model is appropriate for the data and the question being addressed. One way to assess this is to ask whether the model provides a good statistical explanation for the variation in the data. To date, researchers have focused primarily on the explanatory power of a model relative to alternative models. Methods have been developed to assess the adequacy, or absolute explanatory power, of phylogenetic trait models, but these have been restricted to specific models or questions. Here we present a general statistical framework for assessing the adequacy of phylogenetic trait models. We use our approach to evaluate the statistical performance of commonly used trait models on 337 comparative data sets covering three key angiosperm functional traits. In general, the models we tested often provided poor statistical explanations for the evolution of these traits. This was true for many different groups and at many different scales. Whether such statistical inadequacy will qualitatively alter inferences drawn from comparative data sets will depend on the context. Regardless, assessing model adequacy can provide interesting biological insights: how and why a model fails to describe variation in a data set gives us clues about what evolutionary processes may have driven trait evolution across time. PMID:26655160
NASA Astrophysics Data System (ADS)
Han, S.; Hu, H.; Tian, F.
2007-12-01
Evapotranspiration, which occurs in the boundary layer between the land surface and the bottom atmospheric layer, plays an important role in both the water balance and the energy balance. Models based on the Penman hypothesis (1948) and the Budyko hypothesis (1974) estimate actual evapotranspiration from a land-surface-process perspective, while models based on the complementary hypothesis (Bouchet, 1963) do so from the atmospheric perspective. Penman-based models require detailed data on soil moisture or stomatal resistance (Crago and Crowley, 2006); Budyko models, e.g. Fu's equation (1981), estimate the mean annual evapotranspiration only; while models based on the complementary hypothesis, including the advection-aridity (AA) model (Brutsaert and Stricker, 1979) and the Granger model (1989, 1991, 1996), estimate actual evapotranspiration at various time scales using climate data only. The AA and Granger models use different definitions of wet-environment evaporation and potential evaporation, and comparative studies of the two have been conducted by several researchers (Xu and Singh, 2005; Liu et al., 2006; Crago and Crowley, 2006). In this paper we explore the uniformity of the two complementary models by dimensional analysis. A new index (the proportion of the radiation term in the Penman equation, termed the air humidity index) is proposed as a measure of the wetness of the evaporating surface via the wetness of the over-passing air, and a general functional form for actual evaporation is developed in which the evaporation ratio is expressed as a function of the air humidity index. The similarities and differences between the AA and Granger models are interpreted via this rational function approach, and a new power-function method is proposed. The theoretical analysis is confirmed by observational data under various climate conditions.
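The proposed air humidity index, the proportion of the radiation term in the Penman estimate, can be sketched as follows. Variable names and the test inputs are illustrative; all quantities must be in consistent evaporation units:

```python
def penman_terms(delta, gamma_psy, net_radiation_evap, drying_power):
    """Split the Penman potential evaporation into its radiation and
    aerodynamic terms.  delta: slope of the saturation vapour pressure
    curve; gamma_psy: psychrometric constant."""
    rad = delta / (delta + gamma_psy) * net_radiation_evap
    aero = gamma_psy / (delta + gamma_psy) * drying_power
    return rad, aero

def air_humidity_index(delta, gamma_psy, net_radiation_evap, drying_power):
    """Proportion of the radiation term in the Penman estimate, used
    (per the abstract) as a proxy for the wetness of the surface."""
    rad, aero = penman_terms(delta, gamma_psy, net_radiation_evap, drying_power)
    return rad / (rad + aero)
```

The index lies between 0 and 1 and grows as the radiation term dominates the aerodynamic (drying-power) term.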
A multi-label approach using binary relevance and decision trees applied to functional genomics.
Tanaka, Erica Akemi; Nozawa, Sérgio Ricardo; Macedo, Alessandra Alaniz; Baranauskas, José Augusto
2015-04-01
Many classification problems, especially in the field of bioinformatics, are associated with more than one class, known as multi-label classification problems. In this study, we propose a new adaptation for the Binary Relevance algorithm taking into account possible relations among labels, focusing on the interpretability of the model, not only on its performance. Experiments were conducted to compare the performance of our approach against others commonly found in the literature and applied to functional genomic datasets. The experimental results show that our proposal has a performance comparable to that of other methods and that, at the same time, it provides an interpretable model from the multi-label problem.
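Plain Binary Relevance (without the label-relation adaptation the paper proposes) trains one independent binary classifier per label. A self-contained sketch with a depth-1 decision tree (stump) standing in for the decision-tree base learner:

```python
class Stump:
    """A depth-1 decision tree: split on one feature at one threshold.
    Stand-in for the decision trees used as base learners."""
    def fit(self, X, y):
        best = (None, None, sum(y) / len(y) >= 0.5, float('inf'))
        for j in range(len(X[0])):
            for t in sorted({row[j] for row in X}):
                for side in (True, False):
                    pred = [(row[j] <= t) == side for row in X]
                    err = sum(p != bool(lab) for p, lab in zip(pred, y))
                    if err < best[3]:
                        best = (j, t, side, err)
        self.j, self.t, self.side, _ = best
        return self

    def predict(self, X):
        if self.j is None:
            return [self.side for _ in X]
        return [(row[self.j] <= self.t) == self.side for row in X]

class BinaryRelevance:
    """One independent classifier per label; per-label predictions are
    concatenated.  (Relations among labels, the paper's extension, are
    deliberately not modelled in this plain sketch.)"""
    def fit(self, X, Y):
        n_labels = len(Y[0])
        self.models = [Stump().fit(X, [row[k] for row in Y])
                       for k in range(n_labels)]
        return self

    def predict(self, X):
        cols = [m.predict(X) for m in self.models]
        return [list(row) for row in zip(*cols)]
```

Because each label gets its own interpretable tree, the per-label decision rules can be read off directly, which is the interpretability angle the abstract emphasizes.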
Mathematical Models of Cardiac Pacemaking Function
NASA Astrophysics Data System (ADS)
Li, Pan; Lines, Glenn T.; Maleckar, Mary M.; Tveito, Aslak
2013-10-01
Over the past half century, there has been intense and fruitful interaction between experimental and computational investigations of cardiac function. This interaction has, for example, led to deep understanding of cardiac excitation-contraction coupling; how it works, as well as how it fails. However, many lines of inquiry remain unresolved, among them the initiation of each heartbeat. The sinoatrial node, a cluster of specialized pacemaking cells in the right atrium of the heart, spontaneously generates an electro-chemical wave that spreads through the atria and through the cardiac conduction system to the ventricles, initiating the contraction of cardiac muscle essential for pumping blood to the body. Despite the fundamental importance of this primary pacemaker, this process is still not fully understood, and the ionic mechanisms underlying cardiac pacemaking function are currently under heated debate. Several mathematical models of sinoatrial node cell membrane electrophysiology have been constructed based on different experimental data sets and hypotheses. As could be expected, these differing models offer diverse predictions about cardiac pacemaking activity. This paper aims to present the current state of debate over the origins of the pacemaking function of the sinoatrial node. Here, we will specifically review the state-of-the-art of cardiac pacemaker modeling, with a special emphasis on current discrepancies, limitations, and future challenges.
A general phenomenological model for work function
NASA Astrophysics Data System (ADS)
Brodie, I.; Chou, S. H.; Yuan, H.
2014-07-01
A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.
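One ingredient of the model above, the Fermi energy of the assumed two-dimensional surface electron gas, is easy to sketch as arithmetic. The lattice constant and the one-electron-per-site donation below are illustrative assumptions, not values from the paper:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def fermi_energy_2deg(n_s):
    """Fermi energy (in eV) of a free 2D electron gas of areal density
    n_s (electrons per m^2), with spin degeneracy 2:
    E_F = pi * hbar^2 * n_s / m_e."""
    return math.pi * HBAR ** 2 * n_s / M_E / EV

# Example: one donated electron per surface site on a ~3 Angstrom
# square surface lattice (hypothetical numbers for the sketch)
n_s_example = 1.0 / (3.0e-10) ** 2
```

The resulting Fermi energy of a few eV sits at the right scale to shift a surface work function appreciably, which is why the integer number of donated electrons per site is such a sensitive parameter in the model.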
Comparison of Thermal Modeling Approaches for Complex Measurement Equipment
NASA Astrophysics Data System (ADS)
Schalles, M.; Thewes, R.
2014-04-01
Thermal modeling is used for thermal investigation and optimization of sensors, instruments, and structures. Here, results depend on the chosen modeling approach, the complexity of the model, the quality of material data, and the information about the heat transport conditions of the object of investigation. Despite the widespread application, the advantages and limits of the modeling approaches are partially unknown. For comparison of different modeling approaches, a simplified and analytically describable demonstration object is used. This object is a steel rod at well-defined heat exchange conditions with the environment. For this, analytically describable models, equivalent electrical circuits, and simple and complex finite-element-analysis models are presented. Using the different approaches, static and dynamic simulations are performed and temperatures and temperature fields in the rod are estimated. The results of those calculations, comparisons with measurements, and identification of the sensitive points of the approaches are shown. General conclusions for thermal modeling of complex equipment are drawn.
Astrocytes, Synapses and Brain Function: A Computational Approach
NASA Astrophysics Data System (ADS)
Nadkarni, Suhita
2006-03-01
Modulation of synaptic reliability is one of the leading mechanisms involved in long-term potentiation (LTP) and long-term depression (LTD) and therefore has implications for information processing in the brain. A recently discovered mechanism for modulating synaptic reliability critically involves the recruitment of astrocytes - star-shaped cells that outnumber the neurons in most parts of the central nervous system. Astrocytes until recently were thought to be subordinate cells merely participating in supporting neuronal functions. New evidence, however, made available by advances in imaging technology has changed the way we envision the role of these cells in synaptic transmission and as modulators of neuronal excitability. We put forward a novel mathematical framework based on the biophysics of the bidirectional neuron-astrocyte interactions that quantitatively accounts for two distinct experimental manifestations of the recruitment of astrocytes in synaptic transmission: a) the transformation of a low-fidelity synapse into a high-fidelity synapse and b) enhanced postsynaptic spontaneous currents when astrocytes are activated. Such a framework is not only useful for modeling neuronal dynamics in a realistic environment but also provides a conceptual basis for interpreting experiments. Based on this modeling framework, we explore the role of astrocytes in neuronal network behavior such as synchrony and correlations and compare with experimental data from cultured networks.
Implicit electrostatic solvent model with continuous dielectric permittivity function.
Basilevsky, Mikhail V; Grigoriev, Fedor V; Nikitina, Ekaterina A; Leszczynski, Jerzy
2010-02-25
The modification of the electrostatic continuum solvent model considered in the present work is based on the exact solution of the Poisson equation, which can be constructed provided that the dielectric permittivity epsilon of the total solute and solvent system is an isotropic and continuous spatial function. This assumption allows one to formulate a numerically efficient and universal computational scheme that covers the important case of a variable epsilon function inherent to the solvent region. The obtained type of solution is unavailable for conventional dielectric continuum models such as the Onsager and Kirkwood models for spherical cavities and the polarizable continuum model (PCM) for solute cavities of general shape, which imply that epsilon is discontinuous on the boundary confining the excluded volume cavity of the solute particle. Test computations based on the present algorithm are performed for water and several nonaqueous solvents. They illustrate specific features of this approach, called the "smooth boundary continuum model" (SBCM), as compared to the PCM procedure, and suggest primary tentative results of its parametrization for different solvents. The calculation for the case of a binary solvent mixture with variable epsilon in the solvent space region demonstrates the applicability of this approach to a novel application field covered by the SBCM.
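The key numerical requirement above, a Poisson solve with a continuously varying permittivity, can be illustrated in one dimension with a standard finite-difference scheme that samples ε at the half-grid points. This is a generic sketch, not the SBCM implementation:

```python
def solve_poisson_1d(eps, rho, n=200):
    """Finite-difference solution of d/dx( eps(x) * d(phi)/dx ) = -rho(x)
    on [0, 1] with phi(0) = phi(1) = 0.  eps may vary smoothly in space,
    which is the point of a smooth-boundary continuum treatment."""
    h = 1.0 / (n + 1)
    x = [h * (i + 1) for i in range(n)]
    # permittivity evaluated at the half-integer (interface) points
    ep = [eps(h * (i + 0.5)) for i in range(n + 1)]
    a = [-ep[i] for i in range(n)]              # sub-diagonal
    b = [ep[i] + ep[i + 1] for i in range(n)]   # diagonal
    c = [-ep[i + 1] for i in range(n)]          # super-diagonal
    d = [rho(xi) * h * h for xi in x]
    # Thomas algorithm (tridiagonal solve)
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    phi = [0.0] * n
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return x, phi
```

With ε ≡ 1 and ρ ≡ 1 the exact solution is φ(x) = x(1 − x)/2, so the peak value 0.125 provides a direct check; a spatially varying ε(x) simply changes the sampled interface values.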
Mathematical Modelling Approach in Mathematics Education
ERIC Educational Resources Information Center
Arseven, Ayla
2015-01-01
The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "Modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…
Rival approaches to mathematical modelling in immunology
NASA Astrophysics Data System (ADS)
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
The Pleiades mass function: Models versus observations
NASA Astrophysics Data System (ADS)
Moraux, E.; Kroupa, P.; Bouvier, J.
2004-10-01
Two stellar-dynamical models of binary-rich embedded proto-Orion-Nebula-type clusters that evolve to Pleiades-like clusters are studied with an emphasis on comparing the stellar mass function with observational constraints. By the age of the Pleiades (about 100 Myr) both models show a similar degree of mass segregation, which also agrees with observational constraints. This indicates that the Pleiades is well relaxed and that it is suffering from severe amnesia. It is found that the initial mass function (IMF) must have been indistinguishable from the standard or Galactic-field IMF for stars with mass m ≲ 2 M⊙, provided the Pleiades precursor had a central density of about 10^4.8 stars/pc^3. A denser model with 10^5.8 stars/pc^3 also leads to reasonable agreement with observational constraints, but owing to the shorter relaxation time of the embedded cluster it evolves through energy equipartition to a mass-segregated condition just prior to residual-gas expulsion. This model consequently preferentially loses low-mass stars and brown dwarfs (BDs), but the effect is not very pronounced. The empirical data indicate that the Pleiades IMF may have been steeper than the Salpeter IMF for stars with m ≳ 2 M⊙.
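The "standard or Galactic-field IMF" referred to above is commonly written as a broken power law. A minimal sketch using the two Kroupa (2001) segments above the hydrogen-burning limit (normalization arbitrary):

```python
def kroupa_imf(m):
    """Standard (Kroupa 2001) stellar IMF, dN/dm in arbitrary units:
    a broken power law, made continuous across the break at 0.5 Msun."""
    if m < 0.08:
        return 0.0               # below the hydrogen-burning limit
    if m < 0.5:
        return m ** -1.3
    # prefactor 0.5^(-1.3 + 2.3) matches the segments at m = 0.5
    return 0.5 ** (-1.3 + 2.3) * m ** -2.3

def count_between(m_lo, m_hi, n=10000):
    """Relative number of stars between two masses: simple midpoint
    integration of dN/dm."""
    h = (m_hi - m_lo) / n
    return sum(kroupa_imf(m_lo + (k + 0.5) * h) for k in range(n)) * h
```

The steep high-mass slope means low-mass stars dominate by number, which is why cluster dynamical evolution that preferentially ejects low-mass members can alter the observed present-day mass function.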
Longitudinal functional magnetic resonance imaging in animal models.
Silva, Afonso C; Liu, Junjie V; Hirano, Yoshiyuki; Leoni, Renata F; Merkle, Hellmut; Mackel, Julie B; Zhang, Xian Feng; Nascimento, George C; Stefanovic, Bojana
2011-01-01
Functional magnetic resonance imaging (fMRI) has had an essential role in furthering our understanding of brain physiology and function. fMRI techniques are nowadays widely applied in neuroscience research, as well as in translational and clinical studies. The use of animal models in fMRI studies has been fundamental in helping elucidate the mechanisms of cerebral blood-flow regulation, and in the exploration of basic neuroscience questions, such as the mechanisms of perception, behavior, and cognition. Because animals are inherently non-compliant, most fMRI studies performed to date have required the use of anesthesia, which interferes with brain function and compromises interpretability and applicability of results to our understanding of human brain function. An alternative approach that eliminates the need for anesthesia involves training the animal to tolerate physical restraint during the data acquisition. In the present chapter, we review these two different approaches to obtaining fMRI data from animal models, with a specific focus on the acquisition of longitudinal data from the same subjects.
Vlah, Zvonimir; Seljak, Uroš; Baldauf, Tobias; McDonald, Patrick; Okumura, Teppei E-mail: seljak@physik.uzh.ch E-mail: teppei@ewha.ac.kr
2012-11-01
We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density-weighted velocity moment correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so-called finger-of-god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate the RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and line of sight, focusing on the lowest order powers of μ and multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting the power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.
A displaced-solvent functional analysis of model hydrophobic enclosures
Abel, Robert; Wang, Lingle; Friesner, Richard A.; Berne, B. J.
2010-01-01
Calculation of protein-ligand binding affinities continues to be a hotbed of research. Although many techniques for computing protein-ligand binding affinities have been introduced, ranging from computationally very expensive approaches, such as free energy perturbation (FEP) theory, to more approximate techniques, such as empirically derived scoring functions, which, although computationally efficient, lack a clear theoretical basis, there remains a pressing need for more robust approaches. A recently introduced technique, the displaced-solvent functional (DSF) method, was developed to bridge the gap between the high accuracy of FEP and the computational efficiency of empirically derived scoring functions. In order to develop a set of reference data to test the DSF theory for calculating absolute protein-ligand binding affinities, we have pursued FEP theory calculations of the binding free energies of a methane ligand with 13 different model hydrophobic enclosures of varying hydrophobicity. The binding free energies of the methane ligand with the various hydrophobic enclosures were then recomputed by DSF theory and compared with the FEP reference data. We find that the DSF theory, which relies on no empirically tuned parameters, shows excellent quantitative agreement with the FEP. We also explored the ability of buried solvent accessible surface area and buried molecular surface area models to describe the relevant physics, and find the buried molecular surface area model to offer superior performance over this dataset. PMID:21135914
Measuring neuronal branching patterns using model-based approach.
Luczak, Artur
2010-01-01
Neurons have complex branching systems which allow them to communicate with thousands of other neurons. Thus understanding neuronal geometry is clearly important for determining connectivity within the network and how this shapes neuronal function. One of the difficulties in uncovering relationships between neuronal shape and its function is the problem of quantifying complex neuronal geometry. Even by using multiple measures such as dendritic length, distribution of segments, direction of branches, etc., a description of three-dimensional neuronal embedding remains incomplete. To help alleviate this problem, here we propose a new measure, a shape diffusiveness index (SDI), to quantify spatial relations between branches at the local and global scale. It has been shown that the growth of neuronal trees can be modeled by using a diffusion-limited aggregation (DLA) process. By measuring "how easy" it is to reproduce the analyzed shape by using the DLA algorithm, one can measure how "diffusive" that shape is. Intuitively, "diffusiveness" measures how tree-like a given shape is. For example, shapes like an oak tree will have high values of SDI. This measure captures an important feature of dendritic tree geometry which is difficult to assess with other measures. This approach also presents a paradigm shift from well-defined deterministic measures to model-based measures, which estimate how well a model with specific properties can account for features of the analyzed shape. PMID:21079752
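The DLA growth process underlying the SDI can be sketched as a lattice simulation in which random walkers stick on first contact with the growing cluster. This is a toy version for illustration, not the authors' algorithm:

```python
import math
import random

def grow_dla(n_particles=60, radius=20, seed=1):
    """Toy diffusion-limited aggregation on a square lattice: walkers
    are released on a circle and stick when they land next to the
    cluster.  The SDI idea is to ask how easily such a process can
    reproduce a given branched shape."""
    random.seed(seed)
    cluster = {(0, 0)}
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while len(cluster) < n_particles:
        # launch a walker on the bounding circle
        ang = random.uniform(0, 2 * math.pi)
        x, y = int(radius * math.cos(ang)), int(radius * math.sin(ang))
        while True:
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
            if x * x + y * y > (2 * radius) ** 2:
                break  # wandered too far off: relaunch a new walker
            if any((x + sx, y + sy) in cluster for sx, sy in steps):
                cluster.add((x, y))  # stick next to the cluster
                break
    return cluster
```

The resulting aggregate is sparse and branched rather than compact; scoring how well such runs reproduce a target shape is the intuition behind the "diffusiveness" measure.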
Mathematical Modelling: A New Approach to Teaching Applied Mathematics.
ERIC Educational Resources Information Center
Burghes, D. N.; Borrie, M. S.
1979-01-01
Describes the advantages of mathematical modeling approach in teaching applied mathematics and gives many suggestions for suitable material which illustrates the links between real problems and mathematics. (GA)
Structure-function relationship of lapemis toxin: a synthetic approach.
Miller, R A; Tu, A T
1991-11-15
The synthetic approach to the structure-function relationship of lapemis toxin has been very useful in clarifying the important binding regions. To identify the neurotoxic binding domain(s) of lapemis toxin, several peptides were synthesized using the 9-fluorenylmethoxycarbonyl protocols. These peptides were based on the sequence of lapemis toxin, a 60-amino-acid, short-chain postsynaptic neurotoxin found in sea snake (Lapemis hardwickii) venom. The peptides were purified using high-performance liquid chromatography and sequenced to verify the correct synthesis, isolation, and purity. The synthetic peptide names and single-letter sequences were: peptide A1 (15-mer), CCNQQSSQPKTTTNC; peptide B1 (18-mer), CYKKTWSDHRGTRIERGC; peptide B2 (16-mer), YKKTWSDHRGTRIERG; peptide C1 (12-mer), CPQVKPGIKLEC; and peptide NS (20-mer), EACDFGHIKLMNPQRSTVWY. The peptide NS (nonsense peptide) sequence was arbitrarily determined and used as a control peptide. Biological activities of the synthetic peptides were determined by in vivo as well as by in vitro assay methods. For the in vivo assay, lethality was determined by intravenous injection in mice (Swiss Webster). For the in vitro assay, peptide binding to the Torpedo californica nicotinic acetylcholine receptor was determined. The peptides were found to be nontoxic at approximately 114 times the known LD50 of lapemis toxin. Binding studies with 125I-radiolabeled lapemis toxin and tyrosine-containing peptides indicated that lapemis toxin and peptide B1 bound the receptor, while the other peptides had no detectable binding. The central loop domain of lapemis toxin (peptide B1) plays a dominant role in the toxin's binding ability to the receptor. These results and the hydrophilicity analysis predict peptide B1 may serve as an antagonist or antigen to neutralize the neurotoxin effects in vivo.
de Almeida, Patrícia Maria Duarte
2006-02-01
Considering the loss of function of body structures and systems after a spinal cord injury, with its respective activity limitations and restrictions on social participation, the goals of the rehabilitation process are to achieve the maximal functional independence and quality of life allowed by the clinical lesion. This requires a period of rehabilitation with a rehabilitation team, including the physiotherapist, whose interventions depend on factors such as the degree of completeness or incompleteness of the lesion and the patient's clinical stage. The physiotherapy approach includes several procedures and techniques related either to a traditional model or to the recent perspective of neuronal regeneration. Following the traditional model, intervention in complete (A) and incomplete (B) lesions is based on compensatory functional rehabilitation using the non-affected muscles. In incomplete (C and D) lesions, motor re-education below the lesion, using key points to facilitate normal and selective movement patterns, is preferable. If, on the other hand, neuronal regeneration with corresponding functional improvement is possible, the goals of the physiotherapy approach are to maintain muscular trophism and improve the recruitment of motor units using intensive techniques. In both cases there is no scientific evidence to support the procedures; investigation is lacking and most of the research is methodologically poor.
Functionalized anatomical models for EM-neuron Interaction modeling
NASA Astrophysics Data System (ADS)
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2016-06-01
The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low-frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help clarify a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low-frequency exposure standards for the entire population under all exposure conditions.
A Unified Approach to Model-Based Planning and Execution
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)
2000-01-01
Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe an approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem-solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.
A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images
Hammoudi, Karim; Dornaika, Fadi
2011-01-01
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
NASA Astrophysics Data System (ADS)
Vuelban, E. M.; Dekker, P. R.
2013-09-01
In this work, a model of the propagation of radiation through several apertures and optical components, based on a transfer-function approach, is presented. This transfer-function formalism offers the possibility of studying various measurement scenarios involving different source geometries, distances, and varying complexities of the optics of the radiation thermometer. The impact of different types of source geometries and of variations in the source-thermometer distance is investigated using this model. Simulation results and experimental validation are presented.
Testing process predictions of models of risky choice: a quantitative model comparison approach
Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard
2013-01-01
This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
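The priority heuristic's search and stopping rules for gains can be sketched directly from its published description (Brandstätter et al., 2006): compare minimum gains first, then the probabilities of the minimum gains, then maximum gains, with aspiration levels of one tenth of the maximum gain and 0.1 on the probability scale. A minimal sketch, not the authors' experimental implementation:

```python
def priority_heuristic(g1, g2):
    """Choose between two gain gambles, each a list of (outcome, probability).
    Returns 0 for g1, 1 for g2. Reasons examined in order: minimum gain,
    probability of minimum gain, maximum gain."""
    max_out = max(o for g in (g1, g2) for o, _ in g)
    min1 = min(o for o, _ in g1)
    min2 = min(o for o, _ in g2)
    # Reason 1: minimum gains, aspiration level = 1/10 of the maximum gain
    if abs(min1 - min2) >= max_out / 10.0:
        return 0 if min1 > min2 else 1
    # Reason 2: probabilities of the minimum gains, aspiration level = 0.1
    pmin1 = sum(p for o, p in g1 if o == min1)
    pmin2 = sum(p for o, p in g2 if o == min2)
    if abs(pmin1 - pmin2) >= 0.1:
        return 0 if pmin1 < pmin2 else 1
    # Reason 3: maximum gains decide
    return 0 if max(o for o, _ in g1) > max(o for o, _ in g2) else 1
```

Search is reason-wise rather than gamble-wise, which is the direction-of-search prediction tested in Experiment 1.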
Evaluating survival model performance: a graphical approach.
Mandel, M; Galai, N; Simchen, E
2005-06-30
In the last decade, many statistics have been suggested to evaluate the performance of survival models. These statistics evaluate the overall performance of a model, ignoring possible variability in performance over time. Using an extension of measures used in binary regression, we propose a graphical method to depict the performance of a survival model over time. The method provides estimates of performance at specific time points and can be used as an informal test for detecting time-varying effects of covariates in the Cox model framework. The method is illustrated on real and simulated data using the Cox proportional hazards model and rank statistics.
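The core idea, performance evaluated at specific time points rather than overall, can be sketched with a time-dependent sensitivity/specificity computation. This is an illustrative simplification (it ignores censoring before t and is not the authors' exact estimator):

```python
import numpy as np

def performance_at(t, times, events, scores, threshold):
    """Sensitivity/specificity of a risk score at time t: subjects with an
    observed event by t are cases, subjects still at risk at t are controls.
    Censoring before t is ignored in this simplified sketch."""
    case = (times <= t) & events
    ctrl = times > t
    sens = float(np.mean(scores[case] > threshold)) if case.any() else float("nan")
    spec = float(np.mean(scores[ctrl] <= threshold)) if ctrl.any() else float("nan")
    return sens, spec
```

Plotting such point-in-time measures over a grid of t values gives a performance curve rather than a single summary number.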
A modeling approach for compounds affecting body composition.
Gennemark, Peter; Jansson-Löfmark, Rasmus; Hyberg, Gina; Wigstrand, Maria; Kakol-Palm, Dorota; Håkansson, Pernilla; Hovdal, Daniel; Brodin, Peter; Fritsch-Fredin, Maria; Antonsson, Madeleine; Ploj, Karolina; Gabrielsson, Johan
2013-12-01
Body composition and body mass are pivotal clinical endpoints in studies of welfare diseases. We present a combined effort of established and new mathematical models based on rigorous monitoring of energy intake (EI) and body mass in mice. Specifically, we parameterize a mechanistic turnover model based on the law of energy conservation coupled to a drug mechanism model. Key model variables are fat-free mass (FFM) and fat mass (FM), governed by EI and energy expenditure (EE). An empirical Forbes curve relating FFM to FM was derived experimentally for female C57BL/6 mice. The Forbes curve differs from a previously reported curve for male C57BL/6 mice, and we thoroughly analyse how the choice of Forbes curve impacts model predictions. The drug mechanism function acts on EI or EE, or both. Drug mechanism parameters (two to three parameters) and system parameters (up to six free parameters) could be estimated with good precision (coefficients of variation typically <20 % and not greater than 40 % in our analyses). Model simulations were done to predict the EE and FM change at different drug provocations in mice. In addition, we simulated body mass and FM changes at different drug provocations using a similar model for man. Surprisingly, model simulations indicate that an increase in EI (e.g. 10 %) was more efficient than an equal lowering of EI. Also, the relative change in body mass and FM is greater in man than in mouse at the same relative change in either EI or EE. We acknowledge that this assumes the same drug mechanism impact across the two species. A set of recommendations regarding the Forbes curve, vehicle control groups, dual action on EI and loss, and translational aspects are discussed. This quantitative approach significantly improves data interpretation, disease system understanding, safety assessment and translation across species.
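The energy-conservation backbone of such a turnover model can be sketched as a daily Euler update in which the energy imbalance is partitioned between FFM and FM via a Forbes-type rule. All parameter values below (energy densities, the linear EE model, the Forbes constant) are illustrative human-like stand-ins, not the paper's calibrated mouse parameters:

```python
RHO_FM, RHO_FFM = 39.5, 7.6   # MJ/kg, approximate energy densities of FM and FFM
C_FORBES = 10.4               # kg, Forbes constant (adult-human value, illustrative)

def simulate(fm0, ffm0, ei, days, gamma_fm=0.013, gamma_ffm=0.092, k0=2.0):
    """Euler-integrate daily FM/FFM changes from the energy balance
    rho_FFM*dFFM + rho_FM*dFM = EI - EE, with a linear EE model and a
    Forbes partition p = C/(C + FM) of the imbalance toward FFM."""
    fm, ffm = fm0, ffm0
    for _ in range(days):
        ee = k0 + gamma_fm * fm + gamma_ffm * ffm   # MJ/day
        p = C_FORBES / (C_FORBES + fm)              # fraction of imbalance to FFM
        imbalance = ei - ee                         # MJ/day
        ffm += p * imbalance / RHO_FFM
        fm += (1.0 - p) * imbalance / RHO_FM
    return fm, ffm
```

A drug mechanism acting on EI or EE would simply scale `ei` or the EE expression inside the loop.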
Model of local temperature changes in brain upon functional activation.
Collins, Christopher M; Smith, Michael B; Turner, Robert
2004-12-01
Experimental results for changes in brain temperature during functional activation show large variations. It is, therefore, desirable to develop a careful numerical model for such changes. Here, a three-dimensional model of temperature in the human head using the bioheat equation, which includes effects of metabolism, perfusion, and thermal conduction, is employed to examine potential temperature changes due to functional activation in brain. It is found that, depending on location in brain and corresponding baseline temperature relative to blood temperature, temperature may increase or decrease on activation and concomitant increases in perfusion and rate of metabolism. Changes in perfusion are generally seen to have a greater effect on temperature than are changes in metabolism, and hence active brain is predicted to approach blood temperature from its initial temperature. All calculated changes in temperature for reasonable physiological parameters have magnitudes <0.12 degrees C and are well within the range reported in recent experimental studies involving human subjects.
Electrostatics of a simple membrane model using Green's functions formalism.
von Kitzing, E; Soumpasis, D M
1996-01-01
The electrostatics of a simple membrane model picturing a lipid bilayer as a low dielectric constant slab immersed in a homogeneous medium of high dielectric constant (water) can be accurately computed using the exact Green's functions obtainable for this geometry. We present an extensive discussion of the analysis and numerical aspects of the problem and apply the formalism and algorithms developed to the computation of the energy profiles of a test charge (e.g., ion) across the bilayer and a molecular model of the acetylcholine receptor channel embedded in it. The Green's function approach is a very convenient tool for the computer simulation of ionic transport across membrane channels and other membrane problems where a good and computationally efficient first-order treatment of dielectric polarization effects is crucial. PMID:8842218
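The leading image-charge term behind such energy profiles can be sketched for a single planar interface (the full slab geometry sums an infinite series of images, which the paper's Green's function handles exactly). Signs follow the standard electrostatic image construction; for a charge in water (high ε) near a lipid slab (low ε) the self-energy is positive, i.e. a barrier:

```python
import numpy as np

EPS0 = 8.854e-12     # F/m, vacuum permittivity
E_CHG = 1.602e-19    # C, elementary charge

def image_self_energy(d, eps1=80.0, eps2=2.0, q=E_CHG):
    """Leading-order image-charge self-energy (J) of a point charge in medium
    eps1 at distance d (m) from a planar interface with medium eps2.
    W = q^2 * k / (16*pi*eps0*eps1*d), with k = (eps1 - eps2)/(eps1 + eps2)."""
    k = (eps1 - eps2) / (eps1 + eps2)
    return q**2 * k / (16.0 * np.pi * EPS0 * eps1 * d)
```

For eps1 > eps2 the energy is positive and decays as 1/d, so an ion approaching the bilayer from water climbs a dielectric barrier.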
Piecewise Linear Membership Function Generator-Divider Approach
NASA Technical Reports Server (NTRS)
Hart, Ron; Martinez, Gene; Yuan, Bo; Zrilic, Djuro; Ramirez, Jaime
1997-01-01
In this paper a simple, inexpensive, membership function circuit for fuzzy controllers is presented. The proposed circuit may be used to generate a general trapezoidal membership function. The slope and horizontal shift are fully programmable parameters.
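A software analogue of such a generator, a general trapezoidal membership function with programmable breakpoints, can be sketched as follows (breakpoint names are illustrative):

```python
def trapezoid(x, a, b, c, d):
    """General trapezoidal membership function: rises linearly on [a, b],
    equals 1 on [b, c], falls linearly on [c, d], and is 0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising slope, programmable via a and b
    return (d - x) / (d - c)       # falling slope, programmable via c and d
```

A horizontal shift s, the other programmable parameter mentioned in the abstract, is obtained by evaluating the function at x - s.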
A model-based multisensor data fusion knowledge management approach
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2014-06-01
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
Traffic flow forecasting: Comparison of modeling approaches
Smith, B.L.; Demetsky, M.J.
1997-08-01
The capability to forecast traffic volume in an operational setting has been identified as a critical need for intelligent transportation systems (ITS). In particular, traffic volume forecasts will support proactive, dynamic traffic control. However, previous attempts to develop traffic volume forecasting models have met with limited success. This research effort focused on developing traffic volume forecasting models for two sites on Northern Virginia's Capital Beltway. Four models were developed and tested for the freeway traffic flow forecasting problem, which is defined as estimating traffic flow 15 min into the future. They were the historical average, time-series, neural network, and nonparametric regression models. The nonparametric regression model significantly outperformed the other models. A Wilcoxon signed-rank test revealed that the nonparametric regression model experienced significantly lower errors than the other models. In addition, the nonparametric regression model was easy to implement, and proved to be portable, performing well at two distinct sites. Based on its success, research is ongoing to refine the nonparametric regression model and to extend it to produce multiple interval forecasts.
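Nonparametric regression in this setting is essentially a nearest-neighbours forecast on historical state vectors: find past periods whose recent volume pattern resembles the current one, and average what happened next. A minimal sketch (lag length and k are illustrative choices, not the study's calibrated settings):

```python
import numpy as np

def knn_forecast(history, current_state, k=3, lag=4):
    """Forecast the next traffic volume by averaging the successors of the k
    historical state vectors (lag consecutive volumes) closest to the current state."""
    history = np.asarray(history, dtype=float)
    X = np.array([history[i:i + lag] for i in range(len(history) - lag)])
    y = np.array([history[i + lag] for i in range(len(history) - lag)])
    dists = np.linalg.norm(X - np.asarray(current_state, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    return float(y[nearest].mean())
```

Because the method stores data rather than fitting parameters, moving it to a new site only requires that site's history, which is consistent with the portability the abstract reports.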
Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.
Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward
2015-04-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
The Person Approach: Concepts, Measurement Models, and Research Strategy
ERIC Educational Resources Information Center
Magnusson, David
2003-01-01
This chapter discusses the "person approach" to studying developmental processes by focusing on the distinction and complementarity between this holistic-interactionistic framework and what has become designated as the variable approach. Particular attention is given to measurement models for use in the person approach. The discussion on the…
Caustics in asymptotic Green Function transmission models
NASA Astrophysics Data System (ADS)
Roberts, R. A.
2000-05-01
Green Function-based beam transmission models are attractive due to their ability to explicitly handle transmission through complicated geometrical surfaces, such as flat-to-circular arc compound profiles. The beam model considered in this paper integrates the field generated by a point source positioned within a solid body over a radiating aperture surface (transducer face) in a fluid medium exterior to the solid body. In full generality, evaluation of the Green function at a point on the aperture surface requires an integration over the component surface (solid-water interface). For geometries of practical interest, this integration can be effectively evaluated by applying high-frequency asymptotic techniques (stationary phase analysis=ray theory). However, first-order asymptotic methods fail at focusing caustics, that is, when the component surface curvature focuses the field generated by the interior point source onto the aperture surface. Uniform asymptotic methods are available to treat such problems. However, implementation of uniform expansion methods in an algorithm applicable to arbitrarily curved component surfaces entails a complexity that outweighs algorithm utility. Past algorithms have therefore evaluated the Green function in these anomalous cases by performing an explicit numerical integration over the component surface. Work reported here hypothesizes that the singularity in the Green function amplitude from first-order analysis is an integrable singularity, and hence can be handled in the aperture surface integration through appropriate integration variable transformation. It is shown that an effective transformation of variables is provided by the ray coordinates which map the interior source location to points on the component surface, then onto points on the aperture surface. It is seen that zeros in the Jacobian of the surface aperture coordinate-to-ray coordinate mapping mollify the singularities in the first-order analysis Green function.
A New Approach in Regression Analysis for Modeling Adsorption Isotherms
Onjia, Antonije E.
2014-01-01
Numerous regression approaches to isotherm parameters estimation appear in the literature. The real insight into the proper modeling pattern can be achieved only by testing methods on a very large number of cases. Experimentally, it cannot be done in a reasonable time, so the Monte Carlo simulation method was applied. The objective of this paper is to introduce and compare numerical approaches that involve different levels of knowledge about the noise structure of the analytical method used for initial and equilibrium concentration determination. Six levels of homoscedastic noise and five types of heteroscedastic noise precision models were considered. Performance of the methods was statistically evaluated based on median percentage error and mean absolute relative error in parameter estimates. The present study showed a clear distinction between two cases. When equilibrium experiments are performed only once, for the homoscedastic case, the winning error function is ordinary least squares, while for the case of heteroscedastic noise the use of orthogonal distance regression or Marquardt's percent standard deviation is suggested. It was found that in the case when experiments are repeated three times, the simple method of weighted least squares performed as well as the more complicated orthogonal distance regression method. PMID:24672394
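The contrast between ordinary and weighted least squares can be sketched with a Langmuir isotherm under proportional (heteroscedastic) noise; the isotherm choice, noise level, and parameter values here are illustrative, and `scipy.optimize.curve_fit` handles both cases via its `sigma` argument:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qm, K):
    """Langmuir adsorption isotherm: q = qm*K*c / (1 + K*c)."""
    return qm * K * c / (1.0 + K * c)

rng = np.random.default_rng(0)
c = np.linspace(0.1, 10.0, 30)
q_true = langmuir(c, 5.0, 1.2)
sigma = 0.02 * q_true + 1e-3              # proportional (heteroscedastic) noise model
q_obs = q_true + rng.normal(0.0, sigma)

p_ols, _ = curve_fit(langmuir, c, q_obs, p0=[1.0, 1.0])                 # unweighted
p_wls, _ = curve_fit(langmuir, c, q_obs, p0=[1.0, 1.0],
                     sigma=sigma, absolute_sigma=True)                  # weighted
```

With replicated experiments, the per-point `sigma` would instead come from the observed replicate standard deviations, which is the simple weighted-least-squares route the study recommends.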
Functional GI disorders: from animal models to drug development
Mayer, E A; Bradesi, S; Chang, L; Spiegel, B M R; Bueller, J A; Naliboff, B D
2014-01-01
Despite considerable efforts by academic researchers and by the pharmaceutical industry, the development of novel pharmacological treatments for irritable bowel syndrome (IBS) and other functional gastrointestinal (GI) disorders has been slow and disappointing. The traditional approach to identifying and evaluating novel drugs for these symptom-based syndromes has relied on a fairly standard algorithm using animal models, experimental medicine models and clinical trials. In the current article, the empirical basis for this process is reviewed, focusing on the utility of the assessment of visceral hypersensitivity and GI transit, in both animals and humans, as well as the predictive validity of preclinical and clinical models of IBS for identifying successful treatments for IBS symptoms and IBS-related quality of life impairment. A review of published evidence suggests that abdominal pain, defecation-related symptoms (urgency, straining) and psychological factors all contribute to overall symptom severity and to health-related quality of life. Correlations between readouts obtained in preclinical and clinical models and respective symptoms are small, and the ability to predict drug effectiveness for specific as well as for global IBS symptoms is limited. One possible drug development algorithm is proposed which focuses on pharmacological imaging approaches in both preclinical and clinical models, with decreased emphasis on evaluating compounds in symptom-related animal models, and more rapid screening of promising candidate compounds in man. PMID:17965064
Modeling population dynamics: A quantile approach.
Chavas, Jean-Paul
2015-04-01
The paper investigates the modeling of population dynamics, both conceptually and empirically. It presents a reduced form representation that provides a flexible characterization of population dynamics. It leads to the specification of a threshold quantile autoregression (TQAR) model, which captures nonlinear dynamics by allowing lag effects to vary across quantiles of the distribution as well as with previous population levels. The usefulness of the model is illustrated in an application to the dynamics of a lynx population. We find statistical evidence that the quantile autoregression parameters vary across quantiles (thus rejecting the AR model as well as the TAR model) as well as with past populations (thus rejecting the quantile autoregression QAR model). The results document the nature of dynamics and cycle in the lynx population over time. They show how both the period of the cycle and the speed of population adjustment vary with population level and environmental conditions. PMID:25661501
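The quantile-autoregression building block of the TQAR model can be sketched by minimizing the pinball (check) loss for a QAR(1) specification; this is a bare-bones illustration on synthetic data, not the paper's threshold model:

```python
import numpy as np
from scipy.optimize import minimize

def pinball(u, tau):
    """Check (pinball) loss, minimized at the tau-quantile of the residuals."""
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

def fit_qar1(y, tau):
    """Fit y_t = a + b*y_{t-1} at quantile tau by minimizing the pinball loss."""
    y0, y1 = y[:-1], y[1:]
    obj = lambda p: pinball(y1 - p[0] - p[1] * y0, tau)
    return minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").x

# Synthetic AR(1) data: y_t = 0.5*y_{t-1} + standard normal noise
rng = np.random.default_rng(1)
y = np.zeros(2000)
for t in range(1, len(y)):
    y[t] = 0.5 * y[t - 1] + rng.normal()
a_med, b_med = fit_qar1(y, 0.5)   # median (tau = 0.5) autoregression
```

Refitting at several tau values and testing whether the slope varies across them is the spirit of the QAR-versus-AR comparison; the TQAR model additionally lets the coefficients switch with the lagged population level.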
Longitudinal functional principal component modeling via Stochastic Approximation Monte Carlo
Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.
2010-01-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented. PMID:20689648
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
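A 1-D caricature of the approach, a MAP estimate with a smoothed TV prior minimized by gradient descent, can be sketched as follows. The paper treats a nonlinear flow problem with a nonquadratic objective; this sketch only recovers a piecewise-constant "intrusion" from noisy direct measurements, with all parameters illustrative:

```python
import numpy as np

def tv_map(d, lam=0.5, eps=1e-3, iters=3000, step=0.05):
    """MAP estimate under Gaussian noise with a smoothed-TV prior:
    minimize 0.5*||m - d||^2 + lam * sum_i sqrt((m_{i+1} - m_i)^2 + eps)."""
    m = d.copy()
    for _ in range(iters):
        diff = np.diff(m)
        w = diff / np.sqrt(diff**2 + eps)   # derivative of smoothed |diff|
        g = m - d                           # data-fidelity gradient
        g[:-1] -= lam * w                   # TV gradient, left neighbours
        g[1:] += lam * w                    # TV gradient, right neighbours
        m = m - step * g
    return m

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant field
noisy = truth + 0.3 * rng.normal(size=100)
recon = tv_map(noisy)
```

The TV term suppresses noise while preserving the sharp jump, which is why a TV prior suits the low-conductivity intrusion geometry described in the abstract.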
Consumer preference models: fuzzy theory approach
NASA Astrophysics Data System (ADS)
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
Serpentinization reaction pathways: implications for modeling approach
Janecky, D.R.
1986-01-01
Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C and 500 bars can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.
Shi, Wuxian; Chance, Mark R
2011-02-01
About one-third of all proteins are associated with a metal. Metalloproteomics is defined as the structural and functional characterization of metalloproteins on a genome-wide scale. The methodologies utilized in metalloproteomics, including both forward (bottom-up) and reverse (top-down) technologies, to provide information on the identity, quantity, and function of metalloproteins are discussed. Important techniques frequently employed in metalloproteomics include classical proteomic tools such as mass spectrometry and 2D gels, immobilized-metal affinity chromatography, bioinformatic sequence analysis and homology modeling, X-ray absorption spectroscopy and other synchrotron radiation based tools. Combinative applications of these techniques provide a powerful approach to understand the function of metalloproteins.
Katanin, A. A.
2015-06-15
We consider formulations of the functional renormalization-group (fRG) flow for correlated electronic systems with the dynamical mean-field theory as a starting point. We classify the corresponding renormalization-group schemes into those neglecting one-particle irreducible six-point vertices (with respect to the local Green’s functions) and neglecting one-particle reducible six-point vertices. The former class is represented by the recently introduced DMF²RG approach [31], but also by the scale-dependent generalization of the one-particle irreducible representation (with respect to local Green’s functions, 1PI-LGF) of the generating functional [20]. The second class is represented by the fRG flow within the dual fermion approach [16, 32]. We compare formulations of the fRG approach in each of these cases and suggest their further application to study 2D systems within the Hubbard model.
An inverse problem approach to modelling coastal effluent plumes
NASA Astrophysics Data System (ADS)
Lam, D. C. L.; Murthy, C. R.; Miners, K. C.
Formulated as an inverse problem, the diffusion parameters associated with length-scale dependent eddy diffusivities can be viewed as the unknowns in the mass conservation equation for coastal zone transport problems. The values of the diffusion parameters can be optimized according to an error function incorporating observed concentration data. Examples are given for the Fickian, shear diffusion and inertial subrange diffusion models. Based on a new set of dye-plume data collected in the coastal zone off Bronte, Lake Ontario, it is shown that the predictions of turbulence closure models can be evaluated for different flow conditions. The choice of computational schemes for this diagnostic approach is based on tests with analytic solutions and observed data. It is found that the optimized shear diffusion model produced better agreement with observations for both high and low advective flows than, e.g., the unoptimized semi-empirical model, Ky = 0.075 σy^1.2, described by Murthy and Kenney.
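The diagnostic idea, optimizing a diffusion parameter against observed concentrations, can be sketched for the simplest Fickian case with a Gaussian crosswind profile; the geometry, source strength, and noise below are synthetic, not the Bronte data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fickian_plume(y, t, K, Q=1.0):
    """Crosswind concentration of a line source after travel time t under the
    Fickian model: sigma_y^2 = 2*K*t, Gaussian profile with total mass Q."""
    s2 = 2.0 * K * t
    return Q / np.sqrt(2.0 * np.pi * s2) * np.exp(-y**2 / (2.0 * s2))

def fit_diffusivity(y, t, c_obs):
    """Recover the eddy diffusivity K by least-squares fit to observed concentrations."""
    err = lambda K: np.sum((fickian_plume(y, t, K) - c_obs) ** 2)
    return minimize_scalar(err, bounds=(1e-4, 10.0), method="bounded").x

rng = np.random.default_rng(2)
y = np.linspace(-5.0, 5.0, 41)
c_obs = fickian_plume(y, t=3.0, K=0.5) + 0.002 * rng.normal(size=y.size)
K_hat = fit_diffusivity(y, 3.0, c_obs)
```

The shear and inertial-subrange models of the paper replace the constant K with length-scale dependent diffusivities, but the same error-function optimization applies.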
A Mixed Approach for Modeling Blood Flow in Brain Microcirculation
NASA Astrophysics Data System (ADS)
Peyrounette, M.; Lorthois, S.; Davit, Y.; Quintard, M.
2014-12-01
We have previously demonstrated [1] that the vascular system of the healthy human brain cortex is a superposition of two structural components, each corresponding to a different spatial scale. At small-scale, the vascular network has a capillary structure, which is homogeneous and space-filling over a cut-off length. At larger scale, veins and arteries conform to a quasi-fractal branched structure. This structural duality is consistent with the functional duality of the vasculature, i.e. distribution and exchange. From a modeling perspective, this can be viewed as the superposition of: (a) a continuum model describing slow transport in the small-scale capillary network, characterized by a representative elementary volume and effective properties; and (b) a discrete network approach [2] describing fast transport in the arterial and venous network, which cannot be homogenized because of its fractal nature. This problematic is analogous to modeling problems encountered in geological media, e.g, in petroleum engineering, where fast conducting channels (wells or fractures) are embedded in a porous medium (reservoir rock). An efficient method to reduce the computational cost of fractures/continuum simulations is to use relatively large grid blocks for the continuum model. However, this also makes it difficult to accurately couple both structural components. In this work, we solve this issue by adapting the "well model" concept used in petroleum engineering [3] to brain specific 3-D situations. We obtain a unique linear system of equations describing the discrete network, the continuum and the well model coupling. Results are presented for realistic geometries and compared with a non-homogenized small-scale network model of an idealized periodic capillary network of known permeability. [1] Lorthois & Cassot, J. Theor. Biol. 262, 614-633, 2010. [2] Lorthois et al., Neuroimage 54 : 1031-1042, 2011. [3] Peaceman, SPE J. 18, 183-194, 1978.
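The "well model" of Peaceman [3] couples a well (here, a vessel) to a large grid block through a well index built on an equivalent block radius. A minimal sketch of the classic isotropic square-grid form, with hypothetical parameter values:

```python
import math

def peaceman_well_index(k, h, dx, rw, skin=0.0):
    """Peaceman well index for an isotropic square grid block.

    k: permeability (m^2), h: block thickness (m), dx: grid spacing (m),
    rw: well (vessel) radius (m). r_o = 0.2*dx is Peaceman's classic
    equivalent radius for this geometry.
    """
    ro = 0.2 * dx
    return 2.0 * math.pi * k * h / (math.log(ro / rw) + skin)

# Coarsening the continuum grid weakens the coupling (larger equivalent radius):
wi_fine = peaceman_well_index(k=1e-13, h=10.0, dx=10.0, rw=0.1)
wi_coarse = peaceman_well_index(k=1e-13, h=100.0 / 10.0, dx=100.0, rw=0.1)
```

The index multiplies the pressure difference between the well and the block to give the exchange flux, which is how the discrete network and continuum unknowns end up in one linear system.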
Mouse models of p53 functions.
Lozano, Guillermina
2010-04-01
Studies in mice have yielded invaluable insight into our understanding of the p53 pathway. Mouse models with activated p53, no p53, and mutant p53 have queried the role of p53 in development and tumorigenesis. In these models, p53 is activated and stabilized via redundant posttranslational modifications. On activation, p53 initiates two major responses: inhibition of proliferation (via cell-cycle arrest, quiescence, senescence, and differentiation) and induction of apoptosis. Importantly, these responses are cell-type and tumor-type-specific. The analysis of mutant p53 alleles has established a gain-of-function role for p53 mutants in metastasis. The development of additional models that can precisely time the oncogenic events in single cells will provide further insight into the evolution of tumors, the importance of the stroma, and the cooperating events that lead to disruption of the p53 pathway. Ultimately, these models should serve to study the effects of novel drugs on tumor response as well as normal homeostasis.
Modelling the Eukaryotic Chromosome: A Stepped Approach.
ERIC Educational Resources Information Center
Nicholl, Linda A. A.; Nicholl, Desmond S. T.
1987-01-01
Describes how a series of models can be constructed to illustrate the structure of eukaryotic chromosomes, emphasizing the structure of DNA. Suggests that by adapting a different scale for each series of models, a complete picture of the complex nature of the chromosome can be built up. (TW)
Teacher Consultation Model: An Operant Approach
ERIC Educational Resources Information Center
Halfacre, John; Welch, Frances
1973-01-01
This article describes a model for changing teacher behavior in dealing with problem students. The model reflects the incorporation of learning theory techniques (pinpointing behavior, reinforcement, shaping, etc.). A step-by-step account of how a psychologist deals with a teacher concerned about a boy's cursing is given. The teacher is encouraged…
Quantum Supersymmetric Models in the Causal Approach
NASA Astrophysics Data System (ADS)
Grigore, Dan-Radu
2007-04-01
We consider the massless supersymmetric vector multiplet in a purely quantum framework. First order gauge invariance determines uniquely the interaction Lagrangian as in the case of Yang-Mills models. Going to the second order of perturbation theory produces an anomaly which cannot be eliminated. We make the analysis of the model working only with the component fields.
COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES
River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...
The Hourglass Approach: A Conceptual Model for Group Facilitators.
ERIC Educational Resources Information Center
Kriner, Lon S.; Goulet, Everett F.
1983-01-01
Presents a model to clarify the facilitator's role in working with groups. The Hourglass Approach model incorporates Carkhuff's empathetic levels of communication and Schultz's theory of personality. It is designed to be a systematic and comprehensive method usable with a variety of counseling approaches in all types of groups. (JAC)
Students' Approaches to Learning a New Mathematical Model
ERIC Educational Resources Information Center
Flegg, Jennifer A.; Mallet, Daniel G.; Lupton, Mandy
2013-01-01
In this article, we report on the findings of an exploratory study into the experience of undergraduate students as they learn new mathematical models. Qualitative and quantitative data based around the students' approaches to learning new mathematical models were collected. The data revealed that students actively adopt three approaches to…
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for its good management. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s information criterion and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best-fitting methods have the most barriers to their application in terms of data and software requirements. PMID:25853472
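One of the model formulations named above, the Chapman-Richards height-age function, can be sketched with a nonlinear least-squares fit. The stem-analysis data below are hypothetical, generated from assumed parameters; the paper's actual data and parameterizations are not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, a, b, c):
    """Chapman-Richards height-age curve H(t) = a * (1 - exp(-b*t))**c."""
    return a * (1.0 - np.exp(-b * t)) ** c

# Hypothetical stem-analysis heights (m) at ages (years), noisy samples of
# an assumed true curve with a=34, b=0.022, c=1.4
age = np.linspace(10.0, 120.0, 12)
rng = np.random.default_rng(2)
height = chapman_richards(age, 34.0, 0.022, 1.4) + 0.5 * rng.standard_normal(age.size)

popt, _ = curve_fit(chapman_richards, age, height, p0=(30.0, 0.02, 1.2))
site_height_50 = chapman_richards(50.0, *popt)   # predicted height at reference age 50
```

Site index is conventionally the predicted height at a reference age (here 50 years), so the fitted curve directly yields it.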
Gyrokinetic modeling: A multi-water-bag approach
Morel, P.; Gravier, E.; Besse, N.; Klein, R.; Ghizzo, A.; Bertrand, P.; Garbet, X.; Ghendrih, P.; Grandgirard, V.; Sarazin, Y.
2007-11-15
Predicting turbulent transport in nearly collisionless fusion plasmas requires one to solve kinetic (or, more precisely, gyrokinetic) equations. In spite of considerable progress, several pending issues remain: although more accurate, the kinetic calculation of turbulent transport is much more demanding in computer resources than fluid simulations. An alternative approach is based on a water-bag representation of the distribution function that is not an approximation but rather a special class of initial conditions, allowing one to reduce the full kinetic Vlasov equation into a set of hydrodynamic equations while keeping its kinetic character. The main result for the water-bag model is a lower cost in the parallel velocity direction, since no differential operator associated with some approximate numerical scheme has to be carried out on this variable v∥. Indeed, a small bag number is sufficient to correctly describe the ion temperature gradient instability.
IONONEST—A Bayesian approach to modeling the lower ionosphere
NASA Astrophysics Data System (ADS)
Martin, Poppy L.; Scaife, Anna M. M.; McKay, Derek; McCrea, Ian
2016-08-01
Obtaining high-resolution electron density height profiles for the D region of the ionosphere as a well-sampled function of time is difficult for most methods of ionospheric measurement. Here we present a new method of using multifrequency riometry data for producing D region height profiles via inverse methods. To obtain these profiles, we use the nested sampling technique, implemented through our code, IONONEST. We demonstrate this approach using new data from the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA) instrument and consider two electron density models. We compare the recovered height profiles from the KAIRA data with those from incoherent scatter radar using data from the European Incoherent Scatter Facility (EISCAT) instrument and find that there is good agreement between the two techniques, allowing for instrumental differences.
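Nested sampling, the technique IONONEST implements, can be illustrated on a toy one-dimensional problem. This sketch is not the IONONEST code; the Gaussian likelihood (standing in for the riometry forward model) and uniform prior are assumptions for illustration:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

def loglike(theta):
    """Toy standard-normal log-likelihood."""
    return -0.5 * theta**2 - 0.5 * np.log(2.0 * np.pi)

lo, hi = -5.0, 5.0            # uniform prior bounds
n_live, n_iter = 200, 1200

live = rng.uniform(lo, hi, n_live)
live_logL = loglike(live)

log_terms = []
x_prev = 1.0
for i in range(1, n_iter + 1):
    worst = int(np.argmin(live_logL))
    x_i = np.exp(-i / n_live)                     # expected prior-volume shrinkage
    log_terms.append(live_logL[worst] + np.log(x_prev - x_i))
    x_prev = x_i
    # Replace the worst point: rejection-sample from the prior above the threshold
    threshold = live_logL[worst]
    while True:
        cand = rng.uniform(lo, hi)
        if loglike(cand) > threshold:
            live[worst], live_logL[worst] = cand, loglike(cand)
            break

# Add the remaining live-point contribution, then total the evidence
log_terms.extend(live_logL + np.log(x_prev / n_live))
Z = np.exp(logsumexp(log_terms))                  # analytic value here is ~0.1
```

The evidence Z lets competing electron density models be compared directly, which is why the abstract can speak of "considering two electron density models" within one framework.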
Monte Carlo path sampling approach to modeling aeolian sediment transport
NASA Astrophysics Data System (ADS)
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by that same flux, and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach
NASA Astrophysics Data System (ADS)
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-08-01
This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
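The Minimal Spanning Forest idea, keeping only the strongest internal correlations, can be sketched as a maximum spanning tree over a correlation matrix, or equivalently a minimum spanning tree over the distance 1 − r. The two-module toy data below stand in for regional fMRI time series and are an assumption for illustration:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)

# Toy "regional time series": two modules of correlated signals plus noise
n_t = 500
drivers = rng.standard_normal((2, n_t))
module = np.repeat([0, 1], 4)                        # 8 regions, 4 per module
ts = drivers[module] + 0.5 * rng.standard_normal((8, n_t))

corr = np.corrcoef(ts)

# Maximum spanning tree over correlations == minimum spanning tree over 1 - r
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)
mst = minimum_spanning_tree(dist).toarray()

# Edges of the tree: pairs (i, j) with a stored positive weight
edges = [(i, j) for i in range(8) for j in range(8) if mst[i, j] > 0]
```

Because within-module correlations dominate, the tree keeps the module-internal links and uses only a single weak bridge between modules, which is exactly the property that makes it useful for ranking modules by internal edge strength.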
Modeling the three-point correlation function
Marin, Felipe; Wechsler, Risa; Frieman, Joshua A.; Nichol, Robert
2007-04-01
We present new theoretical predictions for the galaxy three-point correlation function (3PCF) using high-resolution dissipationless cosmological simulations of a flat ΛCDM Universe which resolve galaxy-size halos and subhalos. We create realistic mock galaxy catalogs by assigning luminosities and colors to dark matter halos and subhalos, and we measure the reduced 3PCF as a function of luminosity and color in both real and redshift space. As galaxy luminosity and color are varied, we find small differences in the amplitude and shape dependence of the reduced 3PCF, at a level qualitatively consistent with recent measurements from the SDSS and 2dFGRS. We confirm that discrepancies between previous 3PCF measurements can be explained in part by differences in binning choices. We explore the degree to which a simple local bias model can fit the simulated 3PCF. The agreement between the model predictions and galaxy 3PCF measurements lends further credence to the straightforward association of galaxies with CDM halos and subhalos.
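The reduced 3PCF and the local bias model mentioned above take a conventional form; this is a sketch of the standard definitions, not this paper's specific fits:

```latex
% Reduced three-point correlation function built from the connected 3PCF
% \zeta and the two-point function \xi:
Q(r_{12}, r_{23}, r_{31}) \;=\;
\frac{\zeta(r_{12}, r_{23}, r_{31})}
     {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}

% Local quadratic bias \delta_g = b_1\delta + \tfrac{b_2}{2}\delta^2
% implies, at leading order, a galaxy reduced 3PCF
Q_g \;=\; \frac{1}{b_1}\left( Q_m + \frac{b_2}{b_1} \right)
```

Since Q is insensitive to the overall clustering amplitude, it constrains the bias parameters b1 and b2 separately from the power spectrum normalization.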
Genetically modified mouse models addressing gonadotropin function.
Ratner, Laura D; Rulli, Susana B; Huhtaniemi, Ilpo T
2014-03-01
The development of genetically modified animals has been useful to understand the mechanisms involved in the regulation of gonadotropin function. It is well known that alterations in the secretion of a single hormone are capable of producing profound reproductive abnormalities. Human chorionic gonadotropin (hCG) is a glycoprotein hormone normally secreted by the human placenta, and structurally and functionally it is related to pituitary LH. LH and hCG bind to the same LH/hCG receptor, and hCG is often used as an analog of LH to boost gonadotropin action. There are many physiological and pathological conditions where LH/hCG levels and actions are elevated. In order to understand how elevated LH/hCG levels may impact the hypothalamic-pituitary-gonadal axis, we have developed a transgenic mouse model with chronic hCG hypersecretion. Female mice develop many gonadal and extragonadal phenotypes including obesity, infertility, hyperprolactinemia, and pituitary and mammary gland tumors. This article summarizes recent findings on the mechanisms involved in pituitary gland tumorigenesis and hyperprolactinemia in female mice hypersecreting hCG, in particular the relationship of progesterone with the hyperprolactinemic condition of the model. In addition, we describe the role of hyperprolactinemia as the main cause of infertility and the phenotypic abnormalities in these mice, and the use of the dopamine agonists bromocriptine and cabergoline to normalize these conditions.
Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches
Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward
2015-01-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
A simple approach to modeling ductile failure.
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
A new approach to assessing function in elderly people.
Williams, Mark E.; Owens, Justine E.; Parker, B. Eugene; Granata, Kevin P.
2003-01-01
Current functional assessment methods and measures of elderly people are limited in their ability to detect small decrements in function or in discriminating between different patterns of functional loss. Nor do they directly assess function in the patient's usual environment. Recent technological advances have led to the development of small, wearable microelectronic devices that detect motion, velocity and acceleration. These devices can be used to develop new tools for more precise monitoring, assessment, and prediction of function by characterizing the 'electronic signatures' of successful or unsuccessful task-specific performance, and to allow for continuous assessment in a home environment. This presentation will summarize current efforts to translate new technologies into a clinical and research tool for improved assessment, monitoring, and prediction of function among older individuals. PMID:12813921
The imprint of plants on ecosystem functioning: A data-driven approach
NASA Astrophysics Data System (ADS)
Musavi, Talie; Mahecha, Miguel D.; Migliavacca, Mirco; Reichstein, Markus; van de Weg, Martine Janet; van Bodegom, Peter M.; Bahn, Michael; Wirth, Christian; Reich, Peter B.; Schrodt, Franziska; Kattge, Jens
2015-12-01
Terrestrial ecosystems strongly determine the exchange of carbon, water and energy between the biosphere and atmosphere. These exchanges are influenced by environmental conditions (e.g., local meteorology, soils), but generally mediated by organisms. Often, mathematical descriptions of these processes are implemented in terrestrial biosphere models. Model implementations of this kind should be evaluated by empirical analyses of relationships between observed patterns of ecosystem functioning, vegetation structure, plant traits, and environmental conditions. However, the question of how to describe the imprint of plants on ecosystem functioning based on observations has not yet been systematically investigated. One approach might be to identify and quantify functional attributes or responsiveness of ecosystems (often very short-term in nature) that contribute to the long-term (i.e., annual but also seasonal or daily) metrics commonly in use. Here we define these patterns as "ecosystem functional properties" (EFPs), such as the ecosystem capacity for carbon assimilation or the maximum light use efficiency of an ecosystem. While EFPs should be directly derivable from flux measurements at the ecosystem level, we posit that these inherently include the influence of specific plant traits and their local heterogeneity. We present different options for upscaling in situ measured plant traits to the ecosystem level (ecosystem vegetation properties - EVPs) and provide examples of empirical analyses of the plants' imprint on ecosystem functioning by combining in situ measured plant traits and ecosystem flux measurements. Finally, we discuss how recent advances in remote sensing contribute to this framework.
Hubbard Model Approach to X-ray Spectroscopy
NASA Astrophysics Data System (ADS)
Ahmed, Towfiq
We have implemented a Hubbard model based first-principles approach for real-space calculations of x-ray spectroscopy, which allows one to study the excited state electronic structure of correlated systems. Theoretical understanding of many electronic features in d and f electron systems remains beyond the scope of conventional density functional theory (DFT). In this work our main effort is to go beyond the local density approximation (LDA) by incorporating the Hubbard model within the real-space multiple-scattering Green's function (RSGF) formalism. Historically, the first theoretical description of correlated systems was published by Sir Nevill Mott and others in 1937. They realized that the insulating gap and antiferromagnetism in the transition metal oxides are mainly caused by the strong on-site Coulomb interaction of the localized unfilled 3d orbitals. Even with the recent progress of first principles methods (e.g., DFT) and model Hamiltonian approaches (e.g., Hubbard-Anderson model), the electronic description of many of these systems remains a non-trivial combination of both. X-ray absorption near edge spectra (XANES) and x-ray emission spectra (XES) are very powerful spectroscopic probes for many electronic features near the Fermi energy (EF), which are caused by the on-site Coulomb interaction of localized electrons. In this work we focus on three different cases of many-body effects due to the interaction of localized d electrons. Here, for the first time, we have applied the Hubbard model in the real-space multiple scattering (RSGF) formalism for the calculation of x-ray spectra of Mott insulators (e.g., NiO and MnO). Secondly, we have implemented in our RSGF approach a doping-dependent self-energy that was constructed from a single-band Hubbard model for the overdoped high-Tc cuprate La2-xSrxCuO4. Finally our RSGF calculation of XANES is calculated with the spectral function from Lee and Hedin's charge transfer satellite model. For all these cases our
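The single-band Hubbard Hamiltonian underlying the approach described above has the standard form (a sketch of the textbook model, not the paper's full multi-orbital treatment):

```latex
% Hopping amplitude t between nearest-neighbor sites \langle i,j \rangle
% and on-site Coulomb repulsion U between opposite spins
H \;=\; -t \sum_{\langle i,j \rangle,\sigma}
        \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
      \;+\; U \sum_{i} n_{i\uparrow}\, n_{i\downarrow}
```

At half filling and large U/t this Hamiltonian opens the Mott insulating gap and yields the antiferromagnetism referred to in the abstract for transition metal oxides.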
A functional approach to geometry optimization of complex systems
NASA Astrophysics Data System (ADS)
Maslen, P. E.
A quadratically convergent procedure is presented for the geometry optimization of complex systems, such as biomolecules and molecular complexes. The costly evaluation of the exact Hessian is avoided by expanding the density functional to second order in both nuclear and electronic variables, and then searching for the minimum of the quadratic functional. The dependence of the functional on the choice of nuclear coordinate system is described, and illustrative geometry optimizations using Cartesian and internal coordinates are presented for Taxol™.
Damped sinusoidal function to model acute irradiation in radiotherapy patients.
Tukiendorf, Andrzej; Miszczyk, Leszek; Bojarski, Jacek
2013-09-01
In the paper, we suggest a damped sinusoidal function be used to model a regenerative response of mucosa in time after the radiotherapy treatment. The medical history of 389 RT patients irradiated within the years 1994-2000 at the Radiotherapy Department, Cancer Center, Maria Skłodowska-Curie Memorial Institute of Oncology, Gliwice, Poland, was taken into account. In the analyzed group of patients, the number of observations of a single patient ranged from 2 to 25 (mean = 8.3, median = 8) with severity determined by use of Dische's scores from 0 to 24 (mean = 7.4, median = 7). Statistical modeling of radiation-induced mucositis was performed for five groups of patients irradiated within the following radiotherapy schedules: CAIR, CB, Manchester, CHA-CHA, and Conventional. All of the regression parameters of the assumed model, i.e. amplitude, damping coefficient, angular frequency, phase of component, and offset, estimated in the analysis were statistically significant (p-value < 0.05) for the radiotherapy schedules. The model was validated using a non-oscillatory function. Following goodness-of-fit statistics, the damped sinusoidal function fits the data better than the non-oscillatory damped function. Model curves for harmonic characteristics with confidence intervals were plotted separately for each of the RT schedules and together in a combined design. The suggested model might be helpful in the numeric evaluation of the RT toxicity in the groups of patients under analysis as it allows for practical comparisons and treatment optimization. A statistical approach is also briefly described in the paper.
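Fitting the five-parameter damped sinusoid named above (amplitude, damping coefficient, angular frequency, phase, offset) can be sketched with nonlinear least squares. The Dische-score trajectory below is hypothetical; the schedules' actual estimates are not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, lam, omega, phi, c):
    """Damped sinusoid with the five regression parameters from the abstract:
    amplitude A, damping coefficient lam, angular frequency omega,
    phase phi, and offset c."""
    return A * np.exp(-lam * t) * np.sin(omega * t + phi) + c

# Hypothetical Dische-score trajectory for one schedule (t in weeks),
# sampled from assumed true parameters (8.0, 0.25, 1.1, 0.3, 6.0) plus noise
t = np.linspace(0.0, 12.0, 25)
rng = np.random.default_rng(3)
y = damped_sine(t, 8.0, 0.25, 1.1, 0.3, 6.0) + 0.4 * rng.standard_normal(t.size)

popt, _ = curve_fit(damped_sine, t, y, p0=(6.0, 0.1, 1.0, 0.0, 5.0))
A, lam, omega, phi, c = popt
```

A nested comparison against the non-oscillatory damped function (omega fixed, sine term dropped) is then a matter of comparing goodness-of-fit statistics, as the paper does.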
Multiscale modeling approach for calculating grain-boundary energies from first principles
Shenderova, O.A.; Brenner, D.W.; Nazarov, A.A.; Romanov, A.E.; Yang, L.H.
1998-02-01
A multiscale modeling approach is proposed for calculating energies of tilt-grain boundaries in covalent materials from first principles over an entire misorientation range for given tilt axes. The method uses energies from density-functional calculations for a few key structures as input into a disclination structural-units model. This approach is demonstrated by calculating energies of ⟨001⟩-symmetrical tilt-grain boundaries in diamond. © 1998 The American Physical Society
An Approach to Communication Model Building.
ERIC Educational Resources Information Center
Casstevens, E. Reber
1979-01-01
Suggests the expansion and refinement of the basic sender-channel-receiver communication model. Offers several designs, each highlighting a particular aspect of the communication process. Discusses the effects of environment and feedback on the message. (JMF)
An improved approach for tank purge modeling
NASA Astrophysics Data System (ADS)
Roth, Jacob R.; Chintalapati, Sunil; Gutierrez, Hector M.; Kirk, Daniel R.
2013-05-01
Many launch support processes use helium gas to purge rocket propellant tanks and fill lines to rid them of hazardous contaminants. As an example, the purge of the Space Shuttle's External Tank used approximately 1,100 kg of helium. With the rising cost of helium, initiatives are underway to examine methods to reduce helium consumption. Current helium purge processes have not been optimized using physics-based models, but rather use historical 'rules of thumb'. To develop a more accurate and useful model of the tank purge process, computational fluid dynamics simulations of several tank configurations were completed and used as the basis for the development of an algebraic model of the purge process. The computationally efficient algebraic model of the purge process compares well with a detailed transient, three-dimensional computational fluid dynamics (CFD) simulation as well as with experimental data from two external tank purges.
Development on electromagnetic impedance function modeling and its estimation
Sutarno, D.
2015-09-30
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact make them integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has continued to grow. However, important and difficult problems remain concerning our ability to collect, process and interpret MT and CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as improvements in the estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. To that end, 3-D finite-element numerical modeling of the impedances is developed based on the edge element method, whereas in the CSAMT case the efforts focus on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research has focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, is used to deal with outliers (abnormal data) that are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, while the full-solution modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including near-, transition
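The robust M-estimation idea can be sketched for a toy scalar transfer function e = z*h: iteratively reweighted least squares with Huber weights progressively down-weights outliers. The tuning constant and synthetic data are illustrative assumptions, not the procedure of the talk.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber M-estimator weights: 1 inside +/-k*scale, damped outside."""
    scale = np.median(np.abs(residuals)) / 0.6745 + 1e-12
    r = np.abs(residuals) / scale
    return np.where(r <= k, 1.0, k / r)

def robust_transfer_function(h, e, n_iter=20):
    """Estimate a scalar transfer function z in e = z*h by IRLS.

    A toy stand-in for robust M-estimation of MT/CSAMT impedances:
    outlier-contaminated samples receive small weights.
    """
    z = np.dot(h, e) / np.dot(h, h)        # ordinary least squares start
    for _ in range(n_iter):
        w = huber_weights(e - z * h)
        z = np.dot(w * h, e) / np.dot(w * h, h)
    return z

rng = np.random.default_rng(0)
h = rng.normal(size=200)
e = 2.5 * h + 0.05 * rng.normal(size=200)  # true z = 2.5 plus noise
e[:10] += 20.0                             # gross outliers
z_hat = robust_transfer_function(h, e)
```

Ordinary least squares is pulled off by the contaminated samples, while the reweighted estimate stays near the true value, which is the point of using M-estimators on noisy field data.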
Integrating Person- and Function-Centered Approaches in Career Development Theory and Research.
ERIC Educational Resources Information Center
Vondracek, Fred W.; Porfeli, Erik
2002-01-01
Compares life-span approaches (function and variable centered) and life-course approaches (person centered and holistic) in the study of career development. Advocates their integration in developmental theory and research. (Contains 69 references.) (SK)
An Odds Ratio Approach for Detecting DDF under the Nested Logit Modeling Framework
ERIC Educational Resources Information Center
Terzi, Ragip; Suh, Youngsuk
2015-01-01
An odds ratio approach (ORA) under the framework of a nested logit model was proposed for evaluating differential distractor functioning (DDF) in multiple-choice items and was compared with an existing ORA developed under the nominal response model. The performances of the two ORAs for detecting DDF were investigated through an extensive…
Runoff-rainfall (sic!) modelling: Comparing two different approaches
NASA Astrophysics Data System (ADS)
Herrnegger, Mathew; Schulz, Karsten
2015-04-01
rainfall estimates from the two models. Here, time series from a station observation in the proximity of the catchment and the independent INCA rainfall analysis of the Austrian Central Institute for Meteorology and Geodynamics (ZAMG, Haiden et al., 2011) are used. References: Adamovic, M., Braud, I., Branger, F., and Kirchner, J. W. (2014). Does the simple dynamical systems approach provide useful information about catchment hydrological functioning in a Mediterranean context? Application to the Ardèche catchment (France), Hydrol. Earth Syst. Sci. Discuss., 11, 10725-10786. Haiden, T., Kann, A., Wittmann, C., Pistotnik, G., Bica, B., and Gruber, C. (2011). The Integrated Nowcasting through Comprehensive Analysis (INCA) system and its validation over the Eastern Alpine region. Wea. Forecasting, 26, 166-183, doi: 10.1175/2010WAF2222451.1. Herrnegger, M., Nachtnebel, H.P., and Schulz, K. (2014). From runoff to rainfall: inverse rainfall-runoff modelling in a high temporal resolution, Hydrol. Earth Syst. Sci. Discuss., 11, 13259-13309. Kirchner, J. W. (2009). Catchments as simple dynamical systems: catchment characterization, rainfall-runoff modeling, and doing hydrology backward. Water Resour. Res., 45, W02429. Krier, R., Matgen, P., Goergen, K., Pfister, L., Hoffmann, L., Kirchner, J. W., Uhlenbrook, S., and Savenije, H.H.G. (2012). Inferring catchment precipitation by doing hydrology backward: A test in 24 small and mesoscale catchments in Luxembourg, Water Resour. Res., 48, W10525.
The Motivational Function of Private Speech: An Experimental Approach.
ERIC Educational Resources Information Center
de Dios, M. J.; Montero, I.
Recently, some works have been published exploring the role of private speech as a tool for motivation, reaching beyond the classical research on its regulatory function for cognitive processes such as attention or executive function. In fact, the authors' own previous research has shown that a moderate account of spontaneous private speech of…
Approaching Functions: Cabri Tools as Instruments of Semiotic Mediation
ERIC Educational Resources Information Center
Falcade, Rossana; Laborde, Colette; Mariotti, Maria Alessandra
2007-01-01
Assuming that dynamic features of Dynamic Geometry Software may provide a basic representation of both variation and functional dependency, and taking the Vygotskian perspective of semiotic mediation, a teaching experiment was designed with the aim of introducing students to the idea of function. This paper focuses on the use of the Trace tool and…
A New Approach to Radial Basis Function Approximation and Its Application to QSAR
2015-01-01
We describe a novel approach to RBF approximation, which combines two new elements: (1) linear radial basis functions and (2) weighting the model by each descriptor’s contribution. Linear radial basis functions allow one to achieve more accurate predictions for diverse data sets. Taking into account the contribution of each descriptor produces more accurate similarity values used for model development. The method was validated on 14 public data sets comprising nine physicochemical properties and five toxicity endpoints. We also compared the new method with five different QSAR methods implemented in the EPA T.E.S.T. program. Our approach, implemented in the program GUSAR, showed a reasonable accuracy of prediction and high coverage for all external test sets, providing more accurate prediction results than the comparison methods and even the consensus of these methods. Using our new method, we have created models for physicochemical and toxicity endpoints, which we have made freely available in the form of an online service at http://cactus.nci.nih.gov/chemical/apps/cap. PMID:24451033
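A minimal sketch of element (1), interpolation with linear radial basis functions; the descriptor-contribution weighting of element (2) and the GUSAR implementation are not reproduced here, and the sample data are hypothetical.

```python
import numpy as np

def linear_rbf_fit(X, y):
    """Solve for weights of f(x) = sum_i w_i * ||x - x_i|| (linear RBF).

    The Euclidean distance matrix of distinct centers is nonsingular,
    so the interpolation system has a unique solution.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(D, y)

def linear_rbf_predict(X_train, w, X_new):
    """Evaluate the linear-RBF interpolant at new points."""
    D = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    return D @ w

# Hypothetical 1-D data: samples of f(x) = x^2 at four points.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])
w = linear_rbf_fit(X, y)
```

In one dimension the linear-RBF interpolant is piecewise linear through the training points, which illustrates why this basis behaves robustly on diverse data sets.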
A Prediction Model of the Capillary Pressure J-Function
Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.
2016-01-01
The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived from a capillary bundle model. However, the dependence of the J-function on the water saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model; the predicted J-function is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
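The power-law form of such a prediction model can be sketched as a simple fit in log-log space; the coefficients and saturation range below are hypothetical, not values from the paper.

```python
import numpy as np

def fit_j_function(sw, j):
    """Fit J(Sw) = a * Sw**b by least squares in log-log space.

    A power law is linear in log coordinates:
    log J = log a + b * log Sw.
    """
    b, log_a = np.polyfit(np.log(sw), np.log(j), 1)
    return np.exp(log_a), b

# Synthetic data that follow a power law, as the model form assumes.
sw = np.linspace(0.2, 0.9, 8)
j_obs = 0.5 * sw ** -1.8      # capillary pressure rises as Sw falls
a, b = fit_j_function(sw, j_obs)
```

On data generated exactly from a power law the fit recovers the coefficients; on laboratory data the residuals of this fit indicate how well the power-law form actually holds.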
Recent approaches in physical modification of protein functionality.
Mirmoghtadaie, Leila; Shojaee Aliabadi, Saeedeh; Hosseini, Seyede Marzieh
2016-05-15
Today, there is a growing demand for novel technologies, such as high hydrostatic pressure, irradiation, ultrasound, filtration, supercritical carbon dioxide, plasma technology, and electrical methods, which are not based on chemicals or heat treatment, for modifying ingredient functionality and extending product shelf life. Proteins are essential components in many food processes and provide various functions in food quality and stability. They can create interfacial films that stabilize emulsions and foams, as well as interact to make networks that play key roles in gel and edible film production. These properties of proteins are referred to as 'protein functionality' because they can be modified by different processing methods. The common protein modification methods (chemical, enzymatic and physical) have strong effects on the structure and functionality of food proteins. Furthermore, novel technologies can modify protein structure and functional properties; these are reviewed in this study. PMID:26776016
A chain reaction approach to modelling gene pathways
Cheng, Gary C.; Chen, Dung-Tsa; Chen, James J.; Soong, Seng-jaw; Lamartiniere, Coral; Barnes, Stephen
2012-01-01
the nutrient-containing diets regulate gene expression in the estrogen synthesis pathway during puberty; (II) global tests to assess an overall association of this particular pathway with time factor by utilizing generalized linear models to analyze microarray data; and (III) a chain reaction model to simulate the pathway. This is a novel application because we are able to translate the gene pathway into the chemical reactions in which each reaction channel describes gene-gene relationship in the pathway. In the chain reaction model, the implicit scheme is employed to efficiently solve the differential equations. Data analysis results show the proposed model is capable of predicting gene expression changes and demonstrating the effect of nutrient-containing diets on gene expression changes in the pathway. One of the objectives of this study is to explore and develop a numerical approach for simulating the gene expression change so that it can be applied and calibrated when the data of more time slices are available, and thus can be used to interpolate the expression change at a desired time point without conducting expensive experiments for a large amount of time points. Hence, we are not claiming this is either essential or the most efficient way for simulating this problem, rather a mathematical/numerical approach that can model the expression change of a large set of genes of a complex pathway. In addition, we understand the limitation of this experiment and realize that it is still far from being a complete model of predicting nutrient-gene interactions. The reason is that in the present model, the reaction rates were estimated based on available data at two time points; hence, the gene expression change is dependent upon the reaction rates and a linear function of the gene expressions. More data sets containing gene expression at various time slices are needed in order to improve the present model so that a non-linear variation of gene expression changes at
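The implicit scheme mentioned above can be sketched for a linear toy pathway; the 2x2 interaction matrix and initial expressions are hypothetical, standing in for the gene-gene reaction channels of the actual model.

```python
import numpy as np

def implicit_euler(A, x0, dt, n_steps):
    """Integrate dx/dt = A @ x with the implicit (backward) Euler scheme.

    Each step solves (I - dt*A) x_{k+1} = x_k, which is stable even for
    stiff reaction systems where explicit schemes require tiny steps.
    """
    I = np.eye(len(x0))
    x = np.array(x0, dtype=float)
    out = [x]
    for _ in range(n_steps):
        x = np.linalg.solve(I - dt * A, x)
        out.append(x)
    return np.array(out)

# Toy two-gene chain: gene 1 decays, gene 2 is driven by gene 1.
A = np.array([[-1.0, 0.0],
              [0.5, -0.2]])
traj = implicit_euler(A, [1.0, 0.0], dt=0.1, n_steps=100)
```

With reaction rates calibrated from expression data at a few time points, a trajectory like this can interpolate expression changes at intermediate times, which is the role the chain reaction model plays in the study.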
NASA Astrophysics Data System (ADS)
Dries, M.; Trager, S. C.; Koopmans, L. V. E.
2016-08-01
Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov Chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age, and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities, and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias on the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well-suited for this.
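As a sketch of the sampling machinery (not the authors' SPS code), a random-walk Metropolis step for a one-parameter "posterior" looks like the following; the target density, starting point, and tuning are illustrative assumptions.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler, the basic building block of
    MCMC parameter reconstruction (toy one-dimensional version).
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        # Accept with probability min(1, exp(lp_new - lp)).
        if math.log(rng.random() + 1e-300) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy 'posterior': standard normal log-density (up to a constant).
samples = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
tail = samples[5000:]                  # discard burn-in
mean = sum(tail) / len(tail)
```

Real applications like the one above sample many parameters jointly (IMF prior parameters, age, metallicity), but the accept/reject logic is the same.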
A systematic approach to a self-generating fuzzy rule-table for function approximation.
Pomares, H; Rojas, I; Ortega, J; Gonzalez, J; Prieto, A
2000-01-01
In this paper, a systematic design is proposed to determine the fuzzy system structure and learn its parameters from a set of given training examples. In particular, two fundamental problems concerning fuzzy system modeling are addressed: 1) fuzzy rule parameter optimization and 2) the identification of system structure (i.e., the number of membership functions and fuzzy rules). A four-step approach to building a fuzzy system automatically is presented: Step 1 directly obtains the optimum fuzzy rules for a given membership function configuration. Step 2 optimizes the allocation of the membership functions and the conclusions of the rules in order to achieve a better approximation. Step 3 determines a new and more suitable topology using the information derived from the approximation error distribution; it decides which variables should increase their number of membership functions. Finally, Step 4 determines which structure should be selected to approximate the function from the possible configurations provided by the algorithm in the three previous steps. The results of applying this method to the problem of function approximation are presented and compared with other methodologies proposed in the literature. PMID:18252375
A consortium approach to glass furnace modeling.
Chang, S.-L.; Golchert, B.; Petrick, M.
1999-04-20
Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High-level and more detailed functional descriptions are derived for each failed component based
New approaches for modeling type Ia supernovae
Zingale, Michael; Almgren, Ann S.; Bell, John B.; Day, Marcus S.; Rendleman, Charles A.; Woosley, Stan
2007-06-25
Type Ia supernovae (SNe Ia) are the largest thermonuclear explosions in the Universe. Their light output can be seen across great distances and has led to the discovery that the expansion rate of the Universe is accelerating. Despite the significance of SNe Ia, there are still a large number of uncertainties in current theoretical models. Computational modeling offers the promise to help answer the outstanding questions. However, even with today's supercomputers, such calculations are extremely challenging because of the wide range of length and time scales. In this paper, we discuss several new algorithms for simulations of SNe Ia and demonstrate some of their successes.
Aircraft engine mathematical model - linear system approach
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ
2016-06-01
This paper examines a simplified mathematical model of an aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine were represented by a linear, time-variant model, near a nominal operating point within a finite time interval. The linearized equations were expressed in matrix form, suitable for incorporation into the MAPLE program solver. The behavior of the engine was described in terms of the variation of rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitudes and Mach numbers.
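A minimal sketch of this kind of linearized dynamics, assuming a first-order lag between throttle deflection and rotational speed; the gain and time constant are invented for illustration and stand in for the paper's matrix-form model.

```python
def simulate_speed(n0, throttle, gain=1000.0, tau=2.0, dt=0.01, t_end=10.0):
    """First-order linear engine model: tau * dN/dt = -N + gain * u.

    After a throttle step u, the rotational speed N approaches
    gain * u exponentially with time constant tau (forward Euler).
    """
    n, history = n0, []
    for _ in range(int(t_end / dt)):
        n += dt * (-n + gain * throttle) / tau
        history.append(n)
    return history

# Hypothetical throttle step from idle to 80% deflection.
hist = simulate_speed(n0=0.0, throttle=0.8)
```

In the full model the scalar state is replaced by a state vector and the scalars gain and tau by system matrices, but the step response has the same exponential character near the operating point.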
A Neuroeconomics Approach to Inferring Utility Functions in Sensorimotor Control
2004-01-01
Making choices is a fundamental aspect of human life. For over a century experimental economists have characterized the decisions people make based on the concept of a utility function. This function increases with increasing desirability of the outcome, and people are assumed to make decisions so as to maximize utility. When utility depends on several variables, indifference curves arise that represent outcomes with identical utility that are therefore equally desirable. Whereas in economics utility is studied in terms of goods and services, the sensorimotor system may also have utility functions defining the desirability of various outcomes. Here, we investigate the indifference curves when subjects experience forces of varying magnitude and duration. Using a two-alternative forced-choice paradigm, in which subjects chose between different magnitude–duration profiles, we inferred the indifference curves and the utility function. Such a utility function defines, for example, whether subjects prefer to lift a 4-kg weight for 30 s or a 1-kg weight for a minute. The measured utility function depends nonlinearly on the force magnitude and duration and was remarkably conserved across subjects. This suggests that the utility function, a central concept in economics, may be applicable to the study of sensorimotor control. PMID:15383835
Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro
2016-08-15
Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. PMID:27236085
Linking geophysics and soil function modelling - two examples
NASA Astrophysics Data System (ADS)
Krüger, J.; Franko, U.; Werban, U.; Dietrich, P.; Behrens, T.; Schmidt, K.; Fank, J.; Kroulik, M.
2011-12-01
potential hot spots where local adaptations of agricultural management would be required to improve soil functions. Example B realizes soil function modelling with a model parameterization adapted to ground-penetrating radar (GPR) data. This work shows an approach to handling the heterogeneity of soil properties by using geophysical data for modelling. The field site in Austria is characterised by highly heterogeneous soil with fluvioglacial gravel sediments. The variation in thickness of the topsoil above a sandy, gravelly subsoil strongly influences the soil water balance. GPR detected the exact depth of the soil horizon between topsoil and subsoil. Extending the input data improves the performance of the CANDY PLUS model for plant biomass production. Both examples demonstrate how geophysics provides additional data for agroecosystem modelling, identifying and contributing alternative options for agricultural management decisions.
Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...
Modelling approaches to dose estimation in children
Johnson, Trevor N
2005-01-01
Introduction Most of the drugs on the market are originally developed for adults and dosage selection is based on an optimal balance between clinical efficacy and safety. The aphorism ‘children are not small adults’ not only holds true for the selection of suitable drugs and dosages for use in children but also their susceptibility to adverse drug reactions [1]. Since children may not be subject to dose escalation studies similar to those carried out in the adult population, some initial estimation of dose in paediatrics should be obtained via extrapolation approaches. However, following such an exercise, well-conducted PK-PD or PK studies will still be needed to determine the most appropriate doses for neonates, infants, children and adolescents. PMID:15948929
Integration models: multicultural and liberal approaches confronted
NASA Astrophysics Data System (ADS)
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting group rights driven by identity politics seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition and to the reinforced concept of integrating immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
Models of protocellular structures, functions and evolution
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; New, Michael H.; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
The central step in the origin of life was the emergence of organized structures from organic molecules available on the early earth. These predecessors to modern cells, called 'proto-cells,' were simple, membrane bounded structures able to maintain themselves, grow, divide, and evolve. Since there is no fossil record of these earliest of life forms, it is a scientific challenge to discover plausible mechanisms for how these entities formed and functioned. To meet this challenge, it is essential to create laboratory models of protocells that capture the main attributes associated with living systems, while remaining consistent with known, or inferred, protobiological conditions. This report provides an overview of a project which has focused on protocellular metabolism and the coupling of metabolism to energy transduction. We have assumed that the emergence of systems endowed with genomes and capable of Darwinian evolution was preceded by a pre-genomic phase, in which protocells functioned and evolved using mostly proteins, without self-replicating nucleic acids such as RNA.
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by means of a functional technique, and a general formula is derived for the path-probability distribution functional. The probability of finding paths inside a tube/band, whose centre is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock prices in finance.
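The tube probability described in this abstract can be estimated numerically. The following is a minimal Monte Carlo sketch, not the authors' analytical formula: it samples discretized Brownian paths and counts the fraction that never leave a band of half-width `eps` around the straight reference path x(t) = 0 (all parameter values are illustrative assumptions).

```python
import numpy as np

# Monte Carlo estimate of the probability that a Brownian path stays inside
# the tube |x(t)| < eps over [0, 1].  Parameters below are illustrative only.
rng = np.random.default_rng(0)

n_paths, n_steps, dt, eps = 20000, 100, 0.01, 1.0

# Sample Brownian increments and accumulate them into paths (x_0 = 0).
steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)

# Fraction of paths that never leave the tube.
inside = np.all(np.abs(paths) < eps, axis=1)
p_tube = inside.mean()
print(f"P(path stays in tube) ~ {p_tube:.3f}")
```

Because boundary crossings between time steps are missed, this discrete estimate slightly overstates the continuous-time tube probability; refining `dt` reduces the bias.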
Novel approaches to assessing renal function in cirrhotic liver disease.
Portal, Andrew J; Austin, Mark; Heneghan, Michael A
2007-09-01
Renal dysfunction is common in patients with end-stage liver disease. Etiological factors include conditions as diverse as acute tubular necrosis, immunoglobulin A nephropathy and hepatorenal syndrome. Current standard tests of renal function, such as measurement of serum urea and creatinine levels, are inaccurate as the synthesis of these markers is affected by the native liver pathology. This article reviews novel markers of renal function and their potential use in patients with liver disease.
Time-dependent Green's functions approach to nuclear reactions
Rios, Arnau; Danielewicz, Pawel
2008-04-04
Nonequilibrium Green's functions represent an underutilized means of studying the evolution of quantum many-body systems. In view of rising computer power, an effort is underway to apply Green's functions to the dynamics of central nuclear reactions. As a first step, the mean-field evolution of the density matrix for colliding slabs is studied in one dimension. A strategy to extend the dynamics to correlations is described.
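As a hedged toy illustration of mean-field density-matrix evolution (not the authors' code, and a two-level system rather than colliding slabs), one can propagate the von Neumann equation i dρ/dt = [h, ρ] with the exact one-step unitary built from the eigendecomposition of a fixed Hamiltonian:

```python
import numpy as np

# Toy mean-field evolution of a density matrix (hbar = 1).  The Hamiltonian
# and time step are illustrative assumptions, not values from the paper.
h = np.array([[0.0, 0.5],
              [0.5, 1.0]])          # fixed two-level "mean-field" Hamiltonian
rho = np.array([[1.0, 0.0],
                [0.0, 0.0]], dtype=complex)   # start in the lower basis state

dt, n_steps = 0.05, 200
w, v = np.linalg.eigh(h)            # diagonalize once: h = v diag(w) v^dagger
u = v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T   # one-step propagator

for _ in range(n_steps):
    rho = u @ rho @ u.conj().T      # rho(t+dt) = U rho(t) U^dagger

print("trace =", np.trace(rho).real)            # conserved by unitarity
print("population of state 0:", rho[0, 0].real)
```

Unitary conjugation preserves the trace and hermiticity of ρ exactly (up to floating-point error), which makes those quantities useful sanity checks in any such propagation scheme.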
A Gaussian graphical model approach to climate networks
NASA Astrophysics Data System (ADS)
Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas
2014-06-01
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data, the nodes are usually defined on a spatial grid, and the edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used methods, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid-point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to the underlying physical processes may involve simplifications too strong to describe the dynamics of the climate system appropriately.
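The direct-versus-indirect distinction central to the GGM approach can be demonstrated on synthetic data. The sketch below (our own illustration, not the authors' code) builds a causal chain x0 → x1 → x2 and compares Pearson correlation, which links x0 and x2 through the indirect path, with the partial correlation computed from the precision matrix, which correctly shows no direct edge:

```python
import numpy as np

# Partial correlations from the precision matrix of a Gaussian graphical
# model.  Synthetic chain x0 -> x1 -> x2: x0 and x2 are only indirectly linked.
rng = np.random.default_rng(1)

n = 5000
x0 = rng.normal(size=n)
x1 = x0 + 0.5 * rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)
data = np.column_stack([x0, x1, x2])

pearson = np.corrcoef(data, rowvar=False)
prec = np.linalg.inv(np.cov(data, rowvar=False))   # precision matrix Theta
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)                   # r_ij = -Theta_ij / sqrt(Theta_ii Theta_jj)
np.fill_diagonal(partial, 1.0)

print(f"Pearson r(x0, x2) = {pearson[0, 2]:.2f}")  # large: indirect dependency
print(f"partial r(x0, x2) = {partial[0, 2]:.2f}")  # near zero: no direct edge
```

Thresholding (or significance-testing) the off-diagonal partial correlations then yields the GGM edge set.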
Functional integral approach to the kinetic theory of inhomogeneous systems
NASA Astrophysics Data System (ADS)
Fouvry, Jean-Baptiste; Chavanis, Pierre-Henri; Pichon, Christophe
2016-10-01
We present a derivation of the kinetic equation describing the secular evolution of spatially inhomogeneous systems with long-range interactions, the so-called inhomogeneous Landau equation, relying on a functional integral formalism. We start from the BBGKY hierarchy derived from the Liouville equation. At order 1/N, where N is the number of particles, the evolution of the system is characterised by its 1-body distribution function (DF) and its 2-body correlation function. Introducing associated auxiliary fields, the evolution of these quantities may be rewritten as a traditional functional integral. By functionally integrating over the 2-body autocorrelation, one obtains a new constraint connecting the 1-body DF and the auxiliary fields. When inverted, this constraint allows us to obtain the closed non-linear kinetic equation satisfied by the 1-body DF. This derivation provides an alternative to previous methods based either on the direct resolution of the truncated BBGKY hierarchy or on the Klimontovich equation. It may prove fruitful for deriving more accurate kinetic equations, e.g., accounting for collective effects or higher-order correlation terms.
Nonpoint source pollution: a distributed water quality modeling approach.
León, L F; Soulis, E D; Kouwen, N; Farquhar, G J
2001-03-01
A distributed water quality model for nonpoint source pollution modeling in agricultural watersheds is described in this paper. A water quality component was developed for WATFLOOD (a flood-forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. With data extracted using Geographical Information Systems (GIS) technology for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. The transferability of model parameters to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve nonpoint source modeling at the watershed scale.
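The group-response-unit idea, computing a quantity per land-cover class and then combining by areal fraction before routing, can be sketched in a few lines. The land-cover classes, fractions, and runoff depths below are illustrative assumptions, not values from the WATFLOOD study:

```python
# Area-weighted aggregation of per-class runoff into one grid-cell value,
# as in a group response unit (GRU) scheme.  All numbers are hypothetical.
land_cover = {
    "cropland": (0.55, 12.0),   # (areal fraction, runoff depth in mm)
    "forest":   (0.30,  4.0),
    "urban":    (0.15, 25.0),
}

# Weighted sum over classes; sediment and nutrient loads would be
# aggregated the same way before downstream routing.
runoff = sum(frac * depth for frac, depth in land_cover.values())
print(f"grid-cell runoff = {runoff:.2f} mm")
```

Because each class is calibrated once and only the areal fractions change between cells, this weighting is what makes the parameters portable across watersheds.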
Different experimental approaches in modelling cataractogenesis
Kyselova, Zuzana
2010-01-01
Cataract, the opacification of the eye lens, is the leading cause of blindness worldwide. At present, the only remedy is surgical removal of the cataractous lens and its substitution with a lens made of synthetic polymers. However, besides the significant costs of the operation and possible complications, an artificial lens simply does not have the overall optical qualities of a normal one. Cataract therefore remains a significant public health problem, and biochemical solutions or pharmacological interventions that maintain the transparency of the lens are highly desirable. Naturally, there is a persistent demand for suitable biological models. The ocular lens would appear to be an ideal organ for maintaining culture conditions because it lacks blood vessels and nerves. The lens in vivo obtains its nutrients and eliminates waste products via diffusion with the surrounding fluids. Lens opacification observed in vivo can be mimicked in vitro by adding the cataractogenic agent sodium selenite (Na2SeO3) to the culture medium. Moreover, since an overdose of sodium selenite also induces cataract in young rats, it has become an extremely rapid and convenient model of nuclear cataract in vivo. The main focus of this review is on selenium (Se) and its salt sodium selenite, their toxicological characteristics, and their safety data relevant to modelling cataractogenesis under either in vivo or in vitro conditions. Studies revealing the mechanisms of lens opacification induced by selenite are highlighted, and representative results from screening for potential anti-cataract agents are listed. PMID:21217865
Walking in circles: a modelling approach
Maus, Horst-Moritz; Seyfarth, Andre
2014-01-01
Blindfolded or disoriented people tend to walk in circles rather than in a straight line, even when they intend to walk straight. Here, we use a minimalistic walking model to examine this phenomenon. The bipedal spring-loaded inverted pendulum exhibits asymptotically stable gaits with centre of mass (CoM) dynamics and ground reaction forces similar to human walking in the sagittal plane. We extend this model into three dimensions, and show that stable walking patterns persist if the leg is aligned with respect to the body (here: CoM velocity) instead of a world reference frame. Further, we demonstrate that asymmetric leg configurations, which are common in humans, will typically lead to walking in circles. The diameter of these circles depends strongly on the parameter configuration, but is in line with empirical data from human walkers. Simulation results suggest that the walking radius, and especially the direction of rotation, are highly dependent on leg configuration and walking velocity, which explains the inconsistent veering behaviour seen in repeated trials with human subjects. Finally, we discuss the relation between findings in the model and implications for human walking. PMID:25056215
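The geometric core of the circling result can be sketched far more simply than the full spring-loaded inverted pendulum: if an asymmetry rotates the heading by a fixed small angle each step, the path closes into a circle whose radius follows from the step length and the per-step bias. This kinematic caricature is our own assumption, not the authors' model, and the numbers are illustrative:

```python
import math

# Kinematic sketch of veering: each step of length `step` turns the heading
# by `bias` radians, tracing a regular polygon that approximates a circle.
step = 0.7          # step length in metres (illustrative)
bias = 0.02         # heading rotation per step in radians (illustrative)

x = y = heading = 0.0
for _ in range(int(2 * math.pi / bias)):   # roughly one full lap
    heading += bias
    x += step * math.cos(heading)
    y += step * math.sin(heading)

# Circumradius of the polygon: R = step / (2 sin(bias / 2)) ~ step / bias.
radius_est = step / (2 * math.sin(bias / 2))
print(f"walking radius ~ {radius_est:.1f} m")
```

A 0.02 rad per-step bias at a 0.7 m step length already closes a lap of roughly 35 m radius, which shows how small asymmetries can produce the walking radii reported for blindfolded walkers.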
Dailey, Lisa
2015-09-01
Completion of the human and mouse genomes has inspired new initiatives to obtain a global understanding of the functional regulatory networks governing gene expression. Enhancers are primary regulatory DNA elements that determine precise spatial and temporal gene expression patterns, but the observation that they can function at any distance from the gene(s) they regulate has made their genome-wide characterization challenging. Since traditional single-reporter approaches would be unable to accomplish this enormous task, high-throughput technologies for mapping chromatin features associated with enhancers have emerged as an effective surrogate for enhancer discovery. However, the last few years have witnessed the development of several innovative approaches that can effectively screen for and discover enhancers based on their functional activation of transcription using massively parallel reporter systems. In addition to their application in genome annotation, these new high-throughput functional approaches open new and exciting avenues for modeling gene regulatory networks.
A fuzzy logic approach to modeling a vehicle crash test
NASA Astrophysics Data System (ADS)
Pawlus, Witold; Karimi, Hamid; Robbersmyr, Kjell
2013-03-01
This paper presents an application of the fuzzy approach to vehicle crash modeling. A typical vehicle-to-pole collision is described, and the kinematics of a car involved in this type of crash event is thoroughly characterized. The basics of fuzzy set theory and the modeling principles of the fuzzy logic approach are presented. Particular attention is paid to explaining the methodology for creating a fuzzy model of a vehicle collision. Furthermore, the simulation results are presented and compared with the original vehicle kinematics. We conclude which factors influence the accuracy of the fuzzy model's output and how they can be adjusted to improve the model's fidelity.
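The basic machinery of such a fuzzy model, membership functions, rule firing strengths, and defuzzification, can be sketched briefly. The rule base, linguistic terms, breakpoints, and deceleration value below are illustrative assumptions of ours, not the paper's calibrated model:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical two-rule fuzzy description of crash severity:
#   Rule 1: IF deceleration is "moderate" THEN damage is "medium" (0.5)
#   Rule 2: IF deceleration is "severe"   THEN damage is "high"   (1.0)
decel = 32.0                            # peak deceleration in g (example input)
w_moderate = tri(decel, 10, 25, 40)     # firing strength of rule 1
w_severe   = tri(decel, 30, 50, 70)     # firing strength of rule 2

# Weighted-average (Sugeno-style) defuzzification over singleton outputs.
damage = (w_moderate * 0.5 + w_severe * 1.0) / (w_moderate + w_severe)
print(f"damage index = {damage:.2f}")
```

Tuning the feet and peaks of the membership functions against recorded crash kinematics is exactly the kind of adjustment the abstract refers to when discussing model fidelity.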
Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.
Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang
2014-01-01
Quality function deployment (QFD) provides a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of the ECs for a new product or service. QFD is a breakthrough tool that can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction, so there may be room for cooperation among the ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of the ECs in formulating the bargaining function. The solution to the proposed model is Pareto-optimal. An illustrative example demonstrates the application and performance of the proposed approach.
NASA Astrophysics Data System (ADS)
Das, Priyanka; Ahmad, Zeeshan; Singh, P. N.; Prasad, Ashutosh
2011-11-01
The present work makes use of experimental data for the real part of the microwave complex permittivity of spring oats (Avena sativa L.) at 2.45 GHz and 24 °C as a function of moisture content, as extracted from the literature. These permittivity data were individually converted to those for the solid material using seven independent mixture equations for the effective permittivity of random media. Moisture-dependent quadratic models for the complex permittivity of spring oats, as developed by the present group, were used to evaluate the dielectric loss factor of spring-oat kernels. Using these data, seven density-independent permittivity functions were evaluated and plotted as a function of the moisture content of the samples. Second- and third-order polynomial regression equations were used for curve fitting, and their performances are reported. Coefficients of determination (r²) approaching unity (~0.95-0.9999) and very small standard deviations (SD ~0.001-8.87) show good acceptability of these models. The regularity of these variations reveals the usefulness of the density-independent permittivity functions as indicators/calibrators of the moisture content of spring-oat kernels. Given that the moisture content of grains and seeds is an important factor determining quality and affecting storage, transportation, and milling, the work has potential for practical application.
β-deformed matrix model and Nekrasov partition function
NASA Astrophysics Data System (ADS)
Nishinaka, Takahiro; Rim, Chaiho
2012-02-01
We study Penner-type matrix models in relation to the Nekrasov partition function of four-dimensional N = 2 SU(2) supersymmetric gauge theories with N_F = 2, 3 and 4. By evaluating the resolvent using the loop equation for general β, we explicitly construct the first half-genus correction to the free energy and demonstrate that the result coincides with the corresponding Nekrasov partition function with general Ω-background, including higher instanton contributions, after modifying the relation of the Coulomb branch parameter to the filling fraction. Our approach complements the direct proof using Selberg integrals, which is useful for finding the contributions in the series of instanton numbers for a given deformation parameter.
Swell-Dissipation Function for Wave Models
NASA Astrophysics Data System (ADS)
Babanin, A.
2012-04-01
In this paper, we investigate swell attenuation due to the production of turbulence by wave orbital motion. Theoretically, potential waves cannot generate vortex motion, but scale considerations indicate that if the steepness of the waves is not too small, the Reynolds number can exceed critical values. This means that in the presence of initial non-potential disturbances, the orbital velocities can generate vortex motion and turbulence. This problem has been investigated by laboratory means, numerical simulations and field observations. As a sink of wave energy, such dissipation is small in the presence of wave breaking, but is essential for swell. Swell prediction by spectral wave models is often poor, yet it is important for the offshore and maritime industries and across a broad range of oceanographic and air-sea interaction applications. Based on research into wave-induced turbulence, a new swell-dissipation function is proposed. It agrees well with satellite observations of long-distance swell propagation and has been employed and tested in spectral wave models.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate the hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data alone. Finally, the addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of the observation locations.
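The role of the TV prior can be illustrated on a much simpler problem than aquifer inversion. The sketch below is our own one-dimensional analogue, not the authors' solver: it minimizes a least-squares misfit plus a smoothed TV penalty by plain gradient descent, recovering a piecewise-constant "field" from noisy pointwise measurements (all parameter values are illustrative):

```python
import numpy as np

# 1-D MAP estimate with a smoothed Total Variation prior:
#   J(u) = 0.5 * ||u - d||^2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + eps)
rng = np.random.default_rng(2)

true = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant field
data = true + 0.2 * rng.normal(size=true.size)       # noisy measurements d

lam, eps, step = 0.5, 1e-3, 0.02
u = data.copy()
for _ in range(3000):
    du = np.diff(u)
    w = du / np.sqrt(du**2 + eps)                    # derivative of smoothed TV
    grad_tv = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
    u -= step * ((u - data) + lam * grad_tv)         # gradient descent on J

print("noisy-data error:", np.linalg.norm(data - true))
print("TV-MAP error:    ", np.linalg.norm(u - true))
```

The TV term penalizes total jump size rather than squared gradients, so it suppresses noise within flat regions while keeping the sharp discontinuity, the same property that lets the paper's estimator resolve the edges of the low-conductivity intrusion.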
A modeling approach to thermoplastic pultrusion. I - Formulation of models
NASA Astrophysics Data System (ADS)
Astrom, B. T.; Pipes, R. B.
1993-06-01
Models to predict temperature and pressure distributions within a thermoplastic composite as it travels through a pultrusion line, and a model to predict the pulling resistance of a die, are presented and discussed. A set of mathematical models of the thermoplastic pultrusion process, comprising temperature, pressure, and pulling-force models, is discussed and extensively verified with experimental data.
A modular approach for item response theory modeling with the R package flirt.
Jeon, Minjeong; Rijmen, Frank
2016-06-01
The new R package flirt is introduced for flexible item response theory (IRT) modeling of psychological, educational, and behavioral assessment data. flirt integrates a generalized linear and nonlinear mixed modeling framework with graphical model theory. The graphical model framework allows for efficient maximum likelihood estimation. The key feature of flirt is its modular approach, which facilitates convenient and flexible model specification. Researchers can construct customized IRT models by simply selecting various modeling modules, such as parametric forms, number of dimensions, item and person covariates, person groups, and link functions. In this paper, we describe the major features of flirt and provide examples to illustrate how flirt works in practice.
A mixed basis density functional approach for one-dimensional systems with B-splines
NASA Astrophysics Data System (ADS)
Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung
2016-05-01
A mixed-basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions are taken to be localized B-splines for the two finite, non-periodic dimensions and plane waves for the third, periodic direction. This approach significantly reduces the number of basis functions and is therefore computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient for the two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not associated with atomic positions when the geometric structure is optimized, making geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between a charged defect and its repeated images in the supercell approach is also avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed-basis scheme is derived. To test the present method, we apply it to the infinite carbon-dimer chain, a graphene nanoribbon, a carbon nanotube and a positively charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.
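The localized B-spline basis at the heart of this scheme is usually evaluated with the Cox-de Boor recursion. The following is a generic textbook sketch of that recursion (the knot vector and evaluation point are our own illustrative choices, unrelated to the paper's calculations), verifying the partition-of-unity property that makes B-splines well suited as a basis:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th degree-k B-spline at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Cubic B-splines on a uniform knot vector (illustrative choice).
knots = [0, 1, 2, 3, 4, 5, 6, 7, 8]
k = 3
t = 3.7
vals = [bspline_basis(i, k, t, knots) for i in range(len(knots) - k - 1)]
print("sum of basis values:", sum(vals))   # equals 1 in the full-support region
```

Each degree-k spline is nonzero only over k+1 knot intervals; it is this strict locality that keeps the two-dimensional integrals in the paper's Kohn-Sham matrix elements cheap.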
Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches
ERIC Educational Resources Information Center
Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem
2014-01-01
Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…
Shen, Hua; McHale, Cliona M.; Smith, Martyn T; Zhang, Luoping
2015-01-01
Characterizing variability in the extent and nature of responses to environmental exposures is a critical aspect of human health risk assessment. Chemical toxicants act by many different mechanisms, however, and the genes involved in adverse outcome pathways (AOPs) and AOP networks are not yet characterized. Functional genomic approaches can reveal both toxicity pathways and susceptibility genes through knockdown or knockout of all non-essential genes in a cell of interest and identification of genes associated with a toxicity phenotype following toxicant exposure. Screening approaches in yeast and in human near-haploid leukemic KBM7 cells have identified roles for genes and pathways involved in the response to many toxicants, but are limited by partial homology between yeast and human genes and by limited relevance to normal diploid cells. RNA interference (RNAi) suppresses mRNA expression but is limited by off-target effects (OTEs) and incomplete knockdown. The recently developed gene-editing approach based on clustered regularly interspaced short palindromic repeats and the associated nuclease Cas9 (CRISPR-Cas9) can precisely knock out most regions of the genome at the DNA level with fewer OTEs than RNAi, in multiple human cell types, thus overcoming the limitations of the other approaches. It has been used to identify genes involved in the response to chemical and microbial toxicants in several human cell types and could readily be extended to the systematic screening of large numbers of environmental chemicals. CRISPR-Cas9 can also repress and activate gene expression, including that of non-coding RNA, with near-saturation, thus offering the potential to characterize AOPs and AOP networks more fully. Finally, CRISPR-Cas9 can generate complex animal models in which to conduct preclinical toxicity testing at the level of individual genotypes or haplotypes. Therefore, CRISPR-Cas9 is a powerful and flexible functional genomic screening approach that can be harnessed to provide
A new genetic fuzzy system approach for parameter estimation of ARIMA model
NASA Astrophysics Data System (ADS)
Hassan, Saima; Jaafar, Jafreezal; Belhaouari, Brahim S.; Khosravi, Abbas
2012-09-01
The Autoregressive Integrated Moving Average (ARIMA) model is among the most powerful and practical time series models for forecasting. Parameter estimation is the most crucial part of ARIMA modeling: inaccurately estimated parameters lead to bias and unacceptable forecasting results, so parameter optimization can be adopted to increase forecasting accuracy. A paradigm combining a fuzzy system and a genetic algorithm is proposed in this paper as a parameter estimation approach for ARIMA. The new approach optimizes the parameters by tuning the fuzzy membership functions with a genetic algorithm. The proposed hybrid model of ARIMA and the genetic fuzzy system yields acceptable forecasting results.
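The evolutionary search underlying such a scheme can be sketched on a deliberately simplified problem. Instead of tuning fuzzy membership functions for full ARIMA parameters as the paper does, the toy below (entirely our own construction) uses a bare genetic algorithm, truncation selection plus Gaussian mutation, to find the AR(1) coefficient that minimizes the one-step-ahead squared error on a synthetic series:

```python
import numpy as np

# Genetic-algorithm search for the AR(1) coefficient phi minimizing the
# one-step prediction error.  All parameter values are illustrative.
rng = np.random.default_rng(3)

phi_true = 0.7
y = np.zeros(300)
for t in range(1, y.size):
    y[t] = phi_true * y[t - 1] + 0.1 * rng.normal()

def fitness(phi):
    return -np.sum((y[1:] - phi * y[:-1]) ** 2)      # higher is better

pop = rng.uniform(-1.0, 1.0, size=40)                # initial population
for _ in range(60):                                  # generations
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)][-20:]               # keep the best half
    children = rng.choice(parents, size=20) + 0.05 * rng.normal(size=20)
    pop = np.concatenate([parents, children])        # elitism + mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"estimated phi ~ {best:.2f} (true {phi_true})")
```

In the paper's hybrid, the chromosome would instead encode the breakpoints of the fuzzy membership functions, and the fitness would score the resulting ARIMA forecast accuracy, but the select-mutate-replace loop is the same.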
Coupling approaches used in atmospheric entry models
NASA Astrophysics Data System (ADS)
Gritsevich, M. I.
2012-09-01
While a planet orbits the Sun, it is subject to impacts by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and have played an important role in the evolution of planets and the development of life on Earth. Though the search for near-Earth objects addresses one of the main points of the asteroid and comet hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they also provide a good statistical basis for analysis. For successful cases with recovered meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, and what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the objects which fully ablate in the atmosphere and do not reach the planetary surface? Which values are directly measured, and which are initially assumed as input to various models? How can both fragmentation and ablation effects be coupled in a model, taking the real size distribution of fragments into account? How can the recovery of recently fallen meteorites be specified and sped up, without letting weathering affect the samples too much? How large is the ratio of the pre-atmospheric projectile to the terminal body in terms of mass/volume, and which parameters besides initial mass define this ratio? More generally, how does an entering object affect Earth's atmosphere and (if applicable) Earth's surface, and how can these impact consequences be predicted from atmospheric trajectory data? How to describe atmospheric entry
Extension of the Nakajima-Zwanzig approach to multitime correlation functions of open systems
NASA Astrophysics Data System (ADS)
Ivanov, Anton; Breuer, Heinz-Peter
2015-09-01
We extend the Nakajima-Zwanzig projection operator technique to the determination of multitime correlation functions of open quantum systems. The correlation functions are expressed in terms of certain multitime homogeneous and inhomogeneous memory kernels for which suitable equations of motion are derived. We show that under the condition of finite memory times, these equations can be used to determine the memory kernels by employing an exact stochastic unraveling of the full system-environment dynamics. The approach thus allows us to combine exact stochastic methods, feasible for short times, with long-time master equation simulations. The applicability of the method is demonstrated by numerical simulations of two-dimensional spectra for a donor-acceptor model, and by comparison of the results with those obtained from the reduced hierarchy equations of motion. We further show that the formalism is also applicable to the time evolution of a periodically driven two-level system initially in equilibrium with its environment.
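For orientation, the standard single-time Nakajima-Zwanzig equation that this multitime extension builds on can be sketched as follows (a textbook form, not the paper's multitime kernels). With the projection onto the relevant part $P\rho = \mathrm{tr}_E(\rho)\otimes\rho_E$ and $Q = 1 - P$:

```latex
\frac{\partial}{\partial t}\, P\rho(t)
  = P\mathcal{L}(t)\,P\rho(t)
  + \int_0^t \! ds\; \mathcal{K}(t,s)\, P\rho(s)
  + P\mathcal{L}(t)\,\mathcal{G}(t,0)\, Q\rho(0),
\qquad
\mathcal{K}(t,s) = P\mathcal{L}(t)\,\mathcal{G}(t,s)\, Q\mathcal{L}(s) P,
```

where $\mathcal{G}(t,s)$ is the time-ordered propagator generated by $Q\mathcal{L}$ in the irrelevant subspace. The homogeneous memory kernel $\mathcal{K}$ and the inhomogeneous term involving $Q\rho(0)$ are the objects that the multitime formalism generalizes.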
Mixtures of ions and amphiphilic molecules in slit-like pores: A density functional approach
Pizio, O.; Rżysko, W.; Sokołowski, S.; Sokołowska, Z.
2015-04-28
We investigate the microscopic structure and thermodynamic properties of a mixture that contains amphiphilic molecules and charged hard spheres confined in slit-like pores with uncharged hard walls. The model and the density functional approach are the same as described in detail in our previous work [Pizio et al., J. Chem. Phys. 140, 174706 (2014)]. Our principal focus is on exploring the effects brought about by the presence of ions on the structure of confined amphiphilic particles. We have found that for some cases of anisotropic interactions, the change in the structure of the confined fluid occurs via first-order transitions. Moreover, if anions and cations are attracted by different hemispheres of the amphiphiles, a charge at the walls appears at zero wall electrostatic potential. For a given thermodynamic state, this charge is an oscillating function of the pore width.
Approaches to organizing public relations functions in healthcare.
Guy, Bonnie; Williams, David R; Aldridge, Alicia; Roggenkamp, Susan D
2007-01-01
This article provides health care audiences with a framework for understanding different perspectives of the role and functions of public relations in healthcare organizations and the resultant alternatives for organizing and enacting public relations functions. Using an example of a current issue receiving much attention in US healthcare (improving rates of organ donation), the article provides examples of how these different perspectives influence public relations goals and objectives, definitions of 'public', activities undertaken, who undertakes them and where they fit into the organizational hierarchy. PMID:19042525
Smith, David V; Utevsky, Amanda V; Bland, Amy R; Clement, Nathan; Clithero, John A; Harsch, Anne E W; McKell Carter, R; Huettel, Scott A
2014-07-15
A central challenge for neuroscience lies in relating inter-individual variability to the functional properties of specific brain regions. Yet, considerable variability exists in the connectivity patterns between different brain areas, potentially producing reliable group differences. Using sex differences as a motivating example, we examined two separate resting-state datasets comprising a total of 188 human participants. Both datasets were decomposed into resting-state networks (RSNs) using a probabilistic spatial independent component analysis (ICA). We estimated voxel-wise functional connectivity with these networks using a dual-regression analysis, which characterizes the participant-level spatiotemporal dynamics of each network while controlling for (via multiple regression) the influence of other networks and sources of variability. We found that males and females exhibit distinct patterns of connectivity with multiple RSNs, including both visual and auditory networks and the right frontal-parietal network. These results replicated across both datasets and were not explained by differences in head motion, data quality, brain volume, cortisol levels, or testosterone levels. Importantly, we also demonstrate that dual-regression functional connectivity is better at detecting inter-individual variability than traditional seed-based functional connectivity approaches. Our findings characterize robust, yet frequently ignored, neural differences between males and females, pointing to the necessity of controlling for sex in neuroscience studies of individual differences. Moreover, our results highlight the importance of employing network-based models to study variability in functional connectivity. PMID:24662574
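The two-stage dual-regression procedure described above can be sketched with synthetic data as follows. This is a minimal NumPy illustration with assumed array shapes, not the actual pipeline used in the study: stage 1 regresses the group-level spatial maps onto one subject's data to obtain network time courses, and stage 2 regresses those time courses back onto the data to obtain subject-specific spatial maps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_voxels, n_networks = 120, 500, 4

# Group-level spatial maps from a group ICA, and one subject's
# time-by-voxel data matrix (both synthetic here).
group_maps = rng.standard_normal((n_networks, n_voxels))
data = rng.standard_normal((n_time, n_voxels))

# Stage 1: multiple regression across voxels gives each network's time
# course while controlling for the other networks' maps.
tcs, *_ = np.linalg.lstsq(group_maps.T, data.T, rcond=None)
# tcs has shape (n_networks, n_time)

# Stage 2: voxel-wise multiple regression on the time courses gives
# subject-specific spatial maps, again controlling for other networks.
subj_maps, *_ = np.linalg.lstsq(tcs.T, data, rcond=None)
# subj_maps has shape (n_networks, n_voxels)
```

The subject-level spatial maps from stage 2 are what would then be compared across groups (e.g., males versus females) with appropriate corrections.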
Parry, David A D
2016-01-01
Experimental and theoretical research aimed at determining the structure and function of the family of intermediate filament proteins has made significant advances over the past 20 years. Much of this has either contributed to or relied on the amino acid sequence databases that are now available online, and the data mining approaches that have been developed to analyze these sequences. As the quality of sequence data is generally high, it follows that it is the design of the computational and graphical methodologies that is of special importance to researchers who aspire to gain a greater understanding of those sequence features that specify both function and structural hierarchy. However, these techniques are necessarily subject to limitations, and it is important that these be recognized. In addition, no single method is likely to be successful in solving a particular problem, and a coordinated approach using a suite of methods is generally required. A final step in the process involves the interpretation of the results obtained and the construction of a working model or hypothesis that suggests further experimentation. While such methods allow meaningful progress to be made, it is still important that the data are interpreted correctly and conservatively. New data mining methods are continually being developed, and it can be expected that even greater understanding of the relationship between structure and function will be gleaned from sequence data in the coming years.
Automatic determination of radial basis functions: an immunity-based approach.
de Castro, L N; Von Zuben, F J
2001-12-01
The appropriate operation of a radial basis function (RBF) neural network depends mainly upon an adequate choice of the parameters of its basis functions. The simplest approach to training an RBF network is to assume fixed radial basis functions defining the activation of the hidden units. Once the RBF parameters are fixed, the optimal set of output weights can be determined straightforwardly by using a linear least squares algorithm, which generally means a reduction in learning time as compared to the determination of all RBF network parameters using supervised learning. The main drawback of this strategy is the requirement of an efficient algorithm to determine the number, position, and dispersion of the RBFs. The approach proposed here is inspired by models derived from the vertebrate immune system, which will be shown to perform unsupervised cluster analysis. The algorithm is introduced and its performance is compared to that of the random and k-means center selection procedures, as well as to other results from the literature. By automatically defining the number of RBF centers, their positions and dispersions, the proposed method leads to parsimonious solutions. Simulation results are reported concerning regression and classification problems.
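The fixed-basis strategy described above can be sketched in a few lines: choose centers by some clustering procedure (here plain k-means stands in for the immune-inspired algorithm of the paper, which is not reproduced), build the design matrix of Gaussian activations, and solve the output weights in closed form by least squares. Names, the toy problem, and the width parameter are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, sigma):
    """Gaussian activations of each hidden unit, plus a bias column."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])

def kmeans(X, k, iters=50):
    """Plain k-means center selection (stand-in for the immune approach)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers

# Toy regression problem: y = sin(3x) + noise.
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

centers = kmeans(X, k=8)
Phi = rbf_design(X, centers, sigma=0.3)
# With the basis fixed, the output weights are a linear least squares fit.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = rbf_design(X, centers, 0.3) @ w
```

Swapping in a better center-selection routine, such as the immune-inspired clustering proposed in the paper, changes only the `kmeans` step; the closed-form weight solution is unchanged.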