Thermoplasmonics modeling: A Green's function approach
NASA Astrophysics Data System (ADS)
Baffou, Guillaume; Quidant, Romain; Girard, Christian
2010-10-01
We extend the discrete dipole approximation (DDA) and the Green's dyadic tensor (GDT) methods, previously dedicated to all-optical simulations, to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green's function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigating arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and the heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
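The superposition idea behind the thermal Green's function method can be sketched in a few lines: in a homogeneous medium, the steady-state temperature rise is a sum of point-source contributions q_j / (4πκ|r − r_j|). This is an illustrative toy under a uniform-medium assumption, not the authors' DDA/GDT implementation (which also handles the substrate and self-consistent heat sources):

```python
import numpy as np

def steady_temperature(points, powers, kappa, r_eval):
    """Steady-state temperature rise at r_eval from point heat sources.

    Each source of power q_j contributes via the free-space thermal
    Green's function G(r, r') = 1 / (4 * pi * kappa * |r - r'|).
    """
    points = np.asarray(points, float)   # (N, 3) source positions [m]
    powers = np.asarray(powers, float)   # (N,) dissipated powers [W]
    d = np.linalg.norm(np.asarray(r_eval, float) - points, axis=1)
    return float(np.sum(powers / (4.0 * np.pi * kappa * d)))
```

Because the heat equation is linear, contributions simply add, which is what makes the Laplace-matrix formulation possible.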
Functional state modelling approach validation for yeast and bacteria cultivations
Roeva, Olympia; Pencheva, Tania
2014-01-01
In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This provides strong structural verification of the functional state modelling theory, not only for a set of yeast cultivations but also for a bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
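Genetic-algorithm parameter identification, as used above, can be illustrated on a toy exponential-growth model. Everything here (the model, the blend crossover, the population settings) is a hypothetical minimal sketch, not the authors' identification procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sse(params, t, y):
    """Sum of squared errors for a toy growth model x(t) = x0 * exp(mu * t)."""
    mu, x0 = params
    return float(np.sum((x0 * np.exp(mu * t) - y) ** 2))

def ga_fit(t, y, bounds, pop=40, gens=60):
    """Minimal real-coded GA: elitism, blend crossover, Gaussian mutation."""
    lo, hi = np.array(bounds, float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fitness = np.array([sse(p, t, y) for p in P])
        elite = P[np.argsort(fitness)[: pop // 2]]   # keep the best half
        children = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            w = rng.uniform(size=len(lo))
            child = w * a + (1 - w) * b + rng.normal(0, 0.01 * (hi - lo))
            children.append(np.clip(child, lo, hi))
        P = np.vstack([elite, children])
    fitness = np.array([sse(p, t, y) for p in P])
    return P[np.argmin(fitness)]
```

Real cultivation models replace `sse` with the misfit between simulated and measured biomass, substrate, and product concentrations.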
Stochastic Functional Data Analysis: A Diffusion Model-based Approach
Zhu, Bin; Song, Peter X.-K.; Taylor, Jeremy M.G.
2011-01-01
This paper presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data, and allow straightforward and meaningful interpretation. The likelihood of the models is derived with Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov Chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on some prostate specific antigen data, where we also show how the models can be used for forecasting. PMID:21418053
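The Euler approximation mentioned above can be made concrete with an Ornstein-Uhlenbeck toy example: the Euler scheme implies Gaussian transition densities, which is what makes the likelihood tractable. This is an illustrative sketch, not the paper's diffusion model:

```python
import numpy as np

def euler_maruyama_ou(theta, mu, sigma, x0, dt, n, rng):
    """Simulate dX_t = theta*(mu - X_t) dt + sigma dW_t by Euler-Maruyama."""
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = (x[k] + theta * (mu - x[k]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

def euler_loglik(path, theta, mu, sigma, dt):
    """Log-likelihood under the Gaussian transitions implied by Euler."""
    x = np.asarray(path, float)
    m = x[:-1] + theta * (mu - x[:-1]) * dt   # conditional mean
    v = sigma ** 2 * dt                        # conditional variance
    r = x[1:] - m
    return float(-0.5 * np.sum(np.log(2.0 * np.pi * v) + r ** 2 / v))
```

In the paper this Euler likelihood is embedded in a Bayesian MCMC scheme with data augmentation; here it is shown stand-alone.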
The Thirring-Wess model revisited: a functional integral approach
Belvedere, L. V. (E-mail: armflavio@if.uff.br)
2005-06-01
We consider the Wess-Zumino-Witten theory to obtain the functional integral bosonization of the Thirring-Wess model with an arbitrary regularization parameter. Proceeding systematically by decomposing the Bose field algebra into gauge-invariant and gauge-non-invariant field subalgebras, we obtain the local decoupled quantum action. The generalized operator solutions for the equations of motion are reconstructed from the functional integral formalism. The isomorphism between QED_2 (QCD_2) with gauge symmetry broken by a regularization prescription and the Abelian (non-Abelian) Thirring-Wess model with a fixed bare mass for the meson field is established.
A Model-Based Approach to Constructing Music Similarity Functions
NASA Astrophysics Data System (ADS)
West, Kris; Lamere, Paul
2006-12-01
Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
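The idea of measuring similarity in an intermediate space defined by cultural labels can be mimicked with a nearest-centroid class-probability model. The actual system trains a full machine-learning classifier on genre/style/emotion labels, so this is only a structural sketch with invented data shapes:

```python
import numpy as np

def fit_similarity(features, labels):
    """Build a distance function in label-probability space.

    Class probabilities come from a softmax over negative distances to
    class centroids; two tracks are 'similar' when the model assigns
    them similar label posteriors, not when raw features coincide.
    """
    X, y = np.asarray(features, float), np.asarray(labels)
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])

    def proba(x):
        d = np.linalg.norm(np.asarray(x, float) - centroids, axis=1)
        e = np.exp(-d)
        return e / e.sum()

    def distance(a, b):
        return float(np.linalg.norm(proba(a) - proba(b)))

    return distance
```

The design choice illustrated here is the paper's key point: the projection through a label-predicting model carves the feature space into perceptually meaningful regions before any distance is computed.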
Model approach to starch functionality in bread making.
Goesaert, Hans; Leman, Pedro; Delcour, Jan A
2008-08-13
We used modified wheat starches in gluten-starch flour models to study the role of starch in bread making. Incorporation of hydroxypropylated starch in the recipe reduced loaf volume and initial crumb firmness and increased crumb gas cell size. Firming rate and firmness after storage increased for loaves containing the least hydroxypropylated starch. Inclusion of cross-linked starch had little effect on loaf volume or crumb structure but increased crumb firmness. The firming rate was mostly similar to that of control samples. Presumably, the moment and extent of starch gelatinization and the concomitant water migration influence the structure formation during baking. Initial bread firmness seems determined by the rigidity of the gelatinized granules and leached amylose. Amylopectin retrogradation and strengthening of a long-range network by intensifying the inter- and intramolecular starch-starch and possibly also starch-gluten interactions (presumably because of water incorporation in retrograded amylopectin crystallites) play an important role in firming.
NASA Astrophysics Data System (ADS)
Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.
2016-10-01
Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach to the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ˜20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe tradeoffs - an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provide a meaningful framework for interrogating Earth structure from receiver function data.
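The sampler described above can be sketched as a Metropolis-within-Gibbs sweep on a toy linear forward model: one model parameter is perturbed per sub-step and accepted with the Metropolis rule under a Gaussian misfit. The receiver-function forward problem, priors, and parameterization are far richer, so treat this as a schematic of the sampling machinery only:

```python
import numpy as np

def metropolis_within_gibbs(forward, data, init, widths, sigma, n_iter, rng):
    """Coordinate-wise Metropolis updates (a Gibbs-style sweep)."""
    def logpost(theta):
        r = forward(theta) - data
        return -0.5 * float(np.sum(r ** 2)) / sigma ** 2
    theta = np.array(init, float)
    lp = logpost(theta)
    chain = np.empty((n_iter, len(theta)))
    for i in range(n_iter):
        for j in range(len(theta)):          # one parameter per sub-step
            prop = theta.copy()
            prop[j] += widths[j] * rng.standard_normal()
            lp_new = logpost(prop)
            if np.log(rng.uniform()) < lp_new - lp:
                theta, lp = prop, lp_new
        chain[i] = theta
    return chain
```

The chain's spread directly exposes the parameter trade-offs the abstract warns about: poorly constrained parameters show wide, correlated marginals.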
A Radiative Transfer Equation/Phase Function Approach to Vegetation Canopy Reflectance Modeling
NASA Astrophysics Data System (ADS)
Randolph, Marion Herbert
Vegetation canopy reflectance models currently in use differ considerably in their treatment of the radiation scattering problem, and it is this fundamental difference which stimulated this investigation of the radiative transfer equation/phase function approach. The primary objective of this thesis is the development of vegetation canopy phase functions which describe the probability of radiation scattering within a canopy in terms of its biological and physical characteristics. In this thesis a technique based upon quadrature formulae is used to numerically generate a variety of vegetation canopy phase functions. Based upon leaf inclination distribution functions, phase functions are generated for plagiophile, extremophile, erectophile, spherical, planophile, blue grama (Bouteloua gracilis), and soybean canopies. The vegetation canopy phase functions generated are symmetric with respect to the incident and exitant angles, and hence satisfy the principle of reciprocity. The remaining terms in the radiative transfer equation are also derived in terms of canopy geometry and optical properties to complete the development of the radiative transfer equation/phase function description for vegetation canopy reflectance modeling. In order to test the radiative transfer equation/phase function approach the iterative discrete ordinates method for solving the radiative transfer equation is implemented. In comparison with field data, the approach tends to underestimate the visible reflectance and overestimate infrared reflectance. The approach does compare well, however, with other extant canopy reflectance models; for example, it agrees to within ten to fifteen percent of the Suits model (Suits, 1972). Sensitivity analysis indicates that canopy geometry may influence reflectance as much as 100 percent for a given wavelength. Optical thickness produces little change in reflectance after a depth of 2.5 (leaf area index of 4.0) is reached, and reflectance generally increases
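The quadrature machinery used to generate and check phase functions can be illustrated with Gauss-Legendre nodes, the standard ingredient of discrete-ordinates solvers. The Henyey-Greenstein function below is a generic scattering phase function used only for illustration, not one of the canopy phase functions derived in the thesis:

```python
import numpy as np

def hg_phase(mu, g):
    """Henyey-Greenstein phase function of mu = cos(scattering angle)."""
    return (1.0 - g ** 2) / (1.0 + g ** 2 - 2.0 * g * mu) ** 1.5

def quadrature_normalization(phase, n=32):
    """Evaluate (1/2) * integral_{-1}^{1} p(mu) d(mu) with Gauss-Legendre
    quadrature; a properly normalized phase function gives 1."""
    mu, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * float(np.sum(w * phase(mu)))
```

The same quadrature nodes double as the discrete ordinates when the radiative transfer equation is solved iteratively.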
Functional modelling of planar cell polarity: an approach for identifying molecular function
2013-01-01
Background Cells in some tissues acquire a polarisation in the plane of the tissue in addition to apical-basal polarity. This polarisation is commonly known as planar cell polarity and has been found to be important in developmental processes, as planar polarity is required to define the in-plane tissue coordinate system at the cellular level. Results We have built an in-silico functional model of cellular polarisation that includes cellular asymmetry, cell-cell signalling and a response to a global cue. The model has been validated and parameterised against domineering non-autonomous wing hair phenotypes in Drosophila. Conclusions We have carried out a systematic comparison of in-silico polarity phenotypes with patterns observed in vivo under different genetic manipulations in the wing. This has allowed us to classify the specific functional roles of proteins involved in generating cell polarity, providing new hypotheses about their specific functions, in particular for Pk and Dsh. The predictions from the model allow direct assignment of functional roles of genes from genetic mosaic analysis of Drosophila wings. PMID:23672397
Berhane, Kiros; Molitor, Nuoo-Ting
2008-10-01
Flexible multilevel models are proposed to allow for cluster-specific smooth estimation of growth curves in a mixed-effects modeling format that includes subject-specific random effects on the growth parameters. Attention is then focused on models that examine between-cluster comparisons of the effects of an ecologic covariate of interest (e.g. air pollution) on nonlinear functionals of growth curves (e.g. maximum rate of growth). A Gibbs sampling approach is used to get posterior mean estimates of nonlinear functionals along with their uncertainty estimates. A second-stage ecologic random-effects model is used to examine the association between a covariate of interest (e.g. air pollution) and the nonlinear functionals. A unified estimation procedure is presented along with its computational and theoretical details. The models are motivated by, and illustrated with, lung function and air pollution data from the Southern California Children's Health Study.
Ruggieri, Alexander P; Pakhomov, Serguei V; Chute, Christopher G
2004-01-01
In an effort to unearth semantic models that could prove fruitful for functional-status terminology development, we applied the "frame semantic" method, derived from the linguistic theory of thematic roles currently exemplified in the Berkeley "FrameNet" Project. Full descriptive sentences with functional-status conceptual meaning were derived from structured content within a corpus of questionnaire assessment instruments commonly used in clinical practice for functional-status assessment. Syntactic components in those sentences were delineated through manual annotation and mark-up. The annotated syntactic constituents were tagged as frame elements according to their semantic role within the context of the derived functional-status expression. Through this process, generalizable "semantic frames" with recurring "frame elements" were elaborated. The "frame semantic" method as an approach to rendering semantic models for functional-status terminology development, and its use as a basis for machine recognition of functional-status data in clinical narratives, are discussed.
An approach to numerical quantification of room shape and its function in diffuse sound field model.
Šumarac-Pavlović, Dragana; Mijić, Miomir
2016-10-01
This paper deals with an approach to the numerical quantification of room shape and its possible role in diffuse field modeling. The normalized shape factor of the room is introduced as a function of the room volume and the room interior surface. It was shown that in real rooms the value of the normalized shape factor ranges from about 0.57 to 0.9. Some simple transformations of well-known formulas obtained by introducing the room shape factor are also discussed. Such an approach seems appropriate in architectural acoustics courses as a straightforward way to explain the factors influencing the acoustic response of a room.
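One plausible reading of the normalized shape factor (an assumption on our part; the paper's exact definition may differ) is the surface area of the equal-volume sphere divided by the room's actual interior surface. This gives 1 for a sphere and about 0.806 for a cube, consistent with the quoted 0.57 to 0.9 range for real rooms:

```python
import math

def normalized_shape_factor(volume, surface):
    """Surface area of the equal-volume sphere divided by the actual
    interior surface; 1 for a sphere, smaller for any other shape.
    (Illustrative definition; the paper's may differ in detail.)"""
    sphere_surface = (36.0 * math.pi * volume ** 2) ** (1.0 / 3.0)
    return sphere_surface / surface

def box_shape_factor(length, width, height):
    """Shape factor of a rectangular (shoebox) room."""
    volume = length * width * height
    surface = 2.0 * (length * width + length * height + width * height)
    return normalized_shape_factor(volume, surface)
```

A 6 m x 5 m x 3 m classroom, for instance, falls inside the reported range, while increasingly elongated rooms drift toward the lower bound.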
NASA Astrophysics Data System (ADS)
Reich, P. B.; Butler, E. E.
2015-12-01
This project will advance global land models by shifting from the current plant functional type approach to one that better utilizes what is known about the importance and variability of plant traits, within a framework of simultaneously improving fundamental physiological relations that are at the core of model carbon cycling algorithms. Existing models represent the global distribution of vegetation types using the Plant Functional Type concept. Plant Functional Types are classes of plant species with similar evolutionary and life history, with presumably similar responses to environmental conditions like CO2, water and nutrient availability. Fixed properties for each Plant Functional Type are specified through a collection of physiological parameters, or traits. These traits, mostly physiological in nature (e.g., leaf nitrogen and longevity), are used in model algorithms to estimate ecosystem properties and/or drive calculated process rates. In most models, 5 to 15 functional types represent terrestrial vegetation; in essence, they assume there are a total of only 5 to 15 different kinds of plants on the entire globe. This assumption of constant plant traits captured within the functional type concept has serious limitations, as a single set of traits does not reflect trait variation observed within and between species and communities. While this simplification was necessary decades past, substantial improvement is now possible. Rather than assigning a small number of constant parameter values to all grid cells in a model, procedures will be developed that predict a frequency distribution of values for any given grid cell. Thus, the mean and variance, and how these change with time, will inform and improve model performance. The trait-based approach will improve land modeling by (1) incorporating patterns and heterogeneity of traits into model parameterization, thus evolving away from a framework that considers large areas of vegetation to have near identical trait
Modeling and Simulation Approaches for Cardiovascular Function and Their Role in Safety Assessment
Collins, TA; Bergenholm, L; Abdulla, T; Yates, JWT; Evans, N; Chappell, MJ; Mettetal, JT
2015-01-01
Systems pharmacology modeling and pharmacokinetic-pharmacodynamic (PK/PD) analysis of drug-induced effects on cardiovascular (CV) function plays a crucial role in understanding the safety risk of new drugs. The aim of this review is to outline the current modeling and simulation (M&S) approaches to describe and translate drug-induced CV effects, with an emphasis on how this impacts drug safety assessment. Current limitations are highlighted and recommendations are made for future effort in this vital area of drug research. PMID:26225237
A new approach to wall modeling in LES of incompressible flow via function enrichment
NASA Astrophysics Data System (ADS)
Krank, Benjamin; Wall, Wolfgang A.
2016-07-01
A novel approach to wall modeling for the incompressible Navier-Stokes equations including flows of moderate and large Reynolds numbers is presented. The basic idea is that a problem-tailored function space allows prediction of turbulent boundary layer gradients with very coarse meshes. The proposed function space consists of a standard polynomial function space plus an enrichment, which is constructed using Spalding's law-of-the-wall. The enrichment function is not enforced but "allowed" in a consistent way and the overall methodology is much more general and also enables other enrichment functions. The proposed method is closely related to detached-eddy simulation as near-wall turbulence is modeled statistically and large eddies are resolved in the bulk flow. Interpreted in terms of a three-scale separation within the variational multiscale method, the standard scale resolves large eddies and the enrichment scale represents boundary layer turbulence in an averaged sense. The potential of the scheme is shown by applying it to turbulent channel flow at friction Reynolds numbers from Re_tau = 590 up to 5,000, flow over periodic constrictions at Reynolds numbers Re_H = 10,595 and 19,000, as well as backward-facing step flow at Re_h = 5,000, all with extremely coarse meshes. Excellent agreement with experimental and DNS data is observed with the first grid point located at up to y1+ = 500, and especially under adverse pressure gradients as well as in separated flows.
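Spalding's law of the wall, on which the enrichment is built, gives y+ explicitly as a function of u+; inverting it numerically is a routine ingredient of wall modeling. The constants below are representative textbook values and not necessarily those used in the paper:

```python
import math

KAPPA, B = 0.41, 5.0   # representative von Karman constant and log-law intercept

def spalding_yplus(uplus):
    """Spalding's law of the wall: y+ as an explicit function of u+."""
    ku = KAPPA * uplus
    return uplus + math.exp(-KAPPA * B) * (
        math.exp(ku) - 1.0 - ku - ku ** 2 / 2.0 - ku ** 3 / 6.0)

def spalding_uplus(yplus):
    """Invert Spalding's law for u+ by bisection (y+ is monotone in u+)."""
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if spalding_yplus(mid) < yplus:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The single formula blends the viscous sublayer (u+ ~ y+), the buffer layer, and the log layer, which is exactly why it makes a convenient enrichment shape function.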
A signal subspace approach for modeling the hemodynamic response function in fMRI.
Hossein-Zadeh, Gholam-Ali; Ardekani, Babak A; Soltanian-Zadeh, Hamid
2003-10-01
Many fMRI analysis methods use a model for the hemodynamic response function (HRF). Common models of the HRF, such as the Gaussian or Gamma functions, have parameters that are usually selected a priori by the data analyst. A new method is presented that characterizes the HRF over a wide range of parameters via three basis signals derived using principal component analysis (PCA). Covering the HRF variability, these three basis signals together with the stimulation pattern define signal subspaces which are applicable to both linear and nonlinear modeling and identification of the HRF and for various activation detection strategies. Analysis of simulated fMRI data using the proposed signal subspace showed increased detection sensitivity compared to the case of using a previously proposed trigonometric subspace. The methodology was also applied to activation detection in both event-related and block design experimental fMRI data using both linear and nonlinear modeling of the HRF. The activated regions were consistent with previous studies, indicating the ability of the proposed approach in detecting brain activation without a priori assumptions about the shape parameters of the HRF. The utility of the proposed basis functions in identifying the HRF is demonstrated by estimating the HRF in different activated regions.
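The PCA construction of the basis signals can be sketched as follows: generate an ensemble of HRFs over a grid of shape parameters and keep the leading principal components. The gamma-shaped kernel and parameter ranges are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def gamma_hrf(t, peak, width):
    """A gamma-shaped HRF (illustrative parametric family)."""
    shape = (peak / width) ** 2
    scale = width ** 2 / peak
    h = t ** (shape - 1.0) * np.exp(-t / scale)
    return h / h.max()

def hrf_basis(t, peaks, widths, n_basis=3):
    """PCA over an ensemble of HRFs spanning the parameter ranges;
    the leading right singular vectors form the signal-subspace basis."""
    H = np.array([gamma_hrf(t, p, w) for p in peaks for w in widths])
    H -= H.mean(axis=0)
    _, s, Vt = np.linalg.svd(H, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return Vt[:n_basis], float(explained[n_basis - 1])
```

Convolving these basis signals with the stimulation pattern then yields the signal subspace used for detection, without fixing the HRF shape a priori.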
Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A
2009-08-15
The factorial design of experiments and desirability function approach has been applied for multi-response optimization of a pervaporation separation process. Two organic aqueous solutions were considered as model mixtures, water/acetonitrile and water/ethanol. Two responses have been employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of organic compound in the feed solution, and downstream pressure) on the pervaporation responses have been investigated. The experiments were performed according to a 2^3 full factorial experimental design. The factorial models have been obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models have been used to develop the overall desirability function. In addition, overlap contour plots were presented to identify the desirability zone and to determine the optimum point. The optimal operating conditions were found to be, in the case of the water/acetonitrile mixture, a feed temperature of 55 degrees C, an initial concentration of 6.58% and a downstream pressure of 13.99 kPa, while for the water/ethanol mixture a feed temperature of 55 degrees C, an initial concentration of 4.53% and a downstream pressure of 9.57 kPa. Under these optimum conditions, an improvement of both the total permeate flux and the selectivity was observed experimentally.
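The desirability function approach combines the individual response desirabilities into one objective via a geometric mean, so that any completely unacceptable response vetoes the whole operating point. A minimal Derringer-Suich-style sketch (ranges and weights below are placeholders, not values from the paper):

```python
import numpy as np

def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """One-sided desirability: 0 at or below y_min, 1 at or above y_max,
    a power-law ramp in between (the 'larger-is-better' case)."""
    d = np.clip((np.asarray(y, float) - y_min) / (y_max - y_min), 0.0, 1.0)
    return d ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities; zero if any is zero."""
    ds = np.asarray(ds, float)
    return float(np.prod(ds) ** (1.0 / len(ds)))
```

In this application the two desirabilities would be built from the flux and selectivity models, and the optimum point maximizes the overall desirability over the factorial design space.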
Optogenetic approaches to evaluate striatal function in animal models of Parkinson disease
Parker, Krystal L.; Kim, Youngcho; Alberico, Stephanie L.; Emmons, Eric B.; Narayanan, Nandakumar S.
2016-01-01
Optogenetics refers to the ability to control cells that have been genetically modified to express light-sensitive ion channels. The introduction of optogenetic approaches has facilitated the dissection of neural circuits. Optogenetics allows for the precise stimulation and inhibition of specific sets of neurons and their projections with fine temporal specificity. These techniques are ideally suited to investigating neural circuitry underlying motor and cognitive dysfunction in animal models of human disease. Here, we focus on how optogenetics has been used over the last decade to probe striatal circuits that are involved in Parkinson disease, a neurodegenerative condition involving motor and cognitive abnormalities resulting from degeneration of midbrain dopaminergic neurons. The precise mechanisms underlying the striatal contribution to both cognitive and motor dysfunction in Parkinson disease are unknown. Although optogenetic approaches are somewhat removed from clinical use, insight from these studies can help identify novel therapeutic targets and may inspire new treatments for Parkinson disease. Elucidating how neuronal and behavioral functions are influenced and potentially rescued by optogenetic manipulation in animal models could prove to be translatable to humans. These insights can be used to guide future brain-stimulation approaches for motor and cognitive abnormalities in Parkinson disease and other neuropsychiatric diseases. PMID:27069384
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
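Systematically varying the HRF peak delay can be sketched as a scan over candidate delays, refitting a one-regressor GLM each time and keeping the delay with the smallest residual sum of squares. The kernel shape and settings below are illustrative assumptions, not the authors' exact adaptive HRF:

```python
import numpy as np

def best_peak_delay(t, onsets, signal, delays, a=4.0):
    """Scan candidate HRF peak delays; for each, build a one-regressor
    GLM (regressor + constant) and keep the delay whose least-squares
    fit leaves the smallest residual sum of squares.
    The gamma-like kernel below peaks exactly at tau = delay."""
    def hrf(tau, delay):
        h = (tau / delay) ** a * np.exp(a * (1.0 - tau / delay))
        return np.where(tau > 0, h, 0.0)
    best_delay, best_rss = None, np.inf
    for d in delays:
        reg = sum(hrf(t - t0, d) for t0 in onsets)
        X = np.column_stack([reg, np.ones_like(t)])
        beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
        resid = signal - X @ beta
        rss = float(resid @ resid)
        if rss < best_rss:
            best_delay, best_rss = d, rss
    return best_delay
```

In the paper this scan is run separately for oxy-Hb and deoxy-Hb, which is how the signal-specific optimum delays are obtained.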
NASA Astrophysics Data System (ADS)
Stradi, Daniele; Martinez, Umberto; Blom, Anders; Brandbyge, Mads; Stokbro, Kurt
2016-04-01
Metal-semiconductor contacts are a pillar of modern semiconductor technology. Historically, their microscopic understanding has been hampered by the inability of traditional analytical and numerical methods to fully capture the complex physics governing their operating principles. Here we introduce an atomistic approach based on density functional theory and nonequilibrium Green's function, which includes all the relevant ingredients required to model realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via I-V_bias curve simulations. We apply this method to characterize an Ag/Si interface relevant for photovoltaic applications and study the rectifying-to-Ohmic transition as a function of the semiconductor doping. We also demonstrate that the standard "activation energy" method for the analysis of I-V_bias data might be inaccurate for nonideal interfaces as it neglects electron tunneling, and that finite-size atomistic models have problems in describing these interfaces in the presence of doping due to a poor representation of space-charge effects. Conversely, the present method deals effectively with both issues, thus representing a valid alternative to conventional procedures for the accurate characterization of metal-semiconductor interfaces.
NASA Astrophysics Data System (ADS)
Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu
2016-12-01
Ionospheric tomography is based on the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distributions. Due to the incomplete measurements provided by the satellite-receiver geometry, ionospheric tomography is a typical ill-posed problem, and overcoming this ill-posedness remains a central research challenge. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying background model variance and location-dependent spatial correlation, and the correlation model is developed by using sample statistics from an ensemble of the International Reference Ionosphere 2012 (IRI2012) model outputs. The Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used to make independent validations. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
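The core regularized inversion can be sketched as follows. This is a toy illustration, not the paper's algorithm: the discrepancy-principle scan below is a crude stand-in for the model-function parameter update, and all names and values are assumptions:

```python
import numpy as np

def tikhonov_solve(A, b, x_bg, lam):
    """Minimize ||A x - b||^2 + lam * ||x - x_bg||^2 via the normal
    equations, balancing sTEC-like observations against a background field."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * x_bg)

def pick_lambda(A, b, x_bg, noise_norm, candidates):
    """Crude discrepancy-principle scan: return the smallest candidate whose
    residual norm reaches the expected noise level (a simple stand-in for
    the model-function approach to choosing the regularization parameter)."""
    for lam in sorted(candidates):
        r = np.linalg.norm(A @ tikhonov_solve(A, b, x_bg, lam) - b)
        if r >= noise_norm:
            return lam
    return max(candidates)
```

In the tomography setting, rows of `A` would be ray-path integration weights, `b` the sTEC observations, and `x_bg` the IRI-derived background density field.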
Milcu, Alexandru; Eugster, Werner; Bachmann, Dörte; Guderle, Marcus; Roscher, Christiane; Gockele, Annette; Landais, Damien; Ravel, Olivier; Gessler, Arthur; Lange, Markus; Ebeling, Anne; Weisser, Wolfgang W; Roy, Jacques; Hildebrandt, Anke; Buchmann, Nina
2016-08-01
The impact of species richness and functional diversity of plants on ecosystem water vapor fluxes has been little investigated. To address this knowledge gap, we combined a lysimeter setup in a controlled environment facility (Ecotron) with large ecosystem samples/monoliths originating from a long-term biodiversity experiment (The Jena Experiment) and a modeling approach. Our goals were (1) quantifying the impact of plant species richness (four vs. 16 species) on day- and nighttime ecosystem water vapor fluxes; (2) partitioning ecosystem evapotranspiration into evaporation and plant transpiration using the Shuttleworth and Wallace (SW) energy partitioning model; and (3) identifying the most parsimonious predictors of water vapor fluxes using plant functional-trait-based metrics such as functional diversity and community weighted means. Daytime measured and modeled evapotranspiration were significantly higher in the higher plant diversity treatment, suggesting increased water acquisition. The SW model suggests that, at low plant species richness, a higher proportion of the available energy was diverted to evaporation (a non-productive flux), while, at higher species richness, the proportion of ecosystem transpiration (a productivity-related water flux) increased. While it is well established that LAI controls ecosystem transpiration, here we also identified that the diversity of leaf nitrogen concentration among species in a community is a consistent predictor of ecosystem water vapor fluxes during daytime. The results provide evidence that, at the peak of the growing season, higher leaf area index (LAI) and lower percentage of bare ground at high plant diversity diverts more of the available water to transpiration, a flux closely coupled with photosynthesis and productivity. Higher rates of transpiration presumably contribute to the positive effect of diversity on productivity.
An overview of the recent approaches for terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.
2014-11-01
Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of the world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional to site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning at local and regional scales. This review will focus on three main areas of recent terroir research: (1) quantifying the influences of terroir components on plant growth
A conditional Granger causality model approach for group analysis in functional MRI
Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J.; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M.; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun
2011-01-01
Granger causality model (GCM) derived from multivariate vector autoregressive models of data has been employed for identifying effective connectivity in the human brain with functional MR imaging (fMRI) and to reveal complex temporal and spatial dynamics underlying a variety of cognitive processes. In the most recent fMRI effective connectivity measures, pairwise GCM has commonly been applied based on single voxel values or average values from special brain areas at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of fMRI data with GCM. To compare the effectiveness of our approach with traditional pairwise GCM models, we applied a well-established conditional GCM to pre-selected time series of brain regions resulting from general linear model (GLM) and group spatial kernel independent component analysis (ICA) of an fMRI dataset in the temporal domain. Datasets consisting of one task-related and one resting-state fMRI were used to investigate connections among brain areas with the conditional GCM method. With the GLM-detected brain activation regions in the emotion related cortex during the block design paradigm, the conditional GCM method was proposed to study the causality of the habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state dataset, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results have further shown a particular interacting pattern of default mode network (DMN) that can be characterized as both afferent and efferent influences on the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive (MVAR) model can achieve
Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun
2011-04-01
Granger causality model (GCM) derived from multivariate vector autoregressive models of data has been employed to identify effective connectivity in the human brain with functional magnetic resonance imaging (fMRI) and to reveal complex temporal and spatial dynamics underlying a variety of cognitive processes. In the most recent fMRI effective connectivity measures, pair-wise GCM has commonly been applied based on single-voxel values or average values from special brain areas at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of fMRI data with GCM. To compare the effectiveness of our approach with traditional pair-wise GCM models, we applied a well-established conditional GCM to preselected time series of brain regions resulting from general linear model (GLM) and group spatial kernel independent component analysis of an fMRI data set in the temporal domain. Data sets consisting of one task-related and one resting-state fMRI were used to investigate connections among brain areas with the conditional GCM method. With the GLM-detected brain activation regions in the emotion-related cortex during the block design paradigm, the conditional GCM method was proposed to study the causality of the habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state data set, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results have further shown a particular interacting pattern of default mode network that can be characterized as both afferent and efferent influences on the medial prefrontal cortex and posterior cingulate cortex. These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive model can achieve greater accuracy
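The conditional Granger causality statistic can be sketched as a comparison of restricted vs. unrestricted autoregressions. This is a bare-bones illustration of the statistic itself, not the paper's GLM/ICA preselection pipeline, and the function names and lag order are assumptions:

```python
import numpy as np

def var_rss(target, predictors, p):
    """Residual sum of squares of an order-p autoregression of `target` on
    lags 1..p of every series in `predictors`, plus an intercept."""
    n = len(target)
    X = np.array([[1.0] + [s[t - k] for s in predictors for k in range(1, p + 1)]
                  for t in range(p, n)])
    y = target[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def conditional_gc(x, y, z, p=2):
    """Conditional Granger causality x -> y given z: log ratio of the
    restricted (without x's past) to unrestricted residual variances."""
    rss_restricted = var_rss(y, [y, z], p)
    rss_full = var_rss(y, [y, z, x], p)
    return np.log(rss_restricted / rss_full)
```

A large value indicates that the past of `x` improves prediction of `y` beyond what the past of `y` and the conditioning series `z` already explain.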
An overview of the recent approaches to terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Vaudour, E.; Costantini, E.; Jones, G. V.; Mocali, S.
2015-03-01
Notions of terroir and their conceptualization through agro-environmental sciences have become popular in many parts of the world. Originally developed for wine, terroir now encompasses many other crops including fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology and soil are the main environmental factors which make up the terroir effect on different scales. Often considered immutable culturally, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional-to-site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and sensing technologies has made the within-field scale of study more valuable to the individual grower. The result has been greater adoption of these technologies but also issues associated with both the spatial and temporal scales required for practical applications, as well as the relevant approaches for data synthesis. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modelling approaches for terroir functional modelling, footprinting and zoning on local and regional scales. This review will focus on two main areas of recent terroir research: (1) using new tools to unravel the biogeochemical cycles of both
An overview of the recent approaches for terroir functional modelling, footprinting and zoning
NASA Astrophysics Data System (ADS)
Costantini, Edoardo; Emmanuelle, Vaudour; Jones, Gregory; Mocali, Stefano
2014-05-01
Notions of terroir and their conceptualization through agri-environmental sciences have become popular in many parts of the world. Originally developed for wine, terroir is now investigated for fruits, vegetables, cheese, olive oil, coffee, cacao and other crops, linking the uniqueness and quality of both beverages and foods to the environment where they are produced, giving the consumer a sense of place. Climate, geology, geomorphology, and soil are the main environmental factors which compose the terroir effect at different scales. Often considered immutable at the cultural scale, the natural components of terroir are actually a set of processes, which together create a delicate equilibrium and regulation of its effect on products in both space and time. Due to both a greater need to better understand regional to site variations in crop production and the growth in spatial analytic technologies, the study of terroir has shifted from a largely descriptive regional science to a more applied, technical research field. Furthermore, the explosion of spatial data availability and elaboration technologies has made the scale of study more valuable to the individual grower, resulting in greater adoption and application. Moreover, as soil microbial communities are known to be of vital importance for terrestrial processes by driving the major soil geochemical cycles and supporting healthy plant growth, an intensive investigation of the microbial organization and their function is also required. Our objective is to present an overview of existing data and modeling approaches for terroir functional modeling, footprinting and zoning at local and regional scales. This review will focus on four main areas of recent terroir research: 1) quantifying the influences of terroir components on plant growth, fruit composition and quality, mostly examining climate-soil-water relationships; 2) the metagenomic approach as new tool to unravel the biogeochemical cycles of both macro- and
Modeling solvation effects in real-space and real-time within density functional approaches.
Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea
2015-10-14
The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary element method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such a singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
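The regularization idea is that a Gaussian-distributed charge has a finite potential everywhere, while a point charge diverges on the grid. A minimal sketch in atomic units (the width value is an illustrative assumption; the actual code distributes each apparent surface charge this way):

```python
from math import erf, pi, sqrt

def point_charge_potential(q, r):
    """Bare Coulomb potential (atomic units): singular as r -> 0."""
    return q / r

def gaussian_charge_potential(q, r, sigma):
    """Potential of the same charge smeared into a normalized spherical
    Gaussian of width sigma: q * erf(r / (sqrt(2) * sigma)) / r, which is
    finite everywhere and tends to q * sqrt(2 / pi) / sigma at r = 0."""
    if r == 0.0:
        return q * sqrt(2.0 / pi) / sigma
    return q * erf(r / (sqrt(2.0) * sigma)) / r
```

Far from the cavity surface the two potentials coincide, so the smearing changes nothing except near the grid points where the bare `1/r` would blow up.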
Modeling solvation effects in real-space and real-time within density functional approaches
Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea
2015-10-14
The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary element method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such a singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
Modeling solvation effects in real-space and real-time within density functional approaches
NASA Astrophysics Data System (ADS)
Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea
2015-10-01
The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary element method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such a singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
Vitkin, Edward; Shlomi, Tomer
2012-11-29
Genome-scale metabolic network reconstructions are considered a key step in quantifying the genotype-phenotype relationship. We present a novel gap-filling approach, MetabolIc Reconstruction via functionAl GEnomics (MIRAGE), which identifies missing network reactions by integrating metabolic flux analysis and functional genomics data. MIRAGE's performance is demonstrated on the reconstruction of metabolic network models of E. coli and Synechocystis sp. and validated via existing networks for these species. It is then applied to reconstruct genome-scale metabolic network models for 36 sequenced cyanobacteria amenable to constraint-based modeling analysis and specifically to metabolic engineering. The reconstructed network models are supplied as standard SBML files.
ERIC Educational Resources Information Center
Herndon, Mary Anne
1978-01-01
In a model of the functioning of short term memory, the encoding of information for subsequent storage in long term memory is simulated. In the encoding process, semantically equivalent paragraphs are detected for recombination into a macro information unit. (HOD)
Integrative approaches for modeling regulation and function of the respiratory system.
Ben-Tal, Alona; Tawhai, Merryn H
2013-01-01
Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system, which comprises the lungs and the neural circuitry that controls their ventilation, have been derived using simplifying assumptions to compartmentalize each component of the system and to define the interactions between components. These full system models often rely, through necessity, on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially distributed models of ventilation and perfusion, or multicircuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained.
Integrative approaches for modeling regulation and function of the respiratory system
Ben-Tal, Alona
2013-01-01
Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system, which comprises the lungs and the neural circuitry that controls their ventilation, have been derived using simplifying assumptions to compartmentalise each component of the system and to define the interactions between components. These full system models often rely, through necessity, on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially-distributed models of ventilation and perfusion, or multi-circuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. PMID:24591490
A new approach for determining fully empirical altimeter wind speed model functions
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Challenor, Peter G.
1994-01-01
A statistical technique is developed for determining fully empirical model functions relating altimeter backscatter (sigma(sub 0)) measurements to near-surface neutral stability wind speed. By assuming that sigma(sub 0) varies monotonically and uniquely with wind speed, the method requires knowledge only of the separate, rather than joint, distribution functions of sigma(sub 0) and wind speed. Analytic simplifications result from using a Weibull distribution to approximate the global ocean wind speed distribution; several different wind data sets are used to demonstrate the validity of the Weibull approximation. The technique has been applied to 1 year of Geosat data. Validation of the new and historical model functions using an independent buoy data set demonstrates that the present model function not only has small overall bias and root mean square (RMS) errors, but yields smaller systematic error trends with wind speed and pseudowave age than previously published models. The present analysis suggests that generally accurate altimeter model functions can be derived without the use of colocated measurements and without the additional significant wave height information measured by the altimeter.
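The distribution-matching idea can be sketched as follows: under the monotonicity assumption, each sigma0 quantile maps to the corresponding wind-speed quantile of an assumed Weibull distribution. This is an illustrative toy (the Weibull scale/shape values and function names are assumptions, not the paper's fitted parameters):

```python
import numpy as np

def weibull_ppf(p, scale=7.9, shape=2.0):
    """Inverse CDF of a Weibull distribution used here to approximate the
    global ocean wind-speed distribution (parameter values illustrative)."""
    return scale * (-np.log1p(-p)) ** (1.0 / shape)

def distribution_matched_winds(sigma0, scale=7.9, shape=2.0):
    """Assign a wind speed to each sigma0 sample by matching the separate
    marginal distributions, assuming sigma0 decreases monotonically with
    wind speed: the survival fraction of sigma0 maps to the wind CDF."""
    n = len(sigma0)
    ranks = np.argsort(np.argsort(sigma0))    # 0 = smallest sigma0
    p = 1.0 - (ranks + 0.5) / n               # survival fraction of sigma0
    return weibull_ppf(p, scale, shape)
```

No colocated (sigma0, wind) pairs are needed: only the two marginal distributions enter, which is the central point of the technique.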
Optimization of global model composed of radial basis functions using the term-ranking approach
Cai, Peng; Tao, Chao; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize the global model composed of radial basis functions and thereby improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. Application to a real voice signal shows that the optimized global model captures more of the predictable component in chaos-like voice data while reducing the predictable component (periodic pitch) left in the residual signal.
NASA Astrophysics Data System (ADS)
Bodegom, P. V.
2015-12-01
In recent years a number of approaches have been developed to provide alternatives to the use of plant functional types (PFTs) with constant vegetation characteristics for simulating vegetation responses to climate changes. In this presentation, an overview of those approaches and their challenges is given. Some new approaches aim at removing PFTs altogether by determining the combination of vegetation characteristics that would fit local conditions best. Others describe the variation in traits within PFTs as a function of environmental drivers, based on community assembly principles. In the first approach, after an equilibrium has been established, vegetation composition and its functional attributes can change by allowing the emergence of a new type that is more fit. In the latter case, changes in vegetation attributes in space and time are assumed to be the result of intraspecific variation, genetic adaptation and species turnover, without quantifying their respective importance. Hence, it is assumed that, by whatever mechanism, the community as a whole responds without major time lags to changes in environmental drivers. Recently, we showed that intraspecific variation is highly species- and trait-specific and that none of the current hypotheses on drivers of this variation seems to hold. Genetic adaptation also varies considerably among species, and it is uncertain whether it will be fast enough to cope with climate change. Species turnover within a community is especially fast in herbaceous communities, but much slower in forest communities. Hence, it seems that the assumptions made may not hold for forested ecosystems, but solutions to deal with this do not yet exist. Even though the responsiveness of vegetation to environmental change may be overestimated, we showed that, upon implementation of trait-environment relationships, major changes in global vegetation distribution are projected, to similar extents as those without such responsiveness.
Operator function modeling: An approach to cognitive task analysis in supervisory control systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1987-01-01
In a study of models of operators in complex, automated space systems, an operator function model (OFM) methodology was extended to represent cognitive as well as manual operator activities. Development continued on a software tool called OFMdraw, which facilitates construction of an OFM by permitting construction of a heterarchic network of nodes and arcs. Emphasis was placed on development of OFMspert, an expert system designed both to model human operation and to assist real human operators. The system uses a blackboard method of problem solving to make an on-line representation of operator intentions, called ACTIN (actions interpreter).
Characteristic-function approach to the Jaynes-Cummings-model revivals
NASA Astrophysics Data System (ADS)
Pimenta, Hudson; James, Daniel F. V.
2016-11-01
A two-level system interacting with an electromagnetic mode experiences inversion collapses and revivals. They are an indirect signature of the field quantization and also hold information about the mode. Thus, they may be harnessed for quantum-state reconstruction. In this work, we investigate the inversion via the characteristic function of the field mode photon-number distribution. The characteristic function is the spectral representation of the photon-number probability distribution. Exploiting the periodicity of the characteristic function, we find that the inversion can be understood as the result of interference between a set of structures akin to a free quantum-mechanical wave packet, with each structure corresponding to a snapshot of this packet for different degrees of dispersion. The Fourier representation of each packet determines the photon-number distribution. We also derive an integral equation whose solution yields the underlying packets. This approach allows the retrieval of the field photon-number distribution directly from the inversion under fairly general conditions and paves the way for a partial tomography technique.
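The collapse-and-revival signal itself comes from the standard Jaynes-Cummings inversion for a coherent field, W(t) = sum_n p_n cos(2 g sqrt(n+1) t) with Poissonian p_n. A minimal sketch (not the paper's characteristic-function machinery, just the inversion it analyzes; the mean photon number used below is illustrative):

```python
import numpy as np

def jc_inversion(t, nbar, g=1.0, nmax=120):
    """Atomic inversion for an initially excited atom and a coherent field:
    W(t) = sum_n p_n cos(2 g sqrt(n + 1) t) with Poissonian weights p_n,
    computed in log space to avoid factorial overflow."""
    n = np.arange(nmax)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, nmax)))))
    p = np.exp(n * np.log(nbar) - nbar - log_fact)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.array([np.sum(p * np.cos(2.0 * g * np.sqrt(n + 1.0) * tt)) for tt in t])
```

The Rabi frequencies 2g*sqrt(n+1) dephase (collapse) and partially rephase near t = 2*pi*sqrt(nbar)/g (revival), which is the interference-of-packets structure the abstract dissects.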
de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo
2014-01-01
Online consumer behavior in general, and online customer engagement with brands in particular, have become a major focus of research activity, fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven, and much debate about the concept of Customer Engagement and its related constructs persists in the literature. In this paper, we aim to propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behaviors using questionnaire data or studies investigating other types of human behaviors. The method we propose contains five main stages: symbolic regression analysis, graph building, community detection, evaluation of results and finally, investigation of directed cycles and common feedback loops. The ‘communities’ of questionnaire items that emerge from our community detection method form possible ‘functional constructs’ inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such ‘functional constructs’, suggesting the method proposed here could be adopted as a new data-driven way of human behavior modeling. PMID:25036766
de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo
2014-01-01
Online consumer behavior in general, and online customer engagement with brands in particular, have become a major focus of research activity, fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven, and much debate about the concept of Customer Engagement and its related constructs persists in the literature. In this paper, we aim to propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behaviors using questionnaire data or studies investigating other types of human behaviors. The method we propose contains five main stages: symbolic regression analysis, graph building, community detection, evaluation of results and finally, investigation of directed cycles and common feedback loops. The 'communities' of questionnaire items that emerge from our community detection method form possible 'functional constructs' inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such 'functional constructs', suggesting the method proposed here could be adopted as a new data-driven way of human behavior modeling.
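The graph-building and community-detection stages can be illustrated on questionnaire-style data. This toy sketch thresholds item correlations and extracts connected components as a crude stand-in for the paper's actual community-detection algorithm; all names and the threshold are assumptions:

```python
import numpy as np

def correlation_graph(X, threshold=0.5):
    """Adjacency over questionnaire items (columns of X): connect two items
    when their absolute Pearson correlation across respondents exceeds
    `threshold`."""
    C = np.corrcoef(X, rowvar=False)
    A = np.abs(C) > threshold
    np.fill_diagonal(A, False)
    return A

def communities(A):
    """Connected components of the item graph via depth-first search: a
    simple stand-in for the community-detection stage described above."""
    n = A.shape[0]
    seen, result = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            j = stack.pop()
            if j in comp:
                continue
            comp.add(j)
            stack.extend(k for k in range(n) if A[j, k] and k not in comp)
        seen |= comp
        result.append(sorted(comp))
    return result
```

Groups of items that cluster together this way are the data-driven analogue of the 'functional constructs' the abstract describes.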
Gilles, Luc; Correia, Carlos; Véran, Jean-Pierre; Wang, Lianqi; Ellerbroek, Brent
2012-11-01
This paper discusses an innovative simulation-model-based approach for long-exposure atmospheric point spread function (PSF) reconstruction in the context of laser guide star (LGS) multiconjugate adaptive optics (MCAO). The approach is inspired by the classical scheme developed by Véran et al. [J. Opt. Soc. Am. A 14, 3057 (1997)] and Flicker et al. [Astron. Astrophys. 400, 1199 (2003)] and reconstructs the long-exposure optical transfer function (OTF), i.e., the Fourier-transformed PSF, as a product of separate long-exposure tip/tilt-removed and tip/tilt OTFs, each estimated by postprocessing system and simulation telemetry data. Sample enclosed-energy results assessing reconstruction accuracy are presented for the Thirty Meter Telescope LGS MCAO system currently under design and show that percent-level absolute and differential photometry over a 30 arcsec diameter field of view is achievable provided the simulation model faithfully represents the real system.
NASA Astrophysics Data System (ADS)
Freire, Hermann; Corrêa, Eberth
2012-02-01
We apply a functional implementation of the field-theoretical renormalization group (RG) method up to two loops to the single-impurity Anderson model. To achieve this, we follow an RG strategy similar to that proposed by Vojta et al. (in Phys. Rev. Lett. 85:4940, 2000), which consists of defining a soft ultraviolet regulator in the space of Matsubara frequencies for the renormalized Green's function. Then we proceed to derive analytically and solve numerically integro-differential flow equations for the effective couplings and the quasiparticle weight of the present model, which fully treat the interplay of particle-particle and particle-hole parquet diagrams and the effect of the two-loop self-energy feedback into them. We show that our results correctly reproduce accurate numerical renormalization group data for weak to slightly moderate interactions. These results are in excellent agreement with other functional Wilsonian RG works available in the literature. Since the field-theoretical RG method turns out to be easier to implement at higher loops than the Wilsonian approach, higher-order calculations within the present approach could further improve the results for this model at stronger couplings. We argue that the present RG scheme could thus offer a possible alternative to other functional RG methods to describe electronic correlations within this model.
NASA Astrophysics Data System (ADS)
Tomellini, Massimo; Fanfoni, Massimo
2014-11-01
The statistical methods exploiting the "Correlation-Functions" or the "Differential-Critical-Region" are both suitable for describing phase transformation kinetics ruled by nucleation and growth. We present a critical analysis of these two approaches, with particular emphasis on transformations ruled by diffusional growth, which cannot be described by the Kolmogorov-Johnson-Mehl-Avrami (KJMA) theory. In order to bridge the gap between these two methods, the conditional probability functions entering the "Differential-Critical-Region" approach are determined in terms of correlation functions. The formulation of these probabilities by means of cluster expansion is also derived, which improves the accuracy of the computation. The model is applied to 2D and 3D parabolic growths occurring at a constant value of either the actual or the phantom-included nucleation rate. Computer simulations have been employed to corroborate the theoretical modeling. The contribution of phantom overgrowth to the kinetics is estimated and found to be of a few percent in the case of a constant actual nucleation rate. It is shown that for a parabolic growth law neither approach provides a closed-form solution of the kinetics. In this respect, the two methods are equivalent, and the longstanding overgrowth phenomenon, which limits the KJMA theory, does not admit an exact analytical solution.
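For reference, the baseline KJMA kinetics that both approaches are compared against can be sketched in a few lines; the rate constant is illustrative, and n = 5/2 is the textbook Avrami exponent for 3D diffusional (parabolic) growth with a constant nucleation rate, not a value from the paper:

```python
import numpy as np

# Baseline KJMA kinetics: transformed fraction X(t) = 1 - exp(-(k*t)**n).
# Parameter values are illustrative; n = 5/2 corresponds to 3D diffusional
# (parabolic) growth with a constant nucleation rate.
def kjma_fraction(t, k, n):
    return 1.0 - np.exp(-(k * t) ** n)

t = np.linspace(0.0, 10.0, 200)
X = kjma_fraction(t, k=0.5, n=2.5)
```

The abstract's point is precisely that, for diffusional growth, phantom overgrowth makes this closed form inexact, so the sketch is only the departure point.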
Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin E-mail: hilatikh@purdue.edu; Rahman, Rajib; Klimeck, Gerhard
2015-10-28
A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full-band atomistic approach based on non-equilibrium Green's functions (NEGF). It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, providing deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.
Validity of a power law approach to model tablet strength as a function of compaction pressure.
Kloefer, Bastian; Henschel, Pascal; Kuentz, Martin
2010-03-01
Designing quality into dosage forms should not be based only on qualitative or purely heuristic relations. A knowledge space must be generated in which at least some mechanistic understanding is included. This is of particular interest for critical dosage-form parameters such as the strength of tablets. In line with this consideration, the scope of this work is to explore the validity range of a theoretically derived power law for the tensile strength of tablets. Different grades of microcrystalline cellulose and lactose, as well as mixtures thereof, were used to compress model tablets. The power law was found to hold true in a low pressure range, in agreement with theoretical expectation. This low pressure range depended on the individual material characteristics, but as a rule of thumb, tablets having a porosity of more than about 30% or compressed below 100 MPa were generally well explained by the tensile strength relationship. Tablets at higher densities were less adequately described by the theory, which is based on large-scale heterogeneity of the relevant contact points in the compact. Tablets with densities close to unity therefore require other theoretical approaches. More research is needed to understand tablet strength over a wider range of compaction pressures.
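A power law of this kind is conventionally fitted by linear regression in log-log space; a minimal sketch with synthetic, noise-free data (the pressures, strengths, prefactor, and exponent are hypothetical, not measurements from the paper):

```python
import numpy as np

# Power-law fit sigma = a * P**b in log-log space. Pressures and strengths
# are synthetic stand-ins in the claimed low-pressure regime (< 100 MPa).
P = np.array([25.0, 50.0, 75.0, 100.0])   # compaction pressure, MPa (hypothetical)
sigma = 0.02 * P ** 1.5                   # tensile strength, MPa (synthetic, noise-free)

# ln(sigma) = b * ln(P) + ln(a): slope gives the exponent, intercept the prefactor
b, log_a = np.polyfit(np.log(P), np.log(sigma), 1)
a = np.exp(log_a)
```

On real data the fit would be judged by its residuals, which is how the validity range discussed above would show up as systematic deviation at high pressure.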
2-D Modeling of Nanoscale MOSFETs: Non-Equilibrium Green's Function Approach
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan
2001-01-01
We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Electron-electron interaction is treated within the Hartree approximation by solving the NEGF and Poisson equations self-consistently. For the calculations presented here, parallelization is performed by distributing the solution of the NEGF equations across processors, energy-wise. We present simulations of the "benchmark" MIT 25 nm and 90 nm MOSFETs and compare our results to those from the drift-diffusion simulator and the available quantum-corrected results. In the 25 nm MOSFET, the channel length is less than ten times the electron wavelength, and the electron scattering time is comparable to its transit time. Our main results are: (1) Simulated drain subthreshold current characteristics are shown, where the potential profiles are calculated self-consistently by the corresponding simulation methods. The current predicted by our quantum simulation has a smaller subthreshold slope of the Vg dependence, which results in a higher threshold voltage. (2) When the gate oxide thickness is less than 2 nm, gate oxide leakage is a primary factor determining the off-current of a MOSFET. (3) Using our 2-D NEGF simulator, we found several ways to drastically decrease oxide leakage current without compromising drive current. (4) The quantum mechanically calculated electron density is much smaller than the background doping density in the polysilicon gate region near the oxide interface. This creates an additional effective gate voltage. Different ways to include this effect approximately will be discussed.
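The self-consistent charge-potential coupling mentioned above can be illustrated schematically. The sketch below is a toy fixed-point loop with linear mixing, in which a Boltzmann-like charge expression stands in for an actual NEGF density calculation; the grid, doping profile, coupling constant, and units are all illustrative, not a device simulation:

```python
import numpy as np

# Toy self-consistent loop in the spirit of NEGF-Poisson coupling: a stand-in
# charge model feeds a 1D Poisson solve, with linear mixing for stability.
n_grid = 50
doping = np.linspace(0.5, 1.5, n_grid)   # normalized background doping (illustrative)
coupling = 1e-3                          # weak coupling keeps this toy loop contractive
A = (np.diag(-2.0 * np.ones(n_grid)) + np.diag(np.ones(n_grid - 1), 1)
     + np.diag(np.ones(n_grid - 1), -1))  # 1D Laplacian, Dirichlet boundaries

def charge(phi):
    return np.exp(-phi)                  # Boltzmann-like stand-in for the NEGF density

def solve_poisson(rho):
    return coupling * np.linalg.solve(A, rho - doping)

phi, mix, converged = np.zeros(n_grid), 0.5, False
for _ in range(500):
    phi_new = solve_poisson(charge(phi))
    if np.max(np.abs(phi_new - phi)) < 1e-12:
        converged = True
        break
    phi += mix * (phi_new - phi)         # damped (mixed) update
```

A real NEGF solver replaces `charge()` with a Green's-function density integrated over energy, but the outer iteration has this same fixed-point structure.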
Tabacchi, G; Hutter, J; Mundy, C
2005-04-07
A combined linear response--frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair potentials, and the parameterization protocol involves only calculations on properly chosen subsystems. We apply this method to a series of alkali halides in different physical phases and are able to reproduce experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.
Yetilmezsoy, Kaan
2012-08-01
An integrated multi-objective optimization approach within the framework of nonlinear regression-based kinetic modeling and desirability function was proposed to optimize an up-flow anaerobic sludge blanket (UASB) reactor treating poultry manure wastewater (PMW). The Chen-Hashimoto and modified Stover-Kincannon models were applied to the UASB reactor for determination of bio-kinetic coefficients. A new empirical formulation of the volumetric organic loading rate was derived for the first time for PMW to estimate the dimensionless kinetic parameter (K) in the Chen-Hashimoto model. The maximum substrate utilization rate constant and the saturation constant were predicted as 11.83 g COD/L/day and 13.02 g COD/L/day, respectively, for the modified Stover-Kincannon model. Based on four process-related variables, three objective functions including a detailed bio-economic model were derived and optimized by using a LOQO/AMPL algorithm, with a maximum overall desirability of 0.896. The proposed optimization scheme proved a useful tool for simultaneously optimizing several responses of the UASB reactor.
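Coefficients of the modified Stover-Kincannon model are conventionally obtained from its linearized form, V/(Q(S0 - Se)) = (KB/Umax) * V/(Q*S0) + 1/Umax. A sketch with synthetic, noise-free operating data constructed so that the regression recovers the coefficients quoted above (the volume, flows, and influent CODs are hypothetical):

```python
import numpy as np

# Linearized modified Stover-Kincannon fit; operating data are hypothetical,
# built noise-free so the fit returns the abstract's reported coefficients.
Umax_true, KB_true = 11.83, 13.02              # g COD/L/day (from the abstract)
V = 90.0                                       # reactor volume, L (hypothetical)
Q = np.array([5.0, 10.0, 15.0, 20.0, 30.0])    # flow rates, L/day (hypothetical)
S0 = np.array([4.0, 4.5, 5.0, 5.5, 6.0])       # influent COD, g/L (hypothetical)

x = V / (Q * S0)                               # inverse total organic loading rate
y = (KB_true / Umax_true) * x + 1.0 / Umax_true  # V/(Q*(S0 - Se)), synthetic

slope, intercept = np.polyfit(x, y, 1)         # slope = KB/Umax, intercept = 1/Umax
Umax_est = 1.0 / intercept
KB_est = slope * Umax_est
```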
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
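Once F has been estimated, the FGAM linear predictor eta = integral of F{X(t), t} dt is evaluated by quadrature over the grid on which the curve is observed. In the sketch below a known toy surface stands in for the fitted tensor-product B-spline, and the covariate X(t) is made up:

```python
import numpy as np

# FGAM predictor eta = integral of F(X(t), t) dt via the trapezoid rule.
# F is a known toy surface standing in for the tensor-product B-spline
# estimate; X(t) is a made-up functional covariate on [0, 1].
def F(x, t):
    return np.sin(x) * np.cos(np.pi * t)

t = np.linspace(0.0, 1.0, 101)
X_t = 1.0 + 0.5 * t                     # one observed curve on the grid
vals = F(X_t, t)
eta = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))   # trapezoid quadrature
```

The link function is then applied to eta exactly as in an ordinary generalized additive model.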
Turan, Başak; Selçuki, Cenk
2014-09-01
Amino acids are constituents of proteins and enzymes, which take part in almost all metabolic reactions. Glutamic acid, with an ability to form a negatively charged side chain, plays a major role in intra- and intermolecular interactions of proteins, peptides, and enzymes. An exhaustive conformational analysis has been performed for all eight possible forms at the B3LYP/cc-pVTZ level. All possible neutral, zwitterionic, protonated, and deprotonated forms of glutamic acid structures have been investigated in solution by using a polarizable continuum model mimicking water as the solvent. Nine families based on the dihedral angles have been classified for the eight glutamic acid forms. The electrostatic effects included in the solvent model usually stabilize the charged forms more. However, the stability of the zwitterionic form has been underestimated due to the lack of hydrogen bonding between the solute and solvent; therefore, it is observed that compact neutral glutamic acid structures are more stable in solution than they are in vacuum. Our calculations have shown that among all eight possible forms, some are not stable in solution and are immediately converted to other, more stable forms. Comparison of isoelectronic glutamic acid forms indicated that one of the structures among the possible zwitterionic and anionic forms may dominate over the other possible forms. Additional investigations using explicit solvent models are necessary to determine the stability of charged forms of glutamic acid in solution, as our results clearly indicate that hydrogen bonding and its type have a major role in the structure and energy of conformers.
Modolo, Luzia V; Blount, Jack W; Achnine, Lahoucine; Naoumkina, Marina A; Wang, Xiaoqiang; Dixon, Richard A
2007-07-01
Analysis of over 200,000 expressed sequence tags from a range of Medicago truncatula cDNA libraries resulted in the identification of over 150 different family 1 glycosyltransferase (UGT) genes. Of these, 63 were represented by full length clones in an EST library collection. Among these, 19 gave soluble proteins when expressed in E. coli, and these were screened for catalytic activity against a range of flavonoid and isoflavonoid substrates using a high-throughput HPLC assay method. Eight UGTs were identified with activity against isoflavones, flavones, flavonols or anthocyanidins, and several showed high catalytic specificity for more than one class of (iso)flavonoid substrate. All tested UGTs preferred UDP-glucose as sugar donor. Phylogenetic analysis indicated that the Medicago (iso)flavonoid glycosyltransferase gene sequences fell into a number of different clades, and several clustered with UGTs annotated as glycosylating non-flavonoid substrates. Quantitative RT-PCR and DNA microarray analysis revealed unique transcript expression patterns for each of the eight UGTs in Medicago organs and cell suspension cultures, and comparison of these patterns with known phytochemical profiles suggested in vivo functions for several of the enzymes.
Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru
2011-05-01
Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach), with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by the analysis of the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our efficient online model checker, MIRACH 1.0, together with parameter estimation, we ran 20 million simulation runs and were able to locate 57 parameter sets for 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles, without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. Our simulations indicated that, among these models, one is the most reasonable and robust owing to its high stability against these stochastic noises. Our simulation results provide interesting biological findings which could be used for future wet-lab experiments.
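The search loop described above (draw a candidate parameter set, simulate, keep it only if every logical rule holds) can be sketched in miniature; both the `simulate()` stand-in and the rules below are toys, not the HFPNe ASE fate model or the MIRACH checker:

```python
import random

# Miniature parameter search: sample candidates, simulate, keep the sets
# consistent with all rules. Model and rules are toy stand-ins.
random.seed(0)

def simulate(k1, k2):
    return k1 / (k1 + k2)          # stand-in for one simulation run

rules = [lambda out: out > 0.3,    # toy "biological rules" on the output
         lambda out: out < 0.7]

accepted = []
for _ in range(2000):
    k1, k2 = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    out = simulate(k1, k2)
    if all(rule(out) for rule in rules):
        accepted.append((k1, k2))
```

The paper's contribution is checking temporal-logic rules online, during each run, so inconsistent candidates can be abandoned early rather than simulated to completion.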
[Partial least squares approach to functional analysis].
Preda, C
2006-01-01
We extend the partial least squares (PLS) approach to functional data represented in our models by sample paths of stochastic process with continuous time. Due to the infinite dimension, when functional data are used as a predictor for linear regression and classification models, the estimation problem is an ill-posed one. In this context, PLS offers a simple and efficient alternative to the methods based on the principal components of the stochastic process. We compare the results given by the PLS approach and other linear models using several datasets from economy, industry and medical fields.
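With functional data observed on a common grid, the first PLS component can be computed directly in a few lines; the sketch below uses synthetic random-walk curves rather than the economic, industrial, or medical datasets of the paper:

```python
import numpy as np

# One-component PLS regression on discretized sample paths (curves on a
# common grid). Data are synthetic; the point is the first NIPALS component.
rng = np.random.default_rng(0)
n, p = 60, 100
X = rng.standard_normal((n, p)).cumsum(axis=1)   # rough stand-in for sample paths
beta_true = np.linspace(0.0, 1.0, p) / p
y = X @ beta_true + 0.01 * rng.standard_normal(n)

Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)                           # first PLS weight vector
t_scores = Xc @ w                                # first latent component
q = (t_scores @ yc) / (t_scores @ t_scores)      # regress y on the component
y_hat = y.mean() + t_scores * q
```

Unlike principal components, the weight vector w is driven by the covariance with the response, which is the advantage the abstract claims over PCA-based regression for ill-posed functional predictors.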
NASA Astrophysics Data System (ADS)
Maitra, Subrata; Banerjee, Debamalya
2010-10-01
The present article addresses product quality and design improvement related to the nature of machinery failures and plant operational problems of an industrial blower fan company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is also paid to the blower fans which have been ordered directly by the customer on the basis of the installed capacity of air to be provided by the fan. Application of quality function deployment (QFD) is primarily a customer-oriented approach. The proposed model integrates QFD with the analytic hierarchy process (AHP) to select and rank the decision criteria on commercial and technical factors, and to measure the decision parameters for selecting the best product in a competitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of a group of experts' opinions through pairwise comparison of the customer's and ergonomics-based technical design requirements. The steps involved in implementation of the QFD-AHP approach and selection of weighted criteria may be helpful for all similar-purpose industries balancing cost and utility for a competitive product.
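The AHP step of such a model (deriving criterion weights from a reciprocal pairwise comparison matrix via its principal eigenvector, then checking consistency) can be sketched as follows; the comparison values are illustrative, not elicited from the paper's experts:

```python
import numpy as np

# AHP priority weights from a reciprocal pairwise comparison matrix (n = 3),
# with the standard consistency-ratio check. Matrix entries are illustrative.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                 # normalized priority weights

CI = (eigvals.real[k] - 3.0) / (3.0 - 1.0)   # consistency index
CR = CI / 0.58                               # random index RI = 0.58 for n = 3
```

A CR below 0.1 is the usual threshold for accepting the expert judgments as consistent.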
Ian Robertson
2007-04-28
Development and validation of constitutive models for polycrystalline materials subjected to high strain-rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service-type conditions. Accurately accounting for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be fully integrated with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experiment is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain-rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically based constitutive models. One aspect of the program involves the direct observation of specific mechanisms of micro-plasticity, as these indicate the boundary value problem that should be addressed. This focus on the pre-yield region in the quasi-static effort (the elasto-plastic transition) is also a tractable one from an
NASA Astrophysics Data System (ADS)
Yang, Min-Fong; Sun, Shih-Jye; Hong, Tzay-Ming
1993-12-01
We show that a special kind of slave-boson mean-field approximation, which allows for the symmetry-broken states appropriate for a bipartite lattice, can give essentially the same results as those by the variational-wave-function approach proposed by Gulácsi, Strack, and Vollhardt [Phys. Rev. B 47, 8594 (1993)]. The advantages of our approach are briefly discussed.
Various modeling approaches have been developed for metal binding on humic substances. However, most of these models are still curve-fitting exercises: the resulting set of parameters, such as affinity constants (or the distribution of them), is found to depend on pH, ionic stren...
Choi, Eunhee; Tang, Fengyan; Kim, Sung-Geun; Turk, Phillip
2016-10-01
This study examined the longitudinal relationships between functional health in later years and three types of productive activities: volunteering, full-time work, and part-time work. Using data from five waves (2000-2008) of the Health and Retirement Study, we applied multivariate latent growth curve modeling to examine the longitudinal relationships among individuals aged 50 or over. Functional health was measured by limitations in activities of daily living. Individuals who volunteered or worked either full time or part time exhibited a slower decline in functional health than nonparticipants. Significant associations were also found between initial functional health and longitudinal changes in productive activity participation. This study provides additional support for the benefits of productive activities later in life; engagement in volunteering and employment is indeed associated with better functional health in middle and old age.
Hadjipantelis, P Z; Aston, J A D; Müller, H G; Evans, J P
2015-04-03
Mandarin Chinese is characterized by being a tonal language; the pitch (or F0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online.
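The functional principal component decomposition underpinning this model can be sketched via an SVD of the centered curve matrix; the curves below are synthetic two-mode data on a common grid, not the COSPRO-1 F0 contours:

```python
import numpy as np

# Functional PCA sketch: curves sampled on a common grid, decomposed by SVD
# of the centered data matrix. Curves are synthetic (two known modes of
# variation plus noise), standing in for registered F0 contours.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
n = 80
scores = rng.standard_normal((n, 2)) * np.array([3.0, 1.0])   # two modes
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
curves = 5.0 + scores @ basis + 0.05 * rng.standard_normal((n, t.size))

mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
explained = s**2 / np.sum(s**2)        # variance fraction per component
pc_scores = U[:, :2] * s[:2]           # first two functional PC scores
```

In the paper's joint model, per-curve scores of this kind (for both amplitude and warping functions) become the responses of the linear mixed effects model.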
ERIC Educational Resources Information Center
Lloyd, Rebecca
2015-01-01
Background: Physical Education (PE) programmes are expanding to include alternative activities yet what is missing is a conceptual model that facilitates how the learning process may be understood and assessed beyond the dominant sport-technique paradigm. Purpose: The purpose of this article was to feature the emergence of a Function-to-Flow (F2F)…
Shakouri, Payman; Ordys, Andrzej; Askari, Mohamad R
2012-09-01
In the design of adaptive cruise control (ACC) systems, two separate control loops - an outer loop to maintain a safe distance from the vehicle traveling in front, and an inner loop to control the brake pedal and throttle opening position - are commonly used. In this paper a different approach is proposed in which a single control loop is utilized. The objective of distance tracking is incorporated into a single nonlinear model predictive control (NMPC) scheme by extending the original linear time-invariant (LTI) models obtained by linearizing the nonlinear dynamic model of the vehicle. This is achieved by introducing additional states corresponding to the relative distance between the leading and following vehicles, and also the velocity of the leading vehicle. Control of the brake and throttle position is implemented by taking a state-dependent approach. The model proves more effective in tracking speed and distance by eliminating the necessity of switching between two controllers. It also offers smooth variation of the brake and throttle control signals, which subsequently results in more uniform acceleration of the vehicle. The results of the proposed method are compared with other ACC systems using two separate control loops. Furthermore, ACC simulation results for a stop-and-go scenario are shown, demonstrating better fulfillment of the design requirements.
Zhang, Jeff L.; Rusinek, Henry; Bokacheva, Louisa; Lerman, Lilach O.; Chen, Qun; Prince, Chekema; Oesingmann, Niels; Song, Ting; Lee, Vivian S.
2009-01-01
A three-compartment model is proposed for analyzing magnetic resonance renography (MRR) and computed tomography renography (CTR) data to derive clinically useful parameters such as glomerular filtration rate (GFR) and renal plasma flow (RPF). The model fits the convolution of the measured input and the predefined impulse retention functions to the measured tissue curves. A MRR study of 10 patients showed that relative root mean square errors by the model were significantly lower than errors for a previously reported three-compartmental model (11.6% ± 4.9 vs 15.5% ± 4.1; P < 0.001). GFR estimates correlated well with reference values by 99mTc-DTPA scintigraphy (correlation coefficient r = 0.82), and for RPF, r = 0.80. Parameter-sensitivity analysis and Monte Carlo simulation indicated that model parameters could be reliably identified. When the model was applied to CTR in five pigs, expected increases in RPF and GFR due to acetylcholine were detected with greater consistency than with the previous model. These results support the reliability and validity of the new model in computing GFR, RPF, and renal mean transit times from MR and CT data. PMID:18228576
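The core fitting idea (a tissue curve modeled as the convolution of a measured vascular input with a parametric impulse retention function) can be sketched in miniature; the input, the retention shape, and the crude grid-search fit below are toy stand-ins, not the paper's three-compartment model:

```python
import numpy as np

# Tissue curve = convolution of a (here synthetic) arterial input with a
# parametric impulse retention function; the fitted parameters are what
# estimates such as GFR and RPF are derived from. All values are toys.
dt = 1.0                                      # sampling interval, s (illustrative)
time = np.arange(0.0, 300.0, dt)
aif = np.exp(-time / 60.0)                    # synthetic arterial input function

def retention(f, T):
    # Toy impulse retention: tracer retained at level f for transit time T
    return f * (time < T)

def tissue_model(f, T):
    return np.convolve(aif, retention(f, T))[:len(time)] * dt

measured = tissue_model(0.02, 90.0)           # pretend this came from imaging

# crude grid search over the transit time T (f held fixed for simplicity)
candidates = [60.0, 90.0, 120.0]
errors = [np.sum((tissue_model(0.02, T) - measured) ** 2) for T in candidates]
best_T = candidates[int(np.argmin(errors))]
```

A real implementation would fit all compartmental parameters jointly by nonlinear least squares, as the abstract describes.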
NASA Astrophysics Data System (ADS)
Fakhri, H.; Dehghani, A.; Mojaveri, B.
Using second-order differential operators as a realization of the su(1,1) Lie algebra by the associated Laguerre functions, it is shown that the quantum states of the Calogero-Sutherland, half-oscillator and radial part of a 3D harmonic oscillator constitute the unitary representations for the same algebra. This su(1,1) Lie algebra symmetry leads to derivation of the Barut-Girardello and Klauder-Perelomov coherent states for those models. The explicit compact forms of these coherent states are calculated. Also, to realize the resolution of the identity, their corresponding positive definite measures on the complex plane are obtained in terms of the known functions.
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
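The least-squares line at the heart of this alternative approach is exactly what a graphics or CAS calculator computes; a minimal sketch with hypothetical class data (heights and arm spans):

```python
import numpy as np

# Least-squares line y = m*x + c fitted to hypothetical class data, the kind
# of fit a graphics/CAS calculator produces in the activity described above.
height = np.array([150.0, 160.0, 170.0, 180.0])    # cm, hypothetical
arm_span = np.array([148.0, 161.0, 172.0, 179.0])  # cm, hypothetical

m, c = np.polyfit(height, arm_span, 1)             # slope and intercept
pred = m * 175.0 + c                               # predicted arm span at 175 cm
```

The slope and intercept are then the "parameters" whose roles students interrogate, which is the threshold concept the study targets.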
NASA Astrophysics Data System (ADS)
Legendre, Louis; Rivkin, Richard B.
2005-09-01
Marine food webs influence climate by channeling carbon below the permanent pycnocline, where it can be sequestered. Because most of the organic matter exported from the euphotic zone is remineralized within the "upper ocean" (i.e., the water column above the depth of sequestration), the resulting CO2 would potentially return to the atmosphere on decadal timescales. Thus ocean-climate models must consider the cycling of carbon within and from the upper ocean down to the depth of sequestration, instead of only to the base of the euphotic zone. Climate-related changes in the upper ocean will influence the diversity and functioning of plankton functional types. In order to predict the interactions between the changing climate and the ocean's biology, relevant models must take into account the roles of functional biodiversity and pelagic ecosystem functioning in determining the biogeochemical fluxes of carbon. We propose the development of a class of models that consider the interactions, in the upper ocean, of functional types of plankton organisms (e.g., phytoplankton, heterotrophic bacteria, microzooplankton, large zooplankton, and microphagous macrozooplankton), food web processes that affect organic matter (e.g., synthesis, transformation, and remineralization), and biogeochemical carbon fluxes (e.g., photosynthesis, calcification, respiration, and deep transfer). Herein we develop a framework for this class of models, and we use it to make preliminary predictions for the upper ocean in a high-CO2 world, without and with iron fertilization. Finally, we suggest a general approach for implementing our proposed class of models.
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting
Error latency estimation using functional fault modeling
NASA Technical Reports Server (NTRS)
Manthani, S. R.; Saxena, N. R.; Robinson, J. P.
1983-01-01
A complete modeling of faults at gate level for a fault-tolerant computer is both infeasible and uneconomical. Functional fault modeling is an approach where units are characterized at an intermediate level and then combined to determine fault behavior. The applicability of functional fault modeling to the FTMP is studied. Using this model, a forecast of error latency is made for some functional blocks. This approach is useful in representing larger sections of the hardware and aids in uncovering system level deficiencies.
Menouar, Salah; Maamache, Mustapha; Choi, Jeong Ryeol
2010-08-15
The quantum states of a time-dependent coupled oscillator model for charged particles subjected to a variable magnetic field are investigated using invariant operator methods. To do this, we have taken advantage of an alternative method, the so-called unitary transformation approach, available in the framework of quantum mechanics, as well as a generalized canonical transformation method in the classical regime. The transformed quantum Hamiltonian is obtained using suitable unitary operators and is represented in terms of two independent harmonic oscillators which have the same frequencies as those of the classically transformed ones. Starting from the wave functions in the transformed system, we have derived the full wave functions in the original system with the help of the unitary operators. These analytical wave functions provide a complete description of how the charged particle behaves under the given Hamiltonian.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
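The core numerical ingredient above is solving Volterra integral equations of the first kind. A minimal sketch of the idea (not the authors' regularized procedure for systems of such equations): the midpoint rule turns a single first-kind equation into a lower-triangular linear system. Function names and the test kernel are illustrative.

```python
import numpy as np

def solve_volterra_first_kind(K, g, T, n):
    """Solve g(t) = int_0^t K(t, s) f(s) ds for f on (0, T] by the
    midpoint rule; the discretized system is lower triangular.
    Note: first-kind equations are ill-posed in general, which is why
    the paper develops an ad hoc regularization; none is applied here."""
    h = T / n
    t = h * np.arange(1, n + 1)          # collocation points t_i
    s = h * (np.arange(1, n + 1) - 0.5)  # quadrature midpoints s_j
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = h * K(t[i], s[: i + 1])
    return s, np.linalg.solve(A, g(t))

# Check on a case with a known answer: K = 1, g(t) = t^2  =>  f(s) = 2s.
s, f = solve_volterra_first_kind(lambda t, s: np.ones_like(s),
                                 lambda t: t**2, T=1.0, n=50)
print(np.max(np.abs(f - 2 * s)))
```

For this smooth test case the midpoint rule happens to be exact up to rounding; noisy data would require the regularization the paper describes.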
Approaches for modeling magnetic nanoparticle dynamics
Reeves, Daniel B; Weaver, John B
2014-01-01
Magnetic nanoparticles are useful biological probes as well as therapeutic agents. Several approaches have been used to model nanoparticle magnetization dynamics for both Brownian and Néel rotation. The magnetizations are often of interest and can be compared with experimental results. Here we summarize these approaches, including the deterministic Stoner-Wohlfarth approach and stochastic approaches that include thermal fluctuations. Non-equilibrium temperature effects can be described by a distribution function approach (Fokker-Planck equation) or a stochastic differential equation (Langevin equation). Approximate models in several regimes can be derived from these general approaches to simplify implementation. PMID:25271360
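The Langevin-equation route mentioned above can be sketched with a toy one-dimensional model: an overdamped moment in a double-well energy landscape standing in for an easy-axis anisotropy barrier, integrated by the Euler-Maruyama method. All parameters are dimensionless and invented for illustration; this is not the review's full rotational dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy double-well energy U(m) = -m^2/2 + m^4/4 with minima at m = +/-1,
# mimicking the two easy-axis orientations of a Neel-rotating moment.
def dU(m):
    return -m + m**3

def simulate(m0=1.0, dt=1e-3, steps=20000, temp=0.1):
    """Euler-Maruyama integration of dm = -U'(m) dt + sqrt(2*temp) dW."""
    m = m0
    traj = np.empty(steps)
    for k in range(steps):
        m += -dU(m) * dt + np.sqrt(2.0 * temp * dt) * rng.standard_normal()
        traj[k] = m
    return traj

traj = simulate()
print(traj.mean(), traj.std())
```

At low `temp` the moment stays near one well; raising `temp` produces thermally activated switching, the qualitative behavior that Fokker-Planck and Langevin treatments both capture.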
NASA Astrophysics Data System (ADS)
Hibbard, Bill
2012-05-01
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
NASA Astrophysics Data System (ADS)
Suhendi, Endi; Syariati, Rifki; Noor, Fatimah A.; Kurniasih, Neny; Khairurrijal
2014-03-01
We modeled the tunneling current in a p-n junction based on armchair graphene nanoribbons (AGNRs) by using an Airy function approach (AFA) and a transfer matrix method (TMM). We used β-type AGNRs, whose band gap energy and electron effective mass depend on the ribbon width as given by the extended Huckel theory. It was shown that the tunneling currents evaluated by employing the AFA are the same as those obtained under the TMM. Moreover, the calculated tunneling current was proportional to the bias voltage and inversely proportional to the temperature.
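A transfer matrix calculation of the kind named above can be sketched for a generic 1D piecewise-constant potential (hbar = m = 1 units). This illustrates the TMM machinery only; the AGNR band gap and width-dependent effective mass from the paper are not modeled here.

```python
import numpy as np

def transmission(E, segments):
    """Transmission probability through piecewise-constant potential
    segments [(V, width), ...] by the transfer matrix method.
    Leads on both sides are at V = 0, so T = |t|^2."""
    def wavevector(V):
        return np.sqrt(2.0 * (E - V) + 0j)  # complex sqrt handles E < V

    def interface(k, x):
        # Matches psi and psi' for psi = A e^{ikx} + B e^{-ikx} at x.
        return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                         [1j * k * np.exp(1j * k * x),
                          -1j * k * np.exp(-1j * k * x)]])

    ks = [wavevector(0.0)] + [wavevector(V) for V, _ in segments] + [wavevector(0.0)]
    xs = np.concatenate([[0.0], np.cumsum([w for _, w in segments])])
    P = np.eye(2, dtype=complex)
    for i, x in enumerate(xs):
        P = np.linalg.inv(interface(ks[i + 1], x)) @ interface(ks[i], x) @ P
    r = -P[1, 0] / P[1, 1]          # reflection amplitude (B_right = 0)
    t = P[0, 0] + P[0, 1] * r       # transmission amplitude
    return abs(t) ** 2

# Single rectangular barrier: compare with the textbook tunneling formula.
E, V, L = 0.5, 1.0, 1.0
kappa = np.sqrt(2 * (V - E))
exact = 1.0 / (1.0 + V**2 * np.sinh(kappa * L)**2 / (4 * E * (V - E)))
print(transmission(E, [(V, L)]), exact)
```

Because the potential is piecewise constant, the TMM result reproduces the analytic single-barrier transmission to rounding error; stacking more segments models arbitrary band profiles the same way.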
A General Synthetic Approach to Functionalized Dihydrooxepines
Nicolaou, K. C.; Yu, Ruocheng; Shi, Lei; Cai, Quan; Lu, Min; Heretsch, Philipp
2013-01-01
A three-step sequence to access functionalized 4,5-dihydrooxepines from cyclohexenones has been developed. This approach features a regioselective Baeyer–Villiger oxidation and subsequent functionalization via the corresponding enol phosphate intermediate. PMID:23550898
Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2013-01-01
A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
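Morelli's orthogonal-function algorithm itself is more elaborate, but the underlying idea of estimating transfer function parameters from frequency-domain data can be sketched with a simple equation-error least-squares fit. The first-order model H(s) = b/(s + a), its true parameter values, and the noise level below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic frequency response data for H(s) = b/(s + a), a = 2, b = 3,
# as might be obtained by Fourier transformation of time series data.
a_true, b_true = 2.0, 3.0
w = np.linspace(0.1, 10.0, 50)
H = b_true / (1j * w + a_true)
H += 0.001 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))

# Equation error: H(jw)*(jw + a) = b  rearranges to  -a*H + b = jw*H.
# Stack real and imaginary parts into a real least-squares problem in [a, b].
A = np.concatenate([np.column_stack([-H.real, np.ones(50)]),
                    np.column_stack([-H.imag, np.zeros(50)])])
y = np.concatenate([(1j * w * H).real, (1j * w * H).imag])
(a_est, b_est), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a_est, b_est)
```

With low-noise data the recovered (a, b) are close to the true values; model structure determination, i.e. choosing the orders of the numerator and denominator, is the part the orthogonal modeling functions automate.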
Gehring, Ulrike; Hoek, Gerard; Keuken, Menno; Jonkers, Sander; Beelen, Rob; Eeftens, Marloes; Postma, Dirkje S.; Brunekreef, Bert
2015-01-01
2015. Air pollution and lung function in Dutch children: a comparison of exposure estimates and associations based on land use regression and dispersion exposure modeling approaches. Environ Health Perspect 123:847–851; http://dx.doi.org/10.1289/ehp.1408541 PMID:25839747
Röling, Wilfred F M; van Bodegom, Peter M
2014-01-01
Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches.
Exploring Mouse Protein Function via Multiple Approaches
Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although this accuracy was lower than that of the similarity-based approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in
ERIC Educational Resources Information Center
Alavi, Seyed Mohammad; Bordbar, Soodeh
2017-01-01
Differential Item Functioning (DIF) analysis is a key element in evaluating educational test fairness and validity. One of the frequently cited sources of construct-irrelevant variance is gender which has an important role in the university entrance exam; therefore, it causes bias and consequently undermines test validity. The present study aims…
2008-03-01
…renditions of mammary acini, which were then used to assess and quantify acinar topography and volume. Although TN-C increased acinar surface roughness… epithelial 3-D tissue structure and function. In essence, we devised an algorithm to quantify acinar surface topography and volume in 3-D cultures of…
NASA Astrophysics Data System (ADS)
Creed, I. F.; Band, L. E.
1998-11-01
Functional similarity of catchments implies that we are able to identify the combination of processes that creates a similar response of a specific characteristic of a catchment. We applied the concept of functional similarity to the export of NO3−-N from catchments situated within the Turkey Lakes Watershed, a temperate forest in central Ontario, Canada. Despite the homogeneous nature of the forest, these catchments exhibit substantial variability in the concentrations of NO3−-N in discharge waters, over both time and space. We hypothesized that functional similarity in the export of NO3−-N can be expressed as a function of topographic complexity as topography regulates both the formation and flushing of NO3−-N within the catchment. We tested this hypothesis by exploring whether topographically based similarity indices of the formation and flushing of NO3−-N capture the observed export of NO3−-N over a set of topographically diverse catchments. For catchments with no elevated base concentrations of NO3−-N, the similarity indices explained up to 58% of the variance in the export of NO3−-N. For catchments with elevated base concentrations of NO3−-N, prediction of the export of NO3−-N may have been complicated by the fact that hydrology was governed by a two-component till, with an ablation till overlying a basal till. While the similarity indices captured peak NO3−-N concentrations exported from shallow flow paths emanating from the ablation till, they did not capture base NO3−-N concentrations exported from deep flow paths emanating from the basal till, emphasizing the importance of including shallow and deep flow paths in future similarity indices. The strength of the similarity indices is their potential ability to enable us to discriminate catchments that have visually similar surface characteristics but show distinct NO3−-N export responses and, conversely, to group catchments that have visually dissimilar surface characteristics but are functionally similar.
NASA Astrophysics Data System (ADS)
Lee, Ji-Hwan; Tak, Youngjoo; Lee, Taehun; Soon, Aloysius
Ceria (CeO2-x) is widely studied as a choice electrolyte material for intermediate-temperature (~800 K) solid oxide fuel cells. At this temperature, maintaining the chemical stability and thermo-mechanical integrity of this oxide is of utmost importance. To understand its thermal-elastic properties, we first test the influence of various approximations to the density-functional theory (DFT) exchange-correlation (xc) functional on specific thermal-elastic properties of both CeO2 and Ce2O3. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation (GGA-PBE) with and without an additional Hubbard U applied to the 4f electrons of Ce, as well as the recently popularized hybrid functional due to Heyd, Scuseria, and Ernzerhof (HSE06). We then couple this to a volume-dependent Debye-Grüneisen model to determine the thermodynamic quantities of ceria at arbitrary temperatures. We find that an explicit description of the strong correlation (e.g. via the DFT + U and hybrid functional approaches) is necessary for good agreement with experimental values, in contrast to the mean-field treatment in standard xc approximations (such as LDA or GGA-PBE). We acknowledge support from the Samsung Research Funding Center of Samsung Electronics (SRFC-MA1501-03).
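The Debye part of a Debye-Grüneisen calculation reduces to one-dimensional integrals over the phonon spectrum. A generic sketch of the Debye-model heat capacity (no volume dependence or Grüneisen parameter is included, and the Debye temperature used below is illustrative, not a ceria value):

```python
import numpy as np

def debye_heat_capacity(T, theta_D, n_atoms=1):
    """Molar heat capacity C_V in the Debye model, J/(mol K).
    C_V = 9 n R (T/theta_D)^3 * int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx,
    evaluated here by the trapezoidal rule."""
    R = 8.314462618
    x_D = theta_D / T
    x = np.linspace(1e-6, x_D, 2000)
    integrand = x**4 * np.exp(x) / np.expm1(x)**2
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return 9.0 * n_atoms * R * (T / theta_D)**3 * integral

# High T approaches the Dulong-Petit value 3R ~ 24.9 J/(mol K);
# low T follows the T^3 law.
print(debye_heat_capacity(2000.0, theta_D=400.0))
print(debye_heat_capacity(40.0, theta_D=400.0))
```

In a full Debye-Grüneisen treatment, theta_D is made volume dependent through the Grüneisen parameter and the DFT energy-volume curve, which is where the choice of xc functional enters.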
2006-09-01
Parkinson's Disease, marmoset, MPTP, neuroprotection, oxidative stress, excitotoxicity, metabolic compromise, behavior. …indeed able to prevent neuronal damage due to PD induction by MPTP in mice, marmoset monkeys and rhesus monkeys (Obinu et al., 2002; Araki et al.…). …(anti-oxidative) treatment and L-DOPA symptom treatment will be compared to untreated controls in the marmoset MPTP model. The integrative nature of this…
Li, Xin; Carravetta, Vincenzo; Li, Cui; Monti, Susanna; Rinkevicius, Zilvinas; Ågren, Hans
2016-07-12
Motivated by the growing importance of organometallic nanostructured materials and nanoparticles as microscopic devices for diagnostic and sensing applications, and by the recent considerable development in the simulation of such materials, we here choose a prototype system - para-nitroaniline (pNA) on gold nanoparticles - to demonstrate effective strategies for designing metal nanoparticles with organic conjugates from fundamental principles. We investigated the motion, adsorption mode, and physical chemistry properties of gold-pNA particles, increasing in size, through classical molecular dynamics (MD) simulations in connection with quantum chemistry (QC) calculations. We apply the quantum mechanics-capacitance molecular mechanics method [Z. Rinkevicius et al. J. Chem. Theory Comput. 2014, 10, 989] for calculations of the properties of the conjugate nanoparticles, where time-dependent density functional theory is used for the QM part and a capacitance-polarizability parametrization for the MM part, in which induced dipoles and charges from metallic charge transfer are considered. Dispersion and short-range repulsion forces are included as well. The scheme is applied to one- and two-photon absorption of gold-pNA clusters increasing in size toward the nanometer scale. Charge imaging of the surface introduces red shifts, both because of an altered excitation energy dependence and because of variation of the relative intensity of the inherent states making up the total band profile. For the smaller nanoparticles the differences in the crystal facets are important for the spectral outcome, which is also influenced by the surrounding MM environment.
I. M. Robertson; A. Beaudoin; J. Lambros
2004-01-05
OAK-135 Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems. One aspect of the program will involve the direct
Borodovsky, M.
2013-04-11
Algorithmic methods for gene prediction have been developed and successfully applied to many different prokaryotic genome sequences. As the set of genes in a particular genome is not homogeneous with respect to DNA sequence composition features, the GeneMark.hmm program utilizes two Markov models representing distinct classes of protein coding genes denoted "typical" and "atypical". Atypical genes are those whose DNA features deviate significantly from those classified as typical and they represent approximately 10% of any given genome. In addition to the inherent interest of more accurately predicting genes, the atypical status of these genes may also reflect their separate evolutionary ancestry from other genes in that genome. We hypothesize that atypical genes are largely comprised of those genes that have been relatively recently acquired through lateral gene transfer (LGT). If so, what fraction of atypical genes are such bona fide LGTs? We have made atypical gene predictions for all fully completed prokaryotic genomes; we have been able to compare these results to other "surrogate" methods of LGT prediction.
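The typical/atypical distinction above rests on comparing a sequence's likelihood under competing Markov models of composition. A toy sketch of that idea (first-order models and invented GC-rich vs. AT-rich training sequences; GeneMark.hmm itself uses higher-order, codon-aware models within an HMM):

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def train_markov(seqs, pseudo=1.0):
    """First-order Markov transition probabilities with pseudocounts."""
    counts = np.full((4, 4), pseudo)
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    return sum(np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

# Invented training data: "typical" genes GC-rich, "atypical" AT-rich,
# mimicking the compositional deviation that can mark lateral transfer.
typical = train_markov(["GCGCGGCGCCGCGGGC" * 4])
atypical = train_markov(["ATATTAATTTATAATA" * 4])

query = "GCGGCGCCGGGCGCGC"
label = ("typical" if log_likelihood(query, typical) > log_likelihood(query, atypical)
         else "atypical")
print(label)
```

A gene is flagged atypical when the atypical model assigns it the higher log-likelihood; in a real genome the two models are trained on the genome's own gene classes rather than on toy strings.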
An iterative approach of protein function prediction
2011-01-01
Background Current approaches of predicting protein functions from a protein-protein interaction (PPI) dataset are based on an assumption that the available functions of the proteins (a.k.a. annotated proteins) will determine the functions of the proteins whose functions are unknown yet at the moment (a.k.a. un-annotated proteins). Therefore, the protein function prediction is a mono-directed and one-off procedure, i.e. from annotated proteins to un-annotated proteins. However, the interactions between proteins are mutual rather than static and mono-directed, although functions of some proteins are unknown for some reasons at present. That means when we use the similarity-based approach to predict functions of un-annotated proteins, the un-annotated proteins, once their functions are predicted, will affect the similarities between proteins, which in turn will affect the prediction results. In other words, the function prediction is a dynamic and mutual procedure. This dynamic feature of protein interactions, however, was not considered in the existing prediction algorithms. Results In this paper, we propose a new prediction approach that predicts protein functions iteratively. This iterative approach incorporates the dynamic and mutual features of PPI interactions, as well as the local and global semantic influence of protein functions, into the prediction. To guarantee predicting functions iteratively, we propose a new protein similarity from protein functions. We adapt new evaluation metrics to evaluate the prediction quality of our algorithm and other similar algorithms. Experiments on real PPI datasets were conducted to evaluate the effectiveness of the proposed approach in predicting unknown protein functions. Conclusions The iterative approach is more likely to reflect the real biological nature between proteins when predicting functions. A proper definition of protein similarity from protein functions is the key to predicting functions iteratively. The
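The iterative idea described above, in which newly predicted functions feed back into later predictions, can be sketched as neighbor-majority label propagation on a toy PPI graph. The graph, the annotations, and the function labels are all invented for illustration; the paper's actual algorithm uses a semantic protein similarity, not a simple majority vote.

```python
from collections import Counter

# Toy PPI graph: protein -> interaction partners (invented).
ppi = {
    "p1": ["p2", "p3"], "p2": ["p1", "p4"], "p3": ["p1", "p4"],
    "p4": ["p2", "p3", "p5"], "p5": ["p4"],
}
functions = {"p1": "transport", "p2": "transport", "p5": "binding"}  # annotated
unknown = [p for p in ppi if p not in functions]

# Iterate until no prediction changes: each un-annotated protein adopts
# the most common function among its currently annotated neighbors, so
# predictions made in one pass influence the next pass.
changed = True
while changed:
    changed = False
    for p in unknown:
        votes = Counter(functions[q] for q in ppi[p] if q in functions)
        if votes:
            best = votes.most_common(1)[0][0]
            if functions.get(p) != best:
                functions[p] = best
                changed = True

print(functions)
```

Here p3 is first labeled from its one annotated neighbor, and that prediction then contributes to the vote for p4: exactly the dynamic, mutual behavior the abstract argues one-off prediction misses.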
Muccioli, Luca; D'Avino, Gabriele; Berardi, Roberto; Orlandi, Silvia; Pizzirusso, Antonio; Ricci, Matteo; Roscioni, Otello Maria; Zannoni, Claudio
2014-01-01
The molecular organization of functional organic materials is one of the research areas where the combination of theoretical modeling and experimental determinations is most fruitful. Here we present a brief summary of the simulation approaches used to investigate the inner structure of organic materials with semiconducting behavior, paying special attention to applications in organic photovoltaics and clarifying the often obscure jargon hindering the access of newcomers to the literature of the field. Special attention is paid to the choice of the computational "engine" (Monte Carlo or Molecular Dynamics) used to generate equilibrium configurations of the molecular system under investigation and, more importantly, to the choice of the chemical details in describing the molecular interactions. Recent literature dealing with the simulation of organic semiconductors is critically reviewed in order of increasing complexity of the system studied, from low molecular weight molecules to semiflexible polymers, including the challenging problem of determining the morphology of heterojunctions between two different materials.
NASA Astrophysics Data System (ADS)
Pizio, O.; Sokołowski, S.; Sokołowska, Z.
2012-12-01
We apply a recently developed version of density functional theory [Z. Wang, L. Liu, and I. Neretnieks, J. Phys.: Condens. Matter 23, 175002 (2011)], 10.1088/0953-8984/23/17/175002 to study adsorption of a restricted primitive model for an ionic fluid in slit-like pores in the absence of interactions induced by electrostatic images. At present this approach is one of the most accurate theories for such model electric double layers. The dependencies of the differential double layer capacitance on the pore width, the electrostatic potential at the wall, the bulk fluid density, and the temperature are obtained. We show that the differential capacitance can oscillate as a function of the pore width, depending on the values of the above parameters. The number of oscillations and their magnitude decrease for high values of the electrostatic potential. For very narrow pores, with widths close to the ion diameter, the differential capacitance tends to a minimum. The dependence of the differential capacitance on temperature exhibits a maximum at different values of the bulk fluid density and applied electrostatic potential.
Analysis of radial basis function interpolation approach
NASA Astrophysics Data System (ADS)
Zou, You-Long; Hu, Fa-Long; Zhou, Can-Can; Li, Chao-Liu; Dunn, Keh-Jim
2013-12-01
The radial basis function (RBF) interpolation approach proposed by Freedman is used to solve inverse problems encountered in well logging and other petrophysical issues. The approach predicts petrophysical properties in the laboratory on the basis of physical rock datasets, which include the formation factor, viscosity, permeability, and molecular composition. However, this approach does not consider the effect of the spatial distribution of the calibration data on the interpolation result. This study proposes a new RBF interpolation approach, based on Freedman's, in which the unit basis functions are uniformly populated in the space domain. The inverse results of the two approaches are comparatively analyzed using our datasets. We determine that although the interpolation effects of the two approaches are equivalent, the new approach is more flexible and helps reduce the number of basis functions when the database is large, resulting in a simpler interpolation function expression. However, the predictions for the central data are not sufficiently satisfactory when the data clusters are far apart.
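The uniform-center idea can be sketched in a few lines. This is an illustrative toy (a 1-D Gaussian RBF fit with uniformly spaced centers and least-squares weights), not the authors' well-logging implementation; all names and values are invented.

```python
import numpy as np

# Gaussian RBF interpolation with basis centers on a uniform grid --
# an illustrative sketch, not the authors' petrophysical code.
rng = np.random.default_rng(0)

def rbf_design(x, centers, width):
    """N x M matrix of Gaussian basis evaluations."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Calibration data: a smooth 1-D "rock property" curve with small noise
x_cal = rng.uniform(0.0, 1.0, 40)
y_cal = np.sin(2 * np.pi * x_cal) + 0.01 * rng.normal(size=x_cal.size)

# Uniformly populated centers: fewer basis functions than data points
centers = np.linspace(0.0, 1.0, 12)
width = 0.15

A = rbf_design(x_cal, centers, width)
coef, *_ = np.linalg.lstsq(A, y_cal, rcond=None)

# Predict at new points and compare against the known underlying curve
x_new = np.linspace(0.05, 0.95, 50)
y_pred = rbf_design(x_new, centers, width) @ coef
err = np.max(np.abs(y_pred - np.sin(2 * np.pi * x_new)))
```

Because the centers are tied to a fixed grid rather than to the calibration points, the number of basis functions stays constant as the database grows, which is the flexibility the abstract refers to.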
Functional capacity evaluation: an empirical approach.
Jette, A M
1980-02-01
This paper presents an empirical approach to selecting activities of daily living (ADL) to assess the functional capacity of noninstitutionalized individuals with polyarticular disability. The results of structural analyses illustrate the feasibility of substantially reducing the task of assessing functional capacity with a subset of ADL items without sacrificing the comprehensiveness of the assessment. The analyses reveal 5 common functional categories: physical mobility, transfers, home chores, kitchen chores, and personal care, which account for over 50% of the variance in the data.
Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan
2015-01-01
Numerous studies have reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ with respect to how closely these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) models and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that the experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise, or purified, representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data from 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf. Taken
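The generative idea behind a fixed-links model can be illustrated with simulated data: scores on five conditions are the sum of a constant latent variable (loadings fixed at 1) and an experimental latent variable with loadings increasing across conditions. The numbers below are hypothetical, not the study's data.

```python
import numpy as np

# Toy generative version of a fixed-links model: five task conditions,
# a constant latent variable (loadings fixed at 1) plus an experimental
# latent variable with loadings increasing across conditions.
rng = np.random.default_rng(1)
n = 2000
constant = rng.normal(size=n)       # processes present in all conditions
experimental = rng.normal(size=n)   # demand-sensitive (WMC-like) processes
load_const = np.ones(5)             # fixed loadings: 1, 1, 1, 1, 1
load_exp = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # fixed, increasing loadings
noise = rng.normal(size=(n, 5)) * 0.5

scores = (constant[:, None] * load_const
          + experimental[:, None] * load_exp + noise)

# Model-implied covariance: Sigma = l_c l_c' + l_e l_e' + 0.25 I
implied = (np.outer(load_const, load_const)
           + np.outer(load_exp, load_exp) + 0.25 * np.eye(5))
sample = np.cov(scores, rowvar=False)
```

Fitting the model runs this logic in reverse: with the loading patterns fixed a priori, the two latent variances are estimated so that the implied covariance reproduces the sample covariance, separating constant from demand-varying processes.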
Functional Risk Modeling for Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed
2010-01-01
We introduce an approach to risk modeling that we call functional modeling, which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems' constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability, the effects of diverse backup, which often exists when two or more independent elements are connected together, are properly accounted for.
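The credit for functional diversity rests on a simple probability identity: a function provided by several independent elements is unavailable only if every provider is down. A minimal sketch, with hypothetical availability numbers:

```python
# Sketch of functional availability with diverse backup: when independent
# elements each provide the same function, the function is lost only if
# all providers are down. Availability numbers are hypothetical.

def function_availability(provider_availabilities):
    """Availability of a function backed by independent providers."""
    p_all_down = 1.0
    for a in provider_availabilities:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

# Example: a habitat and a rover each supply, say, power generation
habitat_power = 0.95
rover_power = 0.90

single = function_availability([habitat_power])            # no backup
diverse = function_availability([habitat_power, rover_power])
```

Tracking only system states would score this function at 0.95; crediting the functionally diverse rover raises it to 1 - 0.05 * 0.10 = 0.995, which is the effect the abstract describes.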
An inverse approach for elucidating dendritic function.
Torben-Nielsen, Benjamin; Stiefel, Klaus M
2010-01-01
We outline an inverse approach for investigating dendritic function-structure relationships by optimizing dendritic trees for a priori chosen computational functions. The inverse approach can be applied in two different ways. First, we can use it as a "hypothesis generator," in which we optimize dendrites for a function of general interest. The optimization yields an artificial dendrite that is subsequently compared to real neurons. This comparison potentially allows us to propose hypotheses about the function of real neurons. In this way, we investigated dendrites that optimally perform input-order detection. Second, we can use it as a "function confirmation" by optimizing dendrites for functions hypothesized to be performed by classes of neurons. If the optimized, artificial dendrites resemble the dendrites of real neurons, the artificial dendrites corroborate the hypothesized function of the real neuron. Moreover, properties of the artificial dendrites can lead to predictions about yet unmeasured properties. In this way, we investigated wide-field motion integration performed by the VS cells of the fly visual system. In outlining the inverse approach and two applications, we also elaborate on the nature of dendritic function. We furthermore discuss the role of optimality in assigning functions to dendrites and point out interesting future directions.
NASA Astrophysics Data System (ADS)
Foglio, M. E.; Lobo, T.; Figueira, M. S.
2012-09-01
We consider the cumulant expansion of the periodic Anderson model (PAM) in the case of a finite electronic correlation U, employing the hybridization as perturbation, and obtain a formal expression of the exact one-electron Green's function (GF). This expression contains effective cumulants that are as difficult to calculate as the original GF, and the atomic approach consists in substituting the effective cumulants by the ones that correspond to the atomic case, namely by taking a conduction band of zero width and local hybridization. In a previous work (T. Lobo, M. S. Figueira, and M. E. Foglio, Nanotechnology 21, 274007 (2010), 10.1088/0957-4484/21/27/274007) we developed the atomic approach by considering only one variational parameter that is used to adjust the correct height of the Kondo peak by imposing the satisfaction of the Friedel sum rule. To obtain the correct width of the Kondo peak in the present work, we consider an additional variational parameter that fixes this quantity. The two constraints now imposed on the formalism are the satisfaction of the Friedel sum rule and the correct Kondo temperature. In the first part of the work, we present a general derivation of the method for the single impurity Anderson model (SIAM), and we calculate several densities of states representative of the Kondo regime for finite correlation U, including the symmetrical case. In the second part, we apply the method to study the electronic transport through a quantum dot (QD) embedded in a quantum wire (QW), which is realized experimentally by a single electron transistor (SET). We calculate the conductance of the SET and obtain good agreement with available experimental and theoretical results.
Defining Function in the Functional Medicine Model.
Bland, Jeffrey
2017-02-01
In the functional medicine model, the word function is aligned with the evolving understanding that disease is an endpoint and function is a process. Function can move both forward and backward. The vector of change in function through time is, in part, determined by the unique interaction of an individual's genome with their environment, diet, and lifestyle. The functional medicine model for health care is concerned less with what we call the dysfunction or disease, and more with the dynamic processes that resulted in the person's dysfunction. The previous concept of functional somatic syndromes as psychosomatic in origin has now been replaced with a new concept of function that is rooted in the emerging 21st-century understanding of systems network-enabled biology.
A functional approach to the TMJ disorders.
Deodato, F; Cristiano, S; Trusendi, R; Giorgetti, R
2003-01-01
This manuscript describes our conservative approach to the treatment of TMJ disorders. The method we use was suggested by Rocabado; its aims are joint distraction through the elimination of compression, restoration of physiologic articular rest, mobilization of the soft tissues, and, whenever possible, improvement of the condyle-disk-glenoid fossa relationship. To support these claims, two clinical cases are presented in which the non-invasive therapy was successful. The results obtained confirm the validity of this functional approach.
I. Robertson; A. Beaudoin; J. Lambros
2005-01-31
Development and validation of constitutive models for polycrystalline materials subjected to high strain rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions (foreign object damage, high-strain rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models, which will include dislocation-grain boundary interactions for polycrystalline systems. One aspect of the program will involve the direct observation
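As a hedged illustration of the kind of rate- and temperature-dependent constitutive model the program targets, the widely used empirical Johnson-Cook relation combines strain hardening, strain-rate hardening, and thermal softening. The parameter values below are hypothetical, not fitted to any material in this report.

```python
import numpy as np

# Johnson-Cook flow stress: sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0))
#                                 * (1 - T*^m),  T* = (T - Tr)/(Tm - Tr).
# A standard empirical constitutive model, shown here only as an
# illustration; the parameter values are hypothetical.

def johnson_cook(strain, strain_rate, T,
                 A=300.0, B=450.0, n=0.3, C=0.02, m=1.0,
                 ref_rate=1.0, T_room=293.0, T_melt=1600.0):
    """Flow stress in MPa."""
    T_star = (T - T_room) / (T_melt - T_room)
    hardening = A + B * strain ** n          # strain hardening
    rate_term = 1.0 + C * np.log(strain_rate / ref_rate)  # rate hardening
    thermal = 1.0 - T_star ** m              # thermal softening
    return hardening * rate_term * thermal

quasi_static = johnson_cook(0.1, 1.0, 293.0)
high_rate = johnson_cook(0.1, 1e4, 293.0)    # rate hardening raises stress
hot = johnson_cook(0.1, 1e4, 800.0)          # heating softens the material
```

Physically based models of the kind the program pursues would replace these empirical terms with dislocation-density evolution and grain-boundary interaction terms, but the observable trends (rate hardening, thermal softening) are the same quantities the experiments measure.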
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
Detection of Differential Item Functioning Using the Lasso Approach
ERIC Educational Resources Information Center
Magis, David; Tuerlinckx, Francis; De Boeck, Paul
2015-01-01
This article proposes a novel approach to detect differential item functioning (DIF) among dichotomously scored items. Unlike standard DIF methods that perform an item-by-item analysis, we propose the "LR lasso DIF method," in which a logistic regression (LR) model is formulated for all item responses. The model contains item-specific intercepts,…
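A toy version of such an analysis can be sketched as one logistic regression over all item responses, with item intercepts, an ability covariate, a group effect, and lasso-penalized item-by-group interactions (the DIF parameters), fit here with a simple proximal-gradient (ISTA) loop. The simulation and the fitting details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of an "LR lasso DIF" analysis on simulated data: item 3 is given
# genuine DIF; the lasso on the item-by-group interactions should single
# it out. Everything below is illustrative, not the authors' code.
rng = np.random.default_rng(2)
n_person, n_item = 500, 4
ability = rng.normal(size=n_person)
group = rng.integers(0, 2, n_person)            # reference vs focal group
item_easiness = np.array([0.5, 0.0, -0.5, 0.0])
dif = np.array([0.0, 0.0, 0.0, 1.5])            # item 3 favors the focal group

logit = (ability[:, None] + item_easiness[None, :]
         + dif[None, :] * group[:, None])
y = (rng.uniform(size=logit.shape) < 1 / (1 + np.exp(-logit))).astype(float)

# Long-format design: item dummies, ability, group, item-by-group dummies
item_idx = np.tile(np.arange(n_item), n_person)
X_item = np.eye(n_item)[item_idx]
X_abil = np.repeat(ability, n_item)[:, None]
X_grp = np.repeat(group, n_item)[:, None]
X_int = X_item * X_grp                          # DIF columns (penalized)
X = np.hstack([X_item, X_abil, X_grp, X_int])
t = y.ravel()

# ISTA: gradient step on the logistic loss, soft-threshold DIF columns only
beta = np.zeros(X.shape[1])
lam, step = 5.0, 5e-4
pen = np.zeros(X.shape[1])
pen[-n_item:] = 1.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta -= step * X.T @ (p - t)
    beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam * pen, 0.0)

dif_est = beta[-n_item:]
```

Because only the interactions are penalized, a shared group shift is absorbed by the unpenalized group effect, and nonzero interaction coefficients flag DIF items directly.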
From data to function: functional modeling of poultry genomics data.
McCarthy, F M; Lyons, E
2013-09-01
One of the challenges of functional genomics is to create a better understanding of the biological system being studied so that the data produced are leveraged to provide gains for agriculture, human health, and the environment. Functional modeling enables researchers to make sense of these data as it reframes a long list of genes or gene products (mRNA, ncRNA, and proteins) by grouping based upon function, be it individual molecular functions or interactions between these molecules or broader biological processes, including metabolic and signaling pathways. However, poultry researchers have been hampered by a lack of functional annotation data, tools, and training to use these data and tools. Moreover, this lack is becoming more critical as new sequencing technologies enable us to generate data not only for an increasingly diverse range of species but also individual genomes and populations of individuals. We discuss the impact of these new sequencing technologies on poultry research, with a specific focus on what functional modeling resources are available for poultry researchers. We also describe key strategies for researchers who wish to functionally model their own data, providing background information about functional modeling approaches, the data and tools to support these approaches, and the strengths and limitations of each. Specifically, we describe methods for functional analysis using Gene Ontology (GO) functional summaries, functional enrichment analysis, and pathways and network modeling. As annotation efforts begin to provide the fundamental data that underpin poultry functional modeling (such as improved gene identification, standardized gene nomenclature, temporal and spatial expression data and gene product function), tool developers are incorporating these data into new and existing tools that are used for functional modeling, and cyberinfrastructure is being developed to provide the necessary extendibility and scalability for storing and
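Functional enrichment analysis, one of the approaches described, reduces at its simplest to a hypergeometric test: is a function (e.g. a GO term) over-represented in a study gene list relative to the genome background? A minimal sketch with invented counts:

```python
from scipy.stats import hypergeom

# One-term GO enrichment test via the hypergeometric distribution.
# All counts below are hypothetical illustrations.
genome_size = 15000     # annotated genes in the background genome
term_genes = 300        # genes annotated to the term genome-wide
study_size = 200        # e.g. differentially expressed genes
term_in_study = 15      # study genes carrying the term

# P(X >= term_in_study) under random sampling without replacement
p_value = hypergeom.sf(term_in_study - 1, genome_size, term_genes, study_size)

expected = study_size * term_genes / genome_size  # genes expected by chance
```

With only 4 genes expected by chance, observing 15 yields a small p-value; real tools (e.g. GO enrichment software) additionally correct for testing many terms at once.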
Schoville, Benjamin J; Brown, Kyle S; Harris, Jacob A; Wilkins, Jayne
2016-01-01
The Middle Stone Age (MSA) is associated with early evidence for symbolic material culture and complex technological innovations. However, one of the most visible aspects of MSA technologies is unretouched triangular stone points that appear in the archaeological record as early as 500,000 years ago in Africa and persist throughout the MSA. How these tools were being used and discarded across a changing Pleistocene landscape can provide insight into how MSA populations prioritized technological and foraging decisions. Creating inferential links between experimental and archaeological tool use helps to establish prehistoric tool function, but is complicated by the overlaying of post-depositional damage onto behaviorally worn tools. Taphonomic damage patterning can provide insight into site formation history, but may preclude behavioral interpretations of tool function. Here, multiple experimental processes that form edge damage on unretouched lithic points from taphonomic and behavioral processes are presented. These provide experimental distributions of wear on tool edges from known processes that are then quantitatively compared to the archaeological patterning of stone point edge damage from three MSA lithic assemblages-Kathu Pan 1, Pinnacle Point Cave 13B, and Die Kelders Cave 1. By using a model-fitting approach, the results presented here provide evidence for variable MSA behavioral strategies of stone point utilization on the landscape consistent with armature tips at KP1, and cutting tools at PP13B and DK1, as well as damage contributions from post-depositional sources across assemblages. This study provides a method with which landscape-scale questions of early modern human tool-use and site-use can be addressed.
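The model-fitting approach can be caricatured as expressing an assemblage's edge-damage distribution as a non-negative mixture of experimentally derived process distributions. The sketch below uses invented distributions, not the paper's data:

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the model-fitting idea: an archaeological distribution of edge
# damage (here, damage frequency along 20 tool-edge segments) is modeled
# as a non-negative mixture of distributions produced by known
# experimental processes. All distributions are invented for illustration.
segments = np.arange(20)

def peaked(center, width):
    d = np.exp(-((segments - center) / width) ** 2)
    return d / d.sum()

# Experimental "reference" wear distributions from known processes
spear_tip = peaked(2.0, 2.5)     # armature use: damage concentrated at tip
cutting = peaked(10.0, 6.0)      # cutting: damage spread along the margins
trampling = np.full(20, 1 / 20)  # post-depositional: roughly uniform

A = np.column_stack([spear_tip, cutting, trampling])

# A hypothetical assemblage: mostly tip damage plus taphonomic noise
observed = 0.7 * spear_tip + 0.3 * trampling

weights, residual = nnls(A, observed)
```

The fitted weights attribute the observed damage to behavioral versus post-depositional sources, which is the kind of inference drawn for KP1 (armature tips) versus PP13B and DK1 (cutting).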
Linearized path integral approach for calculating nonadiabatic time correlation functions.
Bonella, Sara; Montemayor, Daniel; Coker, David F
2005-05-10
We show that quantum time correlation functions including electronically nonadiabatic effects can be computed by using an approach in which their path integral expression is linearized in the difference between forward and backward nuclear paths while the electronic component of the amplitude, represented in the mapping formulation, can be computed exactly, leading to classical-like equations of motion for all degrees of freedom. The efficiency of this approach is demonstrated in some simple model applications.
Model dielectric functions and conservation laws
NASA Astrophysics Data System (ADS)
Shirley, Eric L.
2003-03-01
There continues to be a need for calculating dielectric screening of charges in solids. Most work has been done in the random-phase approximation (RPA) with minor variations, which proves to be quite accurate for many applications. However, this is still a time-consuming and computationally intensive approach, and model dielectric functions can be valuable for this reason. This talk discusses several conservation laws related to dielectric screening and a model dielectric function that obeys such laws. Shortcomings of model functions that are difficult to overcome will be touched on, and a possible means of combining results from RPA and model calculations will be addressed.
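A minimal example of checking a model dielectric function against a conservation law: the Drude model, tested numerically against the f-sum rule for the loss function. The Drude form is a stand-in for the model discussed in the talk, and the parameter values are arbitrary illustrations.

```python
import numpy as np

# Model dielectric function vs. conservation law: the Drude dielectric
# function and the longitudinal f-sum rule for the loss function,
#   integral_0^inf omega * Im[-1/eps(omega)] d omega = (pi/2) * omega_p^2.
# Units and parameter values are arbitrary illustrations.
omega_p = 1.0    # plasma frequency
gamma = 0.05     # damping

omega = np.linspace(1e-4, 60.0, 600000)
eps = 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)
loss = np.imag(-1.0 / eps)   # energy-loss function, peaked near omega_p

# Trapezoidal integration of omega * loss(omega)
f = omega * loss
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega))
sum_rule = 0.5 * np.pi * omega_p**2
```

Any candidate model dielectric function can be screened the same way: if the numerically integrated loss function misses the sum-rule value, the model violates particle-number conservation at high frequency.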
Modeling Protein Domain Function
ERIC Educational Resources Information Center
Baker, William P.; Jones, Carleton "Buck"; Hull, Elizabeth
2007-01-01
This simple but effective laboratory exercise helps students understand the concept of protein domain function. They use foam beads, Styrofoam craft balls, and pipe cleaners to explore how domains within protein active sites interact to form a functional protein. The activity allows students to gain content mastery and an understanding of the…
Brain Functioning Models for Learning.
ERIC Educational Resources Information Center
Tipps, Steve; And Others
This paper describes three models of brain function, each of which contributes to an integrated understanding of human learning. The first model, the up-and-down model, emphasizes the interconnection between brain structures and functions, and argues that since physiological, emotional, and cognitive responses are inseparable, the learning context…
A Bayesian geostatistical transfer function approach to tracer test analysis
NASA Astrophysics Data System (ADS)
Fienen, Michael N.; Luo, Jian; Kitanidis, Peter K.
2006-07-01
Reactive transport modeling is often used in support of bioremediation and chemical treatment planning and design. There remains a pressing need for practical and efficient models that do not require the high level of characterization, often unattainable in practice, needed by complex numerical models. We focus on a linear systems or transfer function approach to the problem of reactive tracer transport in a heterogeneous saprolite aquifer. Transfer functions are obtained through the Bayesian geostatistical inverse method applied to tracer injection histories and breakthrough curves. We employ nonparametric transfer functions, which require minimal assumptions about shape and structure. The resulting flexibility empowers the data to determine the nature of the transfer function with minimal prior assumptions. Nonnegativity is enforced through a reflected Brownian motion stochastic model. The inverse method enables us to quantify uncertainty and to generate conditional realizations of the transfer function. Complex information about a hydrogeologic system is distilled into a relatively simple but rigorously obtained function that describes the transport behavior of the system between two wells. The resulting transfer functions are valuable in reactive transport models based on traveltime and streamline methods. The information contained in the data, particularly in the case of strong heterogeneity, is not overextended but is fully used. This is the first application of Bayesian geostatistical inversion to transfer functions in hydrogeology, but the methodology can be extended to any linear system.
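The linear-systems core of the method is that the breakthrough curve is the convolution of the injection history with the transfer function. A forward sketch with a hypothetical gamma-shaped transfer function (the paper instead infers a nonparametric one from data):

```python
import numpy as np

# Linear-systems view of tracer transport: the breakthrough curve equals
# the injection history convolved with the aquifer's transfer function
# (a nonnegative travel-time distribution). The gamma shape below is a
# hypothetical stand-in for the nonparametric function inferred in the paper.
dt = 0.1                        # time step, days
t = np.arange(0, 60, dt)

# Injection history: a 2-day square pulse of unit concentration
injection = np.where(t < 2.0, 1.0, 0.0)

# Nonnegative transfer function with unit area (gamma-shaped)
shape, scale = 4.0, 2.0
g = t ** (shape - 1) * np.exp(-t / scale)
g /= np.sum(g) * dt

# Breakthrough curve observed at the downgradient well
breakthrough = np.convolve(injection, g)[: t.size] * dt

# Mass balance: total tracer out equals total tracer in
mass_in = np.sum(injection) * dt
mass_out = np.sum(np.convolve(injection, g)) * dt * dt
```

The inverse problem the paper solves runs this forward model backward: given the injection and breakthrough series, estimate g with uncertainty, subject to nonnegativity.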
Hydraulic Modeling of Lock Approaches
2016-08-01
cation was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall. … magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two
Functional genomics approaches in parasitic helminths.
Hagen, J; Lee, E F; Fairlie, W D; Kalinna, B H
2012-01-01
As research on parasitic helminths is moving into the post-genomic era, an enormous effort is directed towards deciphering gene function and to achieve gene annotation. The sequences that are available in public databases undoubtedly hold information that can be utilized for new interventions and control but the exploitation of these resources has until recently remained difficult. Only now, with the emergence of methods to genetically manipulate and transform parasitic worms will it be possible to gain a comprehensive understanding of the molecular mechanisms involved in nutrition, metabolism, developmental switches/maturation and interaction with the host immune system. This review focuses on functional genomics approaches in parasitic helminths that are currently used, to highlight potential applications of these technologies in the areas of cell biology, systems biology and immunobiology of parasitic helminths.
An approach to metering and network modeling
Adibi, M.M.; Clements, K.A.; Kafka, R.J.; Stovall, J.P.
1992-01-01
Estimation of the static state of an electric power network has become a standard function in real-time monitoring and control. Its purpose is to use the network model and process the metering data in order to determine an accurate and reliable estimate of the system state in the real-time environment. The models usually used assume that the network parameters and topology are free of errors and that the measurement system provides unbiased data having a known distribution. The network and metering models, however, contain errors which frequently result in either non-convergent behavior of the state estimator or exceedingly large residuals, reducing the level of confidence in the results. This paper describes an approach that minimizes the above uncertainties by analyzing the data which are routinely collected at the power system control center. The approach will improve the reliability of the real-time database while reducing the state estimator installation and maintenance effort.
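The state estimation function referred to above is, at its core, a weighted least-squares fit of the network state to redundant metering data. A minimal linear (DC-power-flow-style) sketch with a hypothetical three-bus network:

```python
import numpy as np

# Weighted-least-squares state estimation in miniature: a linear
# measurement model z = H x + noise with redundant meters of differing
# accuracies. The network and all numbers are hypothetical.
rng = np.random.default_rng(3)

x_true = np.array([0.0, -0.05, -0.10])   # bus voltage angles (rad)

# Measurement model: two flow-like angle differences + three angle meters
H = np.array([
    [1.0, -1.0, 0.0],
    [0.0, 1.0, -1.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
sigma = np.array([0.002, 0.002, 0.01, 0.01, 0.01])  # meter std deviations
z = H @ x_true + rng.normal(scale=sigma)

# WLS estimate: x_hat = (H' W H)^-1 H' W z, with W = inverse noise variances
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
residuals = z - H @ x_hat
```

Large residuals are precisely the symptom the abstract mentions: with biased meters or wrong topology, the WLS residuals blow up (or the iterative nonlinear estimator fails to converge), which is what motivates auditing the metering and network models.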
Modeling NMR lineshapes using logspline density functions.
Raz, J; Fernandez, E J; Gillespie, J
1997-08-01
Distortions in the FID and spin echo due to magnetic field inhomogeneity are proved to have a representation as the characteristic function of some probability distribution. In the special case that the distribution is Cauchy, the model reduces to the conventional Lorentzian model. A more general and flexible representation is presented using the Fourier transform of a logspline density. An algorithm for fitting the model is described, the performance of the model and algorithm is investigated in applications to real and simulated data sets, and the logspline approach is compared to a previous Hermitian spline approach and to the Lorentzian model. The logspline model is more parsimonious than the Hermitian spline model, provides a better fit to real data, and is much less biased than the Lorentzian model.
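The Cauchy special case can be verified numerically: averaging the phase factor over Cauchy-distributed frequency offsets reproduces the characteristic function exp(-gamma*|t|), i.e. an exponentially decaying FID and hence a Lorentzian line. A Monte Carlo sketch with hypothetical values:

```python
import numpy as np

# The special case cited in the abstract, checked numerically: the FID
# envelope from a Cauchy distribution of frequency offsets equals the
# Cauchy characteristic function exp(-gamma*|t|) -- the Lorentzian model.
rng = np.random.default_rng(4)
gamma = 2.0                                  # Cauchy half-width (rad/s)

offsets = gamma * rng.standard_cauchy(20000)  # inhomogeneity offsets
t = np.linspace(0.0, 2.0, 41)                 # acquisition times

# Monte Carlo FID: average the phase factor over the offset distribution
fid = np.exp(1j * offsets[:, None] * t[None, :]).mean(axis=0)

envelope = np.exp(-gamma * t)   # characteristic function of the Cauchy law
max_err = np.max(np.abs(fid - envelope))
```

The logspline model generalizes exactly this construction: it keeps the "FID = characteristic function of the offset distribution" representation but lets a flexible spline density replace the Cauchy assumption.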
The functions of autobiographical memory: an integrative approach.
Harris, Celia B; Rasmussen, Anne S; Berntsen, Dorthe
2014-01-01
Recent research in cognitive psychology has emphasised the uses, or functions, of autobiographical memory. Theoretical and empirical approaches have focused on a three-function model: autobiographical memory serves self, directive, and social functions. In the reminiscence literature other taxonomies and additional functions have been postulated. We examined the relationships between functions proposed by these literatures, in order to broaden conceptualisations and make links between research traditions. In Study 1 we combined two measures of individual differences in the uses of autobiographical memory. Our results suggested four classes of memory functions, which we labelled Reflective, Generative, Ruminative, and Social. In Study 2 we tested relationships between our four functions and broader individual differences, and found conceptually consistent relationships. In Study 3 we found that memories cued by Generative and Social functions were more emotionally positive than were memories cued by Reflective and Ruminative functions. In Study 4 we found that reported use of Generative functions increased across the lifespan, while reported use of the other three functions decreased. Overall our findings suggest a broader view of autobiographical memory functions that links them to ways in which people make meaning of their selves, their environment, and their social world more generally.
Synchronization-based approach for detecting functional activation of brain
NASA Astrophysics Data System (ADS)
Hong, Lei; Cai, Shi-Min; Zhang, Jie; Zhuo, Zhao; Fu, Zhong-Qian; Zhou, Pei-Ling
2012-09-01
In this paper, we investigate a synchronization-based, data-driven clustering approach for the analysis of functional magnetic resonance imaging (fMRI) data, and specifically for detecting functional activation from fMRI data. We first define a new measure of similarity between all pairs of data points (i.e., time series of voxels) integrating both complete phase synchronization and amplitude correlation. These pairwise similarities are taken as the coupling between a set of Kuramoto oscillators, which in turn evolve according to a nearest-neighbor rule. As the network evolves, similar data points naturally synchronize with each other, and distinct clusters will emerge. The clustering behavior of the interaction network of the coupled oscillators, therefore, mirrors the clustering property of the original multiple time series. The clustered regions whose cross-correlation coefficients are much greater than those of other regions are considered to be the functionally activated brain regions. The analysis of fMRI data in auditory and visual areas shows that the recognized brain functional activations are in complete correspondence with those from the general linear model of statistical parametric mapping, but with a significantly lower time complexity. We further compare our results with those from the traditional K-means approach, and find that our new clustering approach can distinguish between different response patterns more accurately and efficiently than the K-means approach, and is therefore more suitable for detecting functional activation from event-related experimental fMRI data.
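A toy version conveys the mechanism: pairwise signal similarities serve as coupling strengths among Kuramoto phase oscillators, and similar series pull each other into phase. Synthetic signals stand in for fMRI voxel time series, plain correlation stands in for the paper's combined phase/amplitude similarity, and all-to-all dynamics replace the nearest-neighbor rule.

```python
import numpy as np

# Toy synchronization-based clustering: phase oscillators coupled by
# pairwise signal similarity; members of the same cluster synchronize.
# Synthetic data, simplified dynamics -- not the paper's fMRI pipeline.
rng = np.random.default_rng(5)
tt = np.linspace(0, 10, 200)

# Two groups of noisy time series with distinct underlying signals
group_a = np.sin(tt) + 0.2 * rng.normal(size=(5, tt.size))
group_b = np.sin(3 * tt) + 0.2 * rng.normal(size=(5, tt.size))
signals = np.vstack([group_a, group_b])

# Coupling = nonnegative correlation between time series
K = np.clip(np.corrcoef(signals), 0.0, None)
np.fill_diagonal(K, 0.0)

# Kuramoto dynamics with identical natural frequencies:
#   d theta_i / dt = sum_j K_ij * sin(theta_j - theta_i)
theta = rng.uniform(0, 2 * np.pi, 10)
dt = 0.05
for _ in range(2000):
    coupling = np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta = theta + dt * coupling

# Within-group order parameters approach 1 as each group synchronizes
phase = np.exp(1j * theta)
within_a = np.abs(phase[:5].mean())
within_b = np.abs(phase[5:].mean())
```

Because cross-group similarities are near zero, each group locks to its own common phase; reading off the synchronized groups recovers the clusters, mirroring how activated voxels separate from background in the fMRI application.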
HEDR modeling approach: Revision 1
Shipler, D.B.; Napier, B.A.
1994-05-01
This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.
A Transfer Learning Approach for Network Modeling
Huang, Shuai; Li, Jing; Chen, Kewei; Wu, Teresa; Ye, Jieping; Wu, Xia; Yao, Li
2012-01-01
Network models have been widely used in many domains to characterize the interacting relationship between physical entities. A typical problem faced is to identify the networks of multiple related tasks that share some similarities. In this case, a transfer learning approach that can leverage the knowledge gained during the modeling of one task to help better model another task is highly desirable. In this paper, we propose a transfer learning approach, which adopts a Bayesian hierarchical model framework to characterize task relatedness and additionally uses the L1-regularization to ensure robust learning of the networks with limited sample sizes. A method based on the Expectation-Maximization (EM) algorithm is further developed to learn the networks from data. Simulation studies are performed, which demonstrate the superiority of the proposed transfer learning approach over single task learning that learns the network of each task in isolation. The proposed approach is also applied to identification of brain connectivity networks of Alzheimer’s disease (AD) from functional magnetic resonance image (fMRI) data. The findings are consistent with the AD literature. PMID:24526804
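The borrowing-of-strength idea can be sketched far more simply than the paper's EM-based hierarchical model: shrink each task's noisy network estimate toward a precision-weighted shared mean. This is a stand-in illustration with invented numbers, not the proposed algorithm.

```python
import numpy as np

# Much-simplified sketch of the transfer idea: task-specific network
# (edge-weight) estimates are shrunk toward a shared mean, so a task with
# noisy data borrows strength from a related, well-estimated task.
rng = np.random.default_rng(6)
n_nodes = 6

shared = rng.normal(size=(n_nodes, n_nodes))
shared = (shared + shared.T) / 2                 # common network structure
task_a_true = shared + 0.1 * rng.normal(size=shared.shape)
task_b_true = shared + 0.1 * rng.normal(size=shared.shape)

# Noisy single-task estimates (task B has far noisier data)
est_a = task_a_true + 0.1 * rng.normal(size=shared.shape)
est_b = task_b_true + 1.0 * rng.normal(size=shared.shape)

def shrink(estimates, noise_vars, prior_var=0.1**2):
    """Shrink each task's estimate toward the precision-weighted mean."""
    w = np.array([1.0 / v for v in noise_vars])
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / w.sum()
    out = []
    for e, v in zip(estimates, noise_vars):
        k = prior_var / (prior_var + v)          # Bayesian shrinkage factor
        out.append(pooled + k * (e - pooled))
    return out

new_a, new_b = shrink([est_a, est_b], [0.1**2, 1.0**2])

err_b_single = np.linalg.norm(est_b - task_b_true)
err_b_transfer = np.linalg.norm(new_b - task_b_true)
```

The full method additionally enforces sparsity via L1-regularization and learns the shared structure and variances jointly by EM, but the gain has the same source: the noisy task's estimate collapses toward information contributed by related tasks.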
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
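The two ingredients of the technique — orthogonalizing candidate regressors so each term's contribution to the fit is decoupled, and scoring models with a prediction error metric that penalizes complexity — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the `pse` penalty shown is one common form of a predicted-squared-error criterion, with `sigma_max_sq` an assumed noise-variance bound.

```python
import numpy as np

def orthogonalize(X):
    """Gram-Schmidt: make each candidate regressor orthogonal to the ones
    already kept, so each term can be added or dropped independently."""
    Q = []
    for x in X.T:
        v = x.astype(float).copy()
        for q in Q:
            v -= (q @ v) / (q @ q) * q   # remove component along kept term
        Q.append(v)
    return np.column_stack(Q)

def fit_orthogonal_model(X, y):
    """Least squares in the orthogonal basis; with orthogonal columns each
    coefficient is a simple, decoupled projection."""
    Q = orthogonalize(X)
    coef = np.array([(q @ y) / (q @ q) for q in Q.T])
    return Q, coef

def pse(y, yhat, n_terms, sigma_max_sq):
    """Predicted squared error: fit MSE plus an overfit penalty growing with
    model complexity (one common form of the selection metric)."""
    return np.mean((y - yhat) ** 2) + sigma_max_sq * n_terms / len(y)
```

Automatic term selection then amounts to adding orthogonal terms in order of decreasing fit improvement and stopping when `pse` starts to rise.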
Functional phosphoproteomic mass spectrometry-based approaches
2012-01-01
Mass Spectrometry (MS)-based phosphoproteomics tools are crucial for understanding the structure and dynamics of signaling networks. Approaches such as affinity purification followed by MS have also been used to elucidate relevant biological questions in health and disease. The study of proteomes and phosphoproteomes as linked systems, rather than as collections of individual proteins, is necessary to understand the functions of phosphorylated and un-phosphorylated proteins under spatial and temporal conditions. Phosphoproteome studies also facilitate drug target protein identification which may be clinically useful in the near future. Here, we provide an overview of general principles of signaling pathways versus phosphorylation. Likewise, we detail chemical phosphoproteomic tools, including pros and cons with examples where these methods have been applied. In addition, the basics of electrospray ionization and collision-induced dissociation fragmentation are explained in a simple manner to support successful phosphoproteomic clinical studies. PMID:23369623
Modeling Approaches in Planetary Seismology
NASA Technical Reports Server (NTRS)
Weber, Renee; Knapmeyer, Martin; Panning, Mark; Schmerr, Nick
2014-01-01
Of the many geophysical means that can be used to probe a planet's interior, seismology remains the most direct. Given that the seismic data gathered on the Moon over 40 years ago revolutionized our understanding of the Moon and are still being used today to produce new insight into the state of the lunar interior, it is no wonder that many future missions, both real and conceptual, plan to take seismometers to other planets. To best facilitate the return of high-quality data from these instruments, as well as to further our understanding of the dynamic processes that modify a planet's interior, various modeling approaches are used to quantify parameters such as the amount and distribution of seismicity, tidal deformation, and seismic structure on and of the terrestrial planets. In addition, recent advances in wavefield modeling have permitted a renewed look at seismic energy transmission and the effects of attenuation and scattering, as well as the presence and effect of a core, on recorded seismograms. In this chapter, we will review these approaches.
Green functions of graphene: An analytic approach
NASA Astrophysics Data System (ADS)
Lawlor, James A.; Ferreira, Mauro S.
2015-04-01
In this article we derive the lattice Green Functions (GFs) of graphene using a Tight Binding Hamiltonian incorporating both first and second nearest neighbour hoppings and allowing for a non-orthogonal electron wavefunction overlap. It is shown how the resulting GFs can be simplified from a double to a single integral form to aid computation, and that when considering off-diagonal GFs in the high symmetry directions of the lattice this single integral can be approximated very accurately by an algebraic expression. By comparing our results to the conventional first nearest neighbour model commonly found in the literature, it is apparent that the extended model leads to a sizeable change in the electronic structure away from the linear regime. As such, this article serves as a blueprint for researchers who wish to examine quantities where these considerations are important.
Bubalo, Marina Cvjetko; Radošević, Kristina; Srček, Višnja Gaurina; Das, Rudra Narayan; Popelier, Paul; Roy, Kunal
2015-02-01
Within this work we evaluated the cytotoxicity towards the Channel Catfish Ovary (CCO) cell line of some imidazolium-based ionic liquids containing different functionalized and unsaturated side chains. The toxic effects were measured by the reduction of the WST-1 dye after 72 h exposure resulting in dose- and structure-dependent toxicities. The obtained data on cytotoxic effects of 14 different imidazolium ionic liquids in CCO cells, expressed as EC50 values, were used in a preliminary quantitative structure-toxicity relationship (QSTR) study employing regression- and classification-based approaches. The toxicity of ILs towards CCO was chiefly related to the shape and hydrophobicity parameters of cations. A significant influence of the quantum topological molecular similarity descriptor ellipticity (ε) of the imine bond was also observed.
Distribution function approach to redshift space distortions
Seljak, Uroš; McDonald, Patrick E-mail: pvmcdonald@lbl.gov
2011-11-01
We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter.
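The structure of the moment expansion can be written schematically as follows (notation follows the abstract; normalization and sign conventions vary between presentations, so this is a sketch rather than the paper's exact formula):

```latex
% Redshift-space density as a sum over density-weighted velocity moments
% T_\parallel^L of the distribution function (schematic form):
\delta_s(\mathbf{k}) \;=\; \sum_{L=0}^{\infty} \frac{1}{L!}
  \left(\frac{i k \mu}{aH}\right)^{L} T_{\parallel}^{L}(\mathbf{k}),
\qquad
T_{\parallel}^{0} \sim \delta,\quad
T_{\parallel}^{1} \sim (1+\delta)\,u_{\parallel},\quad
T_{\parallel}^{2} \sim (1+\delta)\,u_{\parallel}^{2},
```

with the lowest orders being the density, (line-of-sight) momentum density, and stress energy density named in the abstract, and convergence requiring kμu/aH < 1.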
Wavelet-based functional mixed models
Morris, Jeffrey S.; Carroll, Raymond J.
2009-01-01
Summary Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects’ wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. PMID:19759841
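The computational core of the approach — project each observed curve into wavelet space, then model every wavelet coefficient with its own (shrinkage-regularized) mixed model — can be sketched with a hand-rolled orthonormal Haar transform. Everything below is an illustrative stand-in: group means replace the paper's Bayesian mixed model with nonlinear shrinkage priors, and the function names are mine.

```python
import numpy as np

def haar_dwt(y):
    """Full orthonormal Haar decomposition of a length-2^J curve."""
    coeffs = []
    approx = np.asarray(y, dtype=float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # coarser approximation
    coeffs.append(approx)                         # final scaling coefficient
    return np.concatenate(coeffs[::-1])

def coefficient_space_fit(curves, groups):
    """Transform each curve, then model each wavelet coefficient separately
    (group means stand in for the full mixed model with shrinkage priors;
    local features like peaks stay concentrated in few coefficients)."""
    W = np.array([haar_dwt(c) for c in curves])
    return {g: W[[i for i, gg in enumerate(groups) if gg == g]].mean(axis=0)
            for g in set(groups)}
```

Because the Haar transform here is orthonormal, energy (and hence variance decompositions) carry over exactly from the curve domain to the coefficient domain, which is what makes coefficient-by-coefficient modeling legitimate.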
Leveraging modeling approaches: reaction networks and rules.
Blinov, Michael L; Moraru, Ion I
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interactions kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.
Ecosystem structure and function modeling
Humphries, H.C.; Baron, J.S.; Jensen, M.E.; Bourgeron, P.
2001-01-01
An important component of ecological assessments is the ability to predict and display changes in ecosystem structure and function over a variety of spatial and temporal scales. These changes can occur over short (less than 1 year) or long time frames (over 100 years). Models may emphasize structural responses (changes in species composition, growth forms, canopy height, amount of old growth, etc.) or functional responses (cycling of carbon, nutrients, and water). Both are needed to display changes in ecosystem components for use in robust ecological assessments. Structure and function models vary in the ecosystem components included, algorithms employed, level of detail, and spatial and temporal scales incorporated. They range from models that track individual organisms to models of broad-scale landscape changes. This chapter describes models appropriate for ecological assessments. The models selected for inclusion can be implemented in a spatial framework and for the most part have been run in more than one system.
Piehler, Timothy F; Bloomquist, Michael L; August, Gerald J; Gewirtz, Abigail H; Lee, Susanne S; Lee, Wendy S C
2014-01-01
A culturally diverse sample of formerly homeless youth (ages 6-12) and their families (n = 223) participated in a cluster randomized controlled trial of the Early Risers conduct problems prevention program in a supportive housing setting. Parents provided 4 annual behaviorally-based ratings of executive functioning (EF) and conduct problems, including at baseline, over 2 years of intervention programming, and at a 1-year follow-up assessment. Using intent-to-treat analyses, a multilevel latent growth model revealed that the intervention group demonstrated reduced growth in conduct problems over the 4 assessment points. In order to examine mediation, a multilevel parallel process latent growth model was used to simultaneously model growth in EF and growth in conduct problems along with intervention status as a covariate. A significant mediational process emerged, with participation in the intervention promoting growth in EF, which predicted negative growth in conduct problems. The model was consistent with changes in EF fully mediating intervention-related changes in youth conduct problems over the course of the study. These findings highlight the critical role that EF plays in behavioral change and lend further support to its importance as a target in preventive interventions with populations at risk for conduct problems.
Calculus of Functions and Their Inverses: A Unified Approach
ERIC Educational Resources Information Center
Krishnan, Srilal N.
2006-01-01
In this pedagogical article, I explore a unified approach in obtaining the derivatives of functions and their inverses by adopting a guided self-discovery approach. I begin by finding the derivative of the exponential functions and the derivative of their inverses, the logarithmic functions. I extend this approach to generate formulae for the…
Modelling approaches for evaluating multiscale tendon mechanics
Fang, Fei; Lake, Spencer P.
2016-01-01
Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure–function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747
Systematic approach for modeling tetrachloroethene biodegradation
Bagley, D.M.
1998-11-01
The anaerobic biodegradation of tetrachloroethene (PCE) is a reasonably well understood process. Specific organisms capable of using PCE as an electron acceptor for growth require the addition of an electron donor to remove PCE from contaminated ground waters. However, competition from other anaerobic microorganisms for added electron donor will influence the rate and completeness of PCE degradation. The approach developed here allows for the explicit modeling of PCE and byproduct biodegradation as a function of electron donor and byproduct concentrations, and the microbiological ecology of the system. The approach is general and can be easily modified for ready use with in situ ground-water models or ex situ reactor models. Simulations conducted with models developed from this approach show the sensitivity of PCE biodegradation to input parameter values, in particular initial biomass concentrations. Additionally, the dechlorination rate will be strongly influenced by the microbial ecology of the system. Finally, comparison with experimental acclimation results indicates that existing kinetic constants may not be generally applicable. Better techniques for measuring the biomass of specific organism groups in mixed systems are required.
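The modeling structure described — sequential dechlorination whose rates depend on both the chlorinated substrate and the electron donor, with biomass growth feeding back on the rates — can be sketched as a small kinetic model. This is a generic Monod-type sketch, not the author's model: all rate constants, yields, and the simple Euler integration are illustrative assumptions.

```python
import numpy as np

def dechlorination(c0, donor0, x0, hours, dt=0.01,
                   vmax=(0.5, 0.4, 0.3, 0.2), ks=2.0, kd=5.0, yield_=0.05):
    """Euler integration of sequential PCE -> TCE -> DCE -> VC -> ethene
    dechlorination, Monod in both the chlorinated compound and the electron
    donor (all parameter values here are illustrative, not calibrated)."""
    c = np.array(c0, dtype=float)     # PCE, TCE, DCE, VC concentrations
    eth, donor, x = 0.0, donor0, x0   # ethene, electron donor, biomass
    for _ in range(int(hours / dt)):
        rates = np.array([v * x * ci / (ks + ci) * donor / (kd + donor)
                          for v, ci in zip(vmax, c)])
        c += dt * (np.concatenate(([0.0], rates[:-1])) - rates)  # chain
        eth += dt * rates[-1]
        donor = max(donor - dt * rates.sum(), 0.0)  # donor consumed per step
        x += dt * yield_ * rates.sum()              # biomass grows on donor
    return c, eth, donor, x
```

The sensitivity noted in the abstract shows up directly here: scaling the initial biomass `x0` scales every dechlorination rate, and the donor term `donor / (kd + donor)` is where competition for electron donor would enter a fuller model.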
Interaction Models for Functional Regression
USSET, JOSEPH; STAICU, ANA-MARIA; MAITY, ARNAB
2015-01-01
A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data. PMID:26744549
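The design-matrix structure of such a model — main-effect columns from functional inner products of each predictor with a basis, plus a tensor-product block for the two-way interaction — can be sketched crudely. This stand-in uses a tiny fixed polynomial basis and Riemann sums in place of the paper's penalized regression splines; the function names are illustrative.

```python
import numpy as np

def scalar_on_function_design(curves1, curves2, t):
    """Design matrix for scalar-on-function regression with two functional
    predictors and their interaction: main effects are integrals of each
    curve against basis functions, the interaction block is their tensor
    product (a crude stand-in for penalized-spline / tensor-product bases)."""
    basis = np.column_stack([np.ones_like(t), t, t ** 2])  # 3 basis functions
    dt = t[1] - t[0]
    def feats(C):
        return (np.asarray(C) @ basis) * dt     # \int x_i(t) phi_j(t) dt
    F1, F2 = feats(curves1), feats(curves2)
    inter = np.einsum('ij,ik->ijk', F1, F2).reshape(len(F1), -1)
    return np.hstack([F1, F2, inter])           # mains + tensor interaction
```

Fitting the scalar response against this matrix by (penalized) least squares gives the additive model from the first two blocks and the interaction model when the tensor block is included, mirroring the comparison in the numerical studies above.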
The Linearized Kinetic Equation -- A Functional Analytic Approach
NASA Astrophysics Data System (ADS)
Brinkmann, Ralf Peter
2009-10-01
Kinetic models of plasma phenomena are difficult to address for two reasons. They i) are given as systems of nonlinear coupled integro-differential equations, and ii) involve generally six-dimensional distribution functions f(r,v,t). In situations which can be addressed in a linear regime, the first difficulty disappears, but the second one still poses considerable practical problems. This contribution presents an abstract approach to linearized kinetic theory which employs the methods of functional analysis. A kinetic electron equation with elastic electron-neutral interaction is studied in the electrostatic approximation. Under certain boundary conditions, a nonlinear functional, the kinetic free energy, exists which has the properties of a Lyapunov functional. In the linear regime, the functional becomes a quadratic form which motivates the definition of a bilinear scalar product, turning the space of all distribution functions into a Hilbert space. The linearized kinetic equation can then be described in terms of dynamical operators with well-defined properties. Abstract solutions can be constructed which have mathematically plausible properties. As an example, the formalism is applied to the multipole resonance probe (MRP). Under the assumption of a Maxwellian background distribution, the kinetic model of that diagnostic device is compared to a previously investigated fluid model.
NASA Astrophysics Data System (ADS)
Lepping, R. P.; Berdichevsky, D. B.; Wu, C.-C.
2017-02-01
We examine the average magnetic field magnitude (| B | ≡ B) within magnetic clouds (MCs) observed by the Wind spacecraft from 1995 to July 2015 to understand the difference between this B and the ideal B-profiles expected from using the static, constant-α, force-free, cylindrically symmetric model for MCs of Lepping, Jones, and Burlaga (J. Geophys. Res. 95, 11957, 1990, denoted here as the LJB model). We classify all MCs according to an assigned quality, Q0 (= 1, 2, 3, for excellent, good, and poor). There are a total of 209 MCs and 124 when only Q0 = 1, 2 cases are considered. The average normalized field with respect to the closest approach (CA) is stressed, where we separate cases into four CA sets centered at 12.5 %, 37.5 %, 62.5 %, and 87.5 % of the average radius; the averaging is done on a percentage-duration basis to treat all cases the same. Normalized B means that before averaging, the B for each MC at each point is divided by the LJB model-estimated B for the MC axis, B0. The actual averages for the 209 and 124 MC sets are compared to the LJB model, after an adjustment for MC expansion (e.g. Lepping et al. in Ann. Geophys. 26, 1919, 2008). This provides four separate difference-relationships, each fitted with a quadratic (Quad) curve of very small σ. Interpreting these Quad formulae should provide a comprehensive view of the variation in normalized B throughout the average MC, where we expect external front and rear compression to be part of its explanation. These formulae are also being considered for modifying the LJB model. This modification will be used in a scheme for forecasting the timing and magnitude of magnetic storms caused by MCs. Extensive testing of the Quad formulae shows that the formulae are quite useful in correcting individual MC B-profiles, especially for the first ≈ 1/3 of these MCs. However, the use of this type of B correction constitutes a (slight) violation of the force-free assumption used in the original LJB MC model.
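Mechanically, each "Quad" relationship is a quadratic fitted to a normalized B-profile versus percent duration through the cloud, characterized by its residual σ. The sketch below shows only that mechanics on an invented profile; the coefficients and noise level are made up and are not the paper's values.

```python
import numpy as np

# Illustrative only: a made-up average normalized profile B/B0 versus
# percent of MC duration, mimicking front/rear asymmetry, plus noise.
pct = np.linspace(0.0, 100.0, 21)                  # percent of MC duration
b_norm = 1.15 - 0.006 * pct + 0.00005 * pct ** 2   # invented "average" curve
b_norm += np.random.default_rng(0).normal(0.0, 0.01, pct.size)

coef = np.polyfit(pct, b_norm, 2)                  # quadratic (Quad) fit
quad = np.poly1d(coef)
residual_sigma = np.std(b_norm - quad(pct))        # the "very small sigma"
```

Correcting an individual MC's B-profile then amounts to dividing (or subtracting) the model-predicted profile by `quad(pct)` at each percentage point of its duration.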
General Green's function formalism for layered systems: Wave function approach
NASA Astrophysics Data System (ADS)
Zhang, Shu-Hui; Yang, Wen; Chang, Kai
2017-02-01
The single-particle Green's function (GF) of mesoscopic structures plays a central role in mesoscopic quantum transport. The recursive GF technique is a standard tool to compute this quantity numerically, but it lacks physical transparency and is limited to relatively small systems. Here we present a numerically efficient and physically transparent GF formalism for a general layered structure. In contrast to the recursive GF that directly calculates the GF through the Dyson equations, our approach converts the calculation of the GF to the generation and subsequent propagation of a scattering wave function emanating from a local excitation. This viewpoint not only allows us to reproduce existing results in a concise and physically intuitive manner, but also provides analytical expressions of the GF in terms of a generalized scattering matrix. This identifies the contributions from each individual scattering channel to the GF and hence allows this information to be extracted quantitatively from dual-probe STM experiments. The simplicity and physical transparency of the formalism further allows us to treat the multiple reflection analytically and derive an analytical rule to construct the GF of a general layered system. This could significantly reduce the computational time and enable quantum transport calculations for large samples. We apply this formalism to perform both analytical analysis and numerical simulation for the two-dimensional conductance map of a realistic graphene p-n junction. The results demonstrate the possibility of observing the spatially resolved interference pattern caused by negative refraction and further reveal a few interesting features, such as the distance-independent conductance and its quadratic dependence on the carrier concentration, as opposed to the linear dependence in uniform graphene.
NASA Astrophysics Data System (ADS)
Choubey, Sanjay K.; Mariadasse, Richard; Rajendran, Santhosh; Jeyaraman, Jeyakanthan
2016-12-01
Overexpression of HDAC1, a member of Class I histone deacetylases, is reported to be implicated in breast cancer. Epigenetic alteration in carcinogenesis has been the thrust of research for a few decades. Increased deacetylation leads to accelerated cell proliferation, cell migration, angiogenesis and invasion. HDAC1 is pronounced as a potential drug target towards the treatment of breast cancer. In this study, the biochemical potential of 6-aminonicotinamide derivatives was rationalized. A five-point pharmacophore model with one hydrogen-bond acceptor (A3), two hydrogen-bond donors (D5, D6), one ring (R12) and one hydrophobic group (H8) was developed using 6-aminonicotinamide derivatives. The pharmacophore hypothesis yielded a 3D-QSAR model with correlation coefficients (r² = 0.977, q² = 0.801) and it was externally validated with (r²pred = 0.929, r²cv = 0.850 and r²m = 0.856), which reveals the statistical significance of the model and its high predictive power. The model was then employed as a 3D search query for virtual screening against compound libraries (Zinc, Maybridge, Enamine, Asinex, Toslab, LifeChem and Specs) in order to identify novel scaffolds which can be experimentally validated to design future drug molecules. Density Functional Theory (DFT) at the B3LYP/6-31G* level was employed to explore the electronic features of the ligands involved in charge transfer during receptor-ligand interaction. Binding free energy (ΔGbind) calculation was done using MM/GBSA, which quantifies the affinity of the ligands towards the receptor.
Modeling the Schwarzschild Green's function
NASA Astrophysics Data System (ADS)
Mark, Zachary; Zimmerman, Aaron; Chen, Yanbei
2017-01-01
At sufficiently late times, gravitational waveforms from extreme mass ratio inspirals consist of a sum of quasinormal modes, power law tails, and modes related to the matter source, such as the horizon mode (Zimmerman and Chen 2011). Due to the complexity of the exact curved spacetime Green's function, making precise predictions about each component is difficult. We discuss the validity of a simple model for the scalar Schwarzschild Green's function. For observers at future null infinity, we model the Green's function as a simple function describing the direct radiation that matches to a single quasinormal mode at a retarded time related to the light ring location. As applications of the model, we describe the excitation process of the single quasinormal mode and the horizon mode, showing that the waveform from the inspiralling object is in precise correspondence to the response of a driven, damped harmonic oscillator.
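The model's shape — a direct piece that hands over to a single exponentially damped quasinormal-mode oscillation at a matching retarded time — is simple enough to sketch. The profile of the direct piece, the amplitudes, and the mode frequencies below are illustrative choices (values near the fundamental scalar quasinormal frequency of Schwarzschild, in units of inverse mass), not the paper's calibrated model.

```python
import numpy as np

def model_green(t, t_match, omega_r=0.37, omega_i=0.089, direct_amp=1.0):
    """Toy model of the scalar Schwarzschild Green's function at null
    infinity: a sharp 'direct' burst before t_match, then a single damped
    quasinormal-mode ringing, with amplitudes chosen to be continuous at
    the matching (retarded) time associated with the light ring."""
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    early = t < t_match
    g[early] = direct_amp * np.exp(-(t[early] - t_match) ** 2)  # direct piece
    late = ~early
    tau = t[late] - t_match
    g[late] = direct_amp * np.exp(-omega_i * tau) * np.cos(omega_r * tau)
    return g
```

Convolving such a Green's function against a source trajectory is what produces the driven, damped harmonic oscillator correspondence noted above: the QNM piece is literally the impulse response of that oscillator.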
Genetic and genomic approaches to understanding macrophage identity and function.
Glass, Christopher K
2015-04-01
A major goal of our laboratory is to understand the molecular mechanisms that underlie the development and functions of diverse macrophage phenotypes in health and disease. Recent studies using genetic and genomic approaches suggest a relatively simple model of collaborative and hierarchical interactions between lineage-determining and signal-dependent transcription factors that enable selection and activation of transcriptional enhancers that specify macrophage identity and function. In addition, we have found that it is possible to use natural genetic variation as a powerful tool for advancing our understanding of how the macrophage deciphers the information encoded by the genome to attain specific phenotypes in a context-dependent manner. Here, I will describe our recent efforts to extend genetic and genomic approaches to investigate the roles of distinct tissue environments in determining the phenotypes of different resident populations of macrophages.
Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R.; Liu, Yong; Yang, Yong; Eickhoff, Simon B.; Jiang, Tianzi
2016-01-01
Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a ‘core’ co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies. PMID:26037052
A factor analysis model for functional genomics
Kustra, Rafal; Shioda, Romy; Zhu, Mu
2006-01-01
Background Expression array data are used to predict biological functions of uncharacterized genes by comparing their expression profiles to those of characterized genes. While biologically plausible, this is both statistically and computationally challenging. Typical approaches are computationally expensive and ignore correlations among expression profiles and functional categories. Results We propose a factor analysis model (FAM) for functional genomics and give a two-step algorithm, using genome-wide expression data for yeast and a subset of Gene-Ontology Biological Process functional annotations. We show that the predictive performance of our method is comparable to the current best approach while our total computation was faster by a factor of 4000. We discuss the unique challenges in performance evaluation of algorithms used for genome-wide functional genomics. Finally, we discuss extensions to our method that can incorporate the inherent correlation structure of the functional categories to further improve predictive performance. Conclusion Our factor analysis model is a computationally efficient technique for functional genomics and provides a clear and unified statistical framework with potential for incorporating important gene ontology information to improve predictions. PMID:16630343
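The two-step shape of such an approach — first compress expression profiles into a low-dimensional latent-factor space, then transfer functional labels from characterized to uncharacterized genes within that space — can be sketched compactly. This uses an SVD in place of the paper's fitted factor analysis model and a nearest-neighbor vote in place of its second step; both substitutions and all names are assumptions.

```python
import numpy as np

def factor_scores(X, k):
    """Low-rank factor representation of an expression matrix via SVD
    (rows = genes, columns = arrays); a stand-in for the fitted FAM."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * s[:k]          # gene scores on k latent factors

def predict_function(scores, labels, gene, n_neighbors=5):
    """Label an uncharacterized gene by majority vote among its nearest
    characterized neighbors in factor space (labels: category or None)."""
    d = np.linalg.norm(scores - scores[gene], axis=1)
    order = [i for i in np.argsort(d) if labels[i] is not None and i != gene]
    votes = [labels[i] for i in order[:n_neighbors]]
    return max(set(votes), key=votes.count)
```

Working in the k-dimensional factor space rather than with full profiles is where the computational saving comes from: distances and fits involve k coordinates per gene instead of one per array.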
dos Santos, Sandra C.; Teixeira, Miguel C.; Dias, Paulo J.; Sá-Correia, Isabel
2014-01-01
Multidrug/Multixenobiotic resistance (MDR/MXR) is a widespread phenomenon with clinical, agricultural and biotechnological implications, where MDR/MXR transporters that are presumably able to catalyze the efflux of multiple cytotoxic compounds play a key role in the acquisition of resistance. However, although these proteins have been traditionally considered drug exporters, the physiological function of MDR/MXR transporters and the exact mechanism of their involvement in resistance to cytotoxic compounds are still open to debate. In fact, the wide range of structurally and functionally unrelated substrates that these transporters are presumably able to export has puzzled researchers for years. The discussion has now shifted toward the possibility of at least some MDR/MXR transporters exerting their effect as the result of a natural physiological role in the cell, rather than through the direct export of cytotoxic compounds, while the hypothesis that MDR/MXR transporters may have evolved in nature for other purposes than conferring chemoprotection has been gaining momentum in recent years. This review focuses on the drug transporters of the Major Facilitator Superfamily (MFS; drug:H+ antiporters) in the model yeast Saccharomyces cerevisiae. New insights into the natural roles of these transporters are described and discussed, focusing on the knowledge obtained or suggested by post-genomic research. The new information reviewed here provides clues into the unexpectedly complex roles of these transporters, including a proposed indirect regulation of the stress response machinery and control of membrane potential and/or internal pH, with a special emphasis on a genome-wide view of the regulation and evolution of MDR/MXR-MFS transporters. PMID:24847282
Introducing Linear Functions: An Alternative Statistical Approach
ERIC Educational Resources Information Center
Nolan, Caroline; Herbert, Sandra
2015-01-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…
Wu, Jian; Singla, Mithun; Olmi, Claudio; Shieh, Leang S; Song, Gangbing
2010-07-01
In this paper, a scalar sign function-based digital design methodology is developed for modeling and control of a class of analog nonlinear systems that are restricted by absolute value function constraints. Many real systems are subject to constraints described by non-smooth functions such as the absolute value function, and this non-smooth, nonlinear nature poses significant challenges to modeling and control. To overcome these difficulties, the idea proposed in this work is to use a scalar sign function approach to transform the original nonlinear and non-smooth model into a smooth nonlinear rational function model. Based on the resulting smooth model, a systematic digital controller design procedure is established, in which an optimal linearization method, LQR design, and digital implementation through an advanced digital redesign technique are sequentially applied. The example of tracking control of a piezoelectric actuator system is used throughout the paper to illustrate the proposed methodology.
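The core trick, writing |x| = x·sgn(x) and replacing the sign with a smooth surrogate, can be sketched in a few lines. The surrogate below is one common smooth (though not rational) choice, shown only to illustrate the idea; it is not necessarily the authors' exact construction.

```python
import math

def smooth_sign(x, eps=1e-3):
    """Smooth surrogate for sgn(x); approaches sgn(x) as eps -> 0."""
    return x / math.sqrt(x * x + eps * eps)

def smooth_abs(x, eps=1e-3):
    """Smooth surrogate for |x|, built from the identity |x| = x * sgn(x)."""
    return x * smooth_sign(x, eps)

# The surrogate is differentiable at 0 and converges to |x| away from 0.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  |x|={abs(x):.4f}  smooth_abs={smooth_abs(x):.4f}")
```

Once the absolute value is replaced by such a surrogate, standard smooth-model tools (linearization, LQR, digital redesign) become applicable.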
Pizio, Orest; Sokołowski, Stefan
2013-05-28
We apply a density functional theory to describe properties of a restricted primitive model of an ionic fluid in slit-like pores. The pore walls are modified by grafted chains built of uncharged or charged segments. We study the influence of this wall modification on the structure, adsorption, ion selectivity, and electric double layer capacitance of the confined ionic fluid. A brush built of uncharged segments acts as a collection of obstacles in the vicinity of the walls; consequently, separation of charges requires higher voltages than in models without brushes. At high grafting densities the formation of a crowding-type structure is inhibited. The double layer structure becomes more complex in various respects if the brushes are built of charged segments. In particular, the evolution of the brush height with the bulk fluid density and with the charge on the walls depends on the length of the blocks of charged spheres as well as on the distribution of charged species along the chains. We also investigated how the dependence of the double layer capacitance on the electrostatic potential (or on the charge on the walls) changes with grafting density, chain length, distribution of charges along the chain, bulk fluid density, and, finally, pore width. The capacitance vs. voltage curve changes from a camel-like to a bell-like shape as the bulk fluid density changes from low to moderate and high. If the bulk density is appropriately chosen, it is possible to alter the shape of this curve from double-hump to single-hump by changing the grafting density. Moreover, in narrow pores the capacitance curve can even exhibit three humps for a certain set of parameters describing the brush. This behavior illustrates how strong the influence of brushes on electric double layer properties can be, particularly for ionic fluids in narrow pores.
Choubey, Sanjay K; Jeyaraman, Jeyakanthan
2016-11-01
Deregulated epigenetic activity of histone deacetylase 1 (HDAC1) in tumor development and carcinogenesis marks it as a promising therapeutic target for cancer treatment. HDAC1 has recently captured the attention of researchers owing to its decisive role in multiple types of cancer. In the present study, a multistep framework combining ligand-based 3D-QSAR, molecular docking and molecular dynamics (MD) simulation was applied to explore potential compounds with good HDAC1 binding affinity. Four different pharmacophore hypotheses were obtained: Hypo1 (AADR), Hypo2 (AAAH), Hypo3 (AAAR) and Hypo4 (ADDR). The hypothesis Hypo1 (AADR), with two hydrogen bond acceptors (A), one hydrogen bond donor (D) and one aromatic ring (R), was selected to build the 3D-QSAR model on the basis of its statistical parameters. The pharmacophore hypothesis produced a statistically significant QSAR model, with a correlation coefficient r(2)=0.82 and a cross-validation correlation coefficient q(2)=0.70. External validation shows high predictive power, with an r(2)(o) value of 0.88 and an r(2)(m) value of 0.58, supporting further in silico studies. Virtual screening identifies ZINC70450932 as the most promising lead, with HDAC1 interacting with residues Asp99, His178, Tyr204, Phe205 and Leu271 to form seven hydrogen bonds. A high docking score (-11.17 kcal/mol) and a low docking energy (-37.84 kcal/mol) indicate the binding efficiency of the ligand. Binding free energy calculations were done using MM/GBSA to assess the affinity of the ligands towards the protein. Density functional theory was employed to explore electronic features of the ligands describing the intramolecular charge transfer reaction. Molecular dynamics simulations over 50 ns show a metal ion (Zn)-ligand interaction that is vital for inhibiting the enzymatic activity of the protein.
Food Protein Functionality--A New Model.
Foegeding, E Allen
2015-12-01
Proteins in foods serve dual roles as nutrients and structural building blocks. The concept of protein functionality has historically been restricted to nonnutritive functions--such as creating emulsions, foams, and gels--but this places sole emphasis on food quality considerations and potentially overlooks modifications that may also alter nutritional quality or allergenicity. A new model is proposed that addresses the function of proteins in foods based on the length scale(s) responsible for the function. Properties such as flavor binding, color, allergenicity, and digestibility are explained based on the structure of individual molecules; placing this functionality at the nano/molecular scale. At the next higher scale, applications in foods involving gelation, emulsification, and foam formation are based on how proteins form secondary structures that are seen at the nano and microlength scales, collectively called the mesoscale. The macroscale structure represents the arrangements of molecules and mesoscale structures in a food. Macroscale properties determine overall product appearance, stability, and texture. The historical approach of comparing among proteins based on forming and stabilizing specific mesoscale structures remains valid but emphasis should be on a common means for structure formation to allow for comparisons across investigations. For applications in food products, protein functionality should start with identification of functional needs across scales. Those needs are then evaluated relative to how processing and other ingredients could alter desired molecular scale properties, or proper formation of mesoscale structures. This allows for a comprehensive approach to achieving the desired function of proteins in foods.
Validation of Modeling Flow Approaching Navigation Locks
2013-08-01
[Front-matter figure-list residue; recoverable captions: Figure 9, Tools and instrumentation, bracket attached to rail; Figure 10, Tools and instrumentation, direction vernier; Figure 11, Plan A lock approach, upstream approach.]
Mixture models for distance sampling detection functions.
Miller, David L; Thomas, Len
2015-01-01
We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used "key function plus series adjustment" (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set.
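A finite mixture of half-normal key functions of the kind described can be written down directly; by construction it equals 1 at zero distance and is monotone non-increasing, so no constrained optimization is needed. The weights and scale parameters below are illustrative, not fitted values.

```python
import math

def mixture_detection(x, weights, sigmas):
    """Finite mixture of half-normal key functions:
    g(x) = sum_j w_j * exp(-x^2 / (2 sigma_j^2)), with sum_j w_j = 1,
    so g(0) = 1 and g is monotone non-increasing for x >= 0."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return sum(w * math.exp(-x * x / (2.0 * s * s))
               for w, s in zip(weights, sigmas))

# A "spiked" shape: one narrow plus one wide component, the situation
# where mixtures reportedly outperform key-plus-adjustment models.
w, s = (0.6, 0.4), (5.0, 40.0)
g = [mixture_detection(x, w, s) for x in range(0, 101, 10)]
print([round(v, 3) for v in g])
```

The narrow component produces the rapid initial drop in detectability; the wide component carries the long shoulder at larger distances.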
An approach to metering and network modeling
Adibi, M.M.; Clements, K.A.; Kafka, R.J.; Stovall, J.P.
1992-06-01
Estimation of the static state of an electric power network has become a standard function in real-time monitoring and control. Its purpose is to use the network model and process the metering data in order to determine an accurate and reliable estimate of the system state in the real-time environment. The models usually used assume that the network parameters and topology are free of errors and that the measurement system provides unbiased data with a known distribution. The network and metering models, however, contain errors which frequently result in either non-convergent behavior of the state estimator or exceedingly large residuals, reducing the level of confidence in the results. This paper describes an approach that minimizes these uncertainties by analyzing the data routinely collected at the power system control center. The approach will improve the reliability of the real-time database while reducing the state estimator installation and maintenance effort. 5 refs.
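State estimation of this kind is conventionally posed as a weighted least squares problem, with residuals flagging suspect metering or model errors. The sketch below uses an illustrative linear measurement model, not the paper's network or metering data.

```python
import numpy as np

# Illustrative linear measurement model z = H x + e (a stand-in for the
# linearized power-flow equations); weights are inverse error variances.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0]])
sigma = np.array([0.01, 0.01, 0.02, 0.02])   # assumed meter std devs
x_true = np.array([1.02, 0.97])              # "true" bus states (p.u.)
rng = np.random.default_rng(1)
z = H @ x_true + rng.normal(0.0, sigma)

# Weighted least squares: x_hat = (H^T W H)^{-1} H^T W z
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Large residuals would indicate biased meters or topology errors.
residual = z - H @ x_hat
print("estimate:", x_hat, " max |residual|:", np.abs(residual).max())
```

The point of the abstract's approach is precisely that, in practice, H and the error model themselves contain errors, which this idealized formulation assumes away.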
A Functional Analytic Approach to Group Psychotherapy
ERIC Educational Resources Information Center
Vandenberghe, Luc
2009-01-01
This article provides a particular view on the use of Functional Analytical Psychotherapy (FAP) in a group therapy format. This view is based on the author's experiences as a supervisor of Functional Analytical Psychotherapy Groups, including groups for women with depression and groups for chronic pain patients. The contexts in which this approach…
Functional integral approach for multiplicative stochastic processes.
Arenas, Zochil González; Barci, Daniel G
2010-05-01
We present a functional formalism to derive a generating functional for correlation functions of a multiplicative stochastic process represented by a Langevin equation. We deduce a path integral over a set of fermionic and bosonic variables without performing any time discretization. The usual prescriptions to define the Wiener integral appear in our formalism in the definition of Green's functions in the Grassmann sector of the theory. We also study nonperturbative constraints imposed on correlation functions by Becchi-Rouet-Stora (BRS) symmetry and by supersymmetry. We show that the specific prescription used to define the stochastic process is wholly contained in the tadpole diagrams; therefore, in a supersymmetric theory, the stochastic process is uniquely defined, since tadpole contributions cancel at all orders of perturbation theory.
A three-way approach for protein function classification
2017-01-01
The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. Technological advancements in the field of biology are improving our understanding of biological processes and regularly yield new features and characteristics that better describe the role of proteins. These anticipated features should not be neglected or overlooked when designing more effective classification techniques. A key issue in this context, which is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction that incorporate and take advantage of the ever-evolving biological information. In this article, we propose a three-way decision making approach which makes provision for seeking and incorporating future information. We considered probabilistic rough set based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture for protein function classification with probabilistic rough set based three-way decisions is proposed and explained. Experiments are carried out on a Saccharomyces cerevisiae species dataset obtained from the Uniprot database, with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while a similar level of accuracy is maintained. PMID:28234929
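The three-way decision rule itself is simple to sketch: each prediction is accepted, rejected, or deferred pending future information, depending on where its estimated probability falls relative to two thresholds. The threshold values below are illustrative placeholders for the values GTRS/ITRS would derive.

```python
def three_way_decide(p, alpha=0.75, beta=0.30):
    """Three-way decision rule on the estimated probability p that a
    protein carries a given function: accept, reject, or defer for
    future information. Thresholds alpha/beta are illustrative; GTRS
    and ITRS derive them from game-theoretic or information-theoretic
    criteria."""
    assert 0.0 <= beta < alpha <= 1.0
    if p >= alpha:
        return "accept"
    if p <= beta:
        return "reject"
    return "defer"

probs = [0.95, 0.60, 0.10, 0.80, 0.40]
decisions = [three_way_decide(p) for p in probs]
print(decisions)
```

As more biological information accumulates, probabilities are pushed past the thresholds and the deferred region empties, which is the behavior the experiments report.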
Parametric modeling of quantile regression coefficient functions.
Frumento, Paolo; Bottai, Matteo
2016-03-01
Estimating the conditional quantiles of outcome variables of interest is a frequent task in many research areas, and quantile regression is foremost among the methods used. The coefficients of a quantile regression model depend on the order of the quantile being estimated; for example, the coefficients for the median are generally different from those of the 10th centile. In this article, we describe an approach to modeling the regression coefficients as parametric functions of the order of the quantile. This approach may have advantages in terms of parsimony and efficiency, and may expand the potential of statistical modeling. Goodness-of-fit measures and testing procedures are discussed, and the results of a simulation study are presented. We apply the method to analyze the data that motivated this work. The described method is implemented in the qrcm R package.
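The central idea, coefficients that are themselves parametric functions of the quantile order p, can be illustrated in a few lines. The coefficient forms and values below are illustrative assumptions, not output of the qrcm package.

```python
# Quantile regression coefficients modeled as parametric functions of
# the order p: here beta_j(p) = a_j + b_j * p (illustrative choice).
def beta0(p):
    return 1.0 + 2.0 * p      # intercept rises with the quantile order

def beta1(p):
    return 0.5 + 0.1 * p      # slope varies only slightly across p

def conditional_quantile(p, x):
    """Q(p | x) = beta0(p) + beta1(p) * x, for 0 < p < 1."""
    return beta0(p) + beta1(p) * x

x = 2.0
qs = [conditional_quantile(p, x) for p in (0.1, 0.5, 0.9)]
print([round(q, 3) for q in qs])
```

Instead of estimating a separate coefficient vector at every quantile of interest, one estimates the few parameters a_j, b_j once, which is where the parsimony and efficiency gains come from; validity requires Q(p|x) to be non-decreasing in p over the covariate range.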
Functional Approaches to Written Text: Classroom Applications.
ERIC Educational Resources Information Center
Miller, Tom, Ed.
Noting that little in language can be understood without taking into consideration the wider picture of communicative purpose, content, context, and audience, this book addresses practical uses of various approaches to discourse analysis. Several assumptions run through the chapters: knowledge is socially constructed; the manner in which language…
Translation: Towards a Critical-Functional Approach
ERIC Educational Resources Information Center
Sadeghi, Sima; Ketabi, Saeed
2010-01-01
The controversy over the place of translation in the teaching of English as a Foreign Language (EFL) is a thriving field of inquiry. Many older language teaching methodologies such as the Direct Method, the Audio-lingual Method, and Natural and Communicative Approaches, tended to either neglect the role of translation, or prohibit it entirely as a…
Numerical approaches to combustion modeling
Oran, E.S.; Boris, J.P.
1991-01-01
This book presents a series of topics ranging from microscopic combustion physics to several aspects of macroscopic reactive-flow modeling. As the reader progresses into the book, the successive chapters generally include a wider range of physical and chemical processes in the mathematical model. Including more processes, however, usually means that they will be represented phenomenologically at a cruder level. In practice the detailed microscopic models and simulations are often used to develop and calibrate the phenomenologies used in the macroscopic models. The book first describes computations of the most microscopic chemical processes, then considers laminar flames and detonation modeling, and ends with computations of complex, multiphase combustion systems.
Loop expansion of the average effective action in the functional renormalization group approach
NASA Astrophysics Data System (ADS)
Lavrov, Peter M.; Merzlikin, Boris S.
2015-10-01
We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group method based on the concept of composite fields, with regulator functions as their most essential ingredients. We demonstrate explicitly the principal difference between the properties of effective actions in these two approaches, which appears already at the one-loop level in a simple gauge model.
Functional CAR models for large spatially correlated functional datasets.
Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S
2016-01-01
We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.
Work Functions for Models of Scandate Surfaces
NASA Technical Reports Server (NTRS)
Mueller, Wolfgang
1997-01-01
The electronic structure, surface dipole properties, and work functions of scandate surfaces have been investigated using the fully relativistic scattered-wave cluster approach. Three different types of model surfaces are considered: (1) a monolayer of Ba-Sc-O on W(100), (2) Ba or BaO adsorbed on Sc2O3 + W, and (3) BaO on Sc2O3 + WO3. Changes in the work function due to Ba or BaO adsorption on the different surfaces are calculated by employing the depolarization model of interacting surface dipoles. The largest work function change and the lowest work function of 1.54 eV are obtained for Ba adsorbed on the Sc-O monolayer on W(100). The adsorption of Ba on Sc2O3 + W does not lead to a low work function, but the adsorption of BaO results in a work function of about 1.6-1.9 eV. BaO adsorbed on Sc2O3 + WO3, or scandium tungstates, may also lead to low work functions.
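The depolarization model of interacting surface dipoles is commonly written in the Topping form; the sketch below assumes that standard form with illustrative parameter values, and is not necessarily identical to the paper's implementation.

```python
# Topping-style depolarization: the effective dipole moment per adsorbate
# shrinks as coverage increases, mu_eff = mu0 / (1 + 9 * alpha * N**1.5),
# reducing the per-dipole work-function change. All parameter values are
# illustrative, not fitted to the scandate surfaces of the abstract.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def work_function_change(N, mu0, alpha):
    """Delta(phi) = N * mu_eff / eps0 in volts (numerically eV per
    electron), with N in dipoles/m^2, mu0 in C*m, and alpha the
    polarizability volume in m^3."""
    mu_eff = mu0 / (1.0 + 9.0 * alpha * N**1.5)
    return N * mu_eff / EPS0

mu0 = 1e-29      # ~3 debye, a plausible adsorbate surface dipole
alpha = 5e-30    # ~5 cubic angstroms polarizability volume
coverages = [1e17, 5e17, 1e18, 5e18]   # dipoles per m^2
dphi = [work_function_change(N, mu0, alpha) for N in coverages]
print([round(v, 3) for v in dphi])
```

The per-dipole contribution falls monotonically with coverage, which is why work-function lowering saturates rather than growing linearly with the amount of Ba or BaO adsorbed.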
Linearized Functional Minimization for Inverse Modeling
Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco
2012-06-21
Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the vectors of values of hydraulic conductivity and hydraulic head fields defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
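The functional described above combines a data-misfit term with a smoothness penalty. The sketch below is a heavily reduced linear analogue (Tikhonov-style smoothing of sparse conductivity observations) that omits the flow-equation coupling and the piecewise-smooth penalty; the grid, data, and regularization weight are illustrative assumptions.

```python
import numpy as np

# Minimize J(k) = ||A k - d||^2 + lam * ||D k||^2 : a data-misfit term
# plus a smoothness penalty, the quadratic core of the inverse problem.
n = 50
x = np.linspace(0.0, 1.0, n)
k_true = np.where(x < 0.5, 1.0, 3.0)          # two "lithofacies"

rng = np.random.default_rng(2)
obs = rng.choice(n, size=12, replace=False)   # sparse measurement sites
A = np.zeros((obs.size, n))
A[np.arange(obs.size), obs] = 1.0             # sampling operator
d = k_true[obs] + 0.05 * rng.normal(size=obs.size)

D = np.diff(np.eye(n), axis=0)                # first-difference operator
lam = 0.5
# Quadratic functional -> linear normal equations, solved directly.
k_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ d)

rmse = np.sqrt(np.mean((k_hat - k_true) ** 2))
print(f"RMSE of reconstruction: {rmse:.3f}")
```

In the paper's full formulation the map from conductivity to head is nonlinear, hence the local linearization and iterative solver; each iteration, however, reduces to a solve of this quadratic type.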
Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon
2016-07-27
Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to systematically design stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, types and distribution of functional groups, by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the types, number, and distribution of oxygen functional groups. First, the carbonyl group induces a higher redox potential than the hydroxyl group. Second, more carbonyl groups result in a higher redox potential. Lastly, a locally concentrated distribution of carbonyl groups is more beneficial for a high redox potential than a uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials.
Transfer function modeling of damping mechanisms in distributed parameter models
NASA Technical Reports Server (NTRS)
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
Statistical approaches and software for clustering islet cell functional heterogeneity
Wills, Quin F.; Boothe, Tobias; Asadi, Ali; Ao, Ziliang; Warnock, Garth L.; Kieffer, Timothy J.
2016-01-01
ABSTRACT Worldwide efforts are underway to replace or repair lost or dysfunctional pancreatic β-cells to cure diabetes. However, it is unclear what the final product of these efforts should be, as β-cells are thought to be heterogeneous. To enable the analysis of β-cell heterogeneity in an unbiased and quantitative way, we developed model-free and model-based statistical clustering approaches, and created new software called TraceCluster. Using an example data set, we illustrate the utility of these approaches by clustering dynamic intracellular Ca2+ responses to high glucose in ∼300 simultaneously imaged single islet cells. Using feature extraction from the Ca2+ traces on this reference data set, we identified 2 distinct populations of cells with β-like responses to glucose. To the best of our knowledge, this report represents the first unbiased cluster-based analysis of human β-cell functional heterogeneity of simultaneous recordings. We hope that the approaches and tools described here will be helpful for those studying heterogeneity in primary islet cells, as well as excitable cells derived from embryonic stem cells or induced pluripotent cells. PMID:26909740
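A minimal sketch of the feature-extraction-plus-clustering idea on synthetic traces follows; the trace model, the two features, and the tiny 2-means loop are illustrative assumptions, not TraceCluster's implementation.

```python
import random

random.seed(0)

def make_trace(kind, n=60):
    """Synthetic Ca2+ trace: 'responder' cells step up after a stimulus
    at frame 20, other cells stay near baseline (stand-ins for imaging
    data)."""
    base = [random.gauss(1.0, 0.05) for _ in range(n)]
    if kind == "responder":
        return base[:20] + [v + 1.0 for v in base[20:]]
    return base

def features(trace):
    """Model-free feature extraction: (mean level, peak amplitude)."""
    return (sum(trace) / len(trace), max(trace))

traces = [make_trace("responder") for _ in range(15)] + \
         [make_trace("flat") for _ in range(15)]
feats = [features(t) for t in traces]

# Tiny 2-means clustering on the extracted features.
centers = [feats[0], feats[-1]]
for _ in range(10):
    groups = [[], []]
    for f in feats:
        d = [sum((a - b) ** 2 for a, b in zip(f, c)) for c in centers]
        groups[d.index(min(d))].append(f)
    centers = [tuple(sum(v) / len(g) for v in zip(*g)) for g in groups if g]

labels = [min(range(len(centers)),
              key=lambda k: sum((a - b) ** 2 for a, b in zip(f, centers[k])))
          for f in feats]
print(labels)
```

Real pipelines extract many more features per trace (rise time, oscillation frequency, area under the curve) and compare model-free clusterings like this one against model-based alternatives.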
Functional approach to high-throughput plant growth analysis
2013-01-01
Method Taking advantage of the current rapid development in imaging systems and computer vision algorithms, we present HPGA, a high-throughput phenotyping platform for plant growth modeling and functional analysis, which yields a better understanding of energy distribution with regard to the balance between growth and defense. HPGA has two components, PAE (Plant Area Estimation) and GMA (Growth Modeling and Analysis). In PAE, by taking the complex leaf overlap problem into consideration, the area of every plant is measured from top-view images in four steps. Given the abundant measurements obtained with PAE, in the second module, GMA, a nonlinear growth model is applied to generate growth curves, followed by functional data analysis. Results Experimental results on the model plant Arabidopsis thaliana show that, compared to an existing approach, HPGA halves the error rate of plant area measurement. The application of HPGA to cfq mutant plants under fluctuating light reveals a correlation between low photosynthetic rates and small plant area (compared to wild type), which raises the hypothesis that knocking out cfq changes the sensitivity of the energy distribution under fluctuating light conditions to repress leaf growth. Availability HPGA is available at http://www.msu.edu/~jinchen/HPGA. PMID:24565437
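The kind of nonlinear growth model GMA fits can be sketched with a logistic curve; the parameter values are illustrative, and the relative growth rate is shown as one example of a functional computed from the fitted curve.

```python
import math

def logistic_area(t, K, r, t0):
    """Nonlinear growth model for projected plant area:
    A(t) = K / (1 + exp(-r * (t - t0))). K is the asymptotic area,
    r the growth rate, t0 the inflection day (values illustrative)."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def relative_growth_rate(t, K, r, t0, h=1e-5):
    """RGR(t) = A'(t) / A(t), a standard functional of the growth
    curve, here via a central finite difference."""
    a1 = logistic_area(t + h, K, r, t0)
    a0 = logistic_area(t - h, K, r, t0)
    return (a1 - a0) / (2.0 * h) / logistic_area(t, K, r, t0)

K, r, t0 = 20.0, 0.4, 12.0      # cm^2, 1/day, day of inflection
areas = [logistic_area(t, K, r, t0) for t in range(0, 25, 4)]
print([round(a, 2) for a in areas])
print(round(relative_growth_rate(2.0, K, r, t0), 4))
```

Functional data analysis then operates on the fitted curves (or functionals like RGR) rather than on the raw per-day area measurements, which smooths out imaging noise.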
Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach
ERIC Educational Resources Information Center
Frisby, Craig L.; Parkin, Jason R.
2007-01-01
In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…
Quantum thermodynamics: a nonequilibrium Green's function approach.
Esposito, Massimiliano; Ochoa, Maicol A; Galperin, Michael
2015-02-27
We establish the foundations of a nonequilibrium theory of quantum thermodynamics for noninteracting open quantum systems strongly coupled to their reservoirs within the framework of the nonequilibrium Green's functions. The energy of the system and its coupling to the reservoirs are controlled by a slow external time-dependent force treated to first order beyond the quasistatic limit. We derive the four basic laws of thermodynamics and characterize reversible transformations. Stochastic thermodynamics is recovered in the weak coupling limit.
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
ONION: Functional Approach for Integration of Lipidomics and Transcriptomics Data
Piwowar, Monika; Jurkowski, Wiktor
2015-01-01
To date, the massive quantity of data generated by high-throughput techniques has not yet received the bioinformatics treatment required to make full use of it. This is partially due to a mismatch in experimental and analytical study design, but primarily due to a lack of adequate analytical approaches. When integrating multiple data types, e.g. transcriptomics and metabolomics, multidimensional statistical methods are currently the techniques of choice. Typical statistical approaches, such as canonical correlation analysis (CCA), that are applied to find associations between metabolites and genes fail due to the small number of observations (e.g. conditions, diet etc.) in comparison to data size (number of genes, metabolites). Modifications designed to cope with this issue are not ideal, as they either add simulated data, at the cost of p-value computation, or prune variables, losing potentially valid information. Instead, our approach makes use of verified or putative molecular interactions or functional associations to guide analysis. The workflow includes dividing the data sets to reach the expected data structure, statistical analysis within groups, and interpretation of results. By applying pathway and network analysis, data obtained by various platforms are grouped with moderate stringency to avoid functional bias. As a consequence, CCA and other multivariate models can be applied to calculate robust statistics and provide easy-to-interpret associations between metabolites and genes, to leverage understanding of the metabolic response. Effective integration of lipidomics and transcriptomics is demonstrated on publicly available murine nutrigenomics data sets. We are able to demonstrate that our approach improves detection of genes related to lipid metabolism, in comparison to applying statistics alone. This is measured by an increased percentage of explained variance (95% vs. 75–80%) and by identifying new metabolite-gene associations related to lipid
A system decomposition approach to the design of functional observers
NASA Astrophysics Data System (ADS)
Fernando, Tyrone; Trinh, Hieu
2014-09-01
This paper reports a system decomposition that allows the construction of a minimum-order functional observer using a state observer design approach. The system decomposition translates the functional observer design problem to that of a state observer for a smaller decomposed subsystem. Functional observability indices are introduced, and a closed-form expression for the minimum order required for a functional observer is derived in terms of those functional observability indices.
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Cooper, D. Craig; Palmer, Carl D.; Smith, Robert W.; McLing, Travis L.
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations based on a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and the equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperatures during exploration and early development.
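A conventional geothermometer of the kind these results are compared against can be illustrated with the classic quartz (no steam loss) silica correlation. The coefficients are the standard published Fournier-type values, quoted here only to illustrate the single-constituent approach that the multicomponent method goes beyond.

```python
import math

def quartz_geothermometer(sio2_mg_per_kg):
    """Classic quartz (no steam loss) geothermometer, T in deg C:
    T = 1309 / (5.19 - log10(C)) - 273.15, with C the dissolved silica
    concentration in mg/kg. A single-constituent correlation, valid
    only over a limited concentration/temperature range."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

for c in (100.0, 200.0, 300.0):
    print(f"SiO2 = {c:5.0f} mg/kg  ->  T = {quartz_geothermometer(c):6.1f} C")
```

Boiling concentrates silica in the residual liquid, so applying this correlation to a boiled sample overestimates reservoir temperature, one of the error sources the forward models quantify.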
Functional models of power electronic components for system studies
NASA Technical Reports Server (NTRS)
Tam, Kwa-Sur; Yang, Lifeng; Dravid, Narayan
1991-01-01
A novel approach to model power electronic circuits has been developed to facilitate simulation studies of system-level issues. The underlying concept for this approach is to develop an equivalent circuit, the functional model, that performs the same functions as the actual circuit but whose operation can be simulated using a larger time step size. Owing to the larger time step size and the reduction in model complexity, the computation time required by a functional model is significantly shorter than that required by alternative approaches. The authors present this novel modeling approach and discuss the functional models of two major power electronic components, the DC/DC converter unit and the load converter, that are being considered by NASA for use in the Space Station Freedom electric power system. The validity of these models is established by comparing the simulation results with available experimental data and other simulation results obtained by using a more established modeling approach. The usefulness of this approach is demonstrated by incorporating these models into a power system model and simulating the system responses and interactions between components under various conditions.
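The functional-model concept can be illustrated with a classic averaged converter model: switching details are replaced by a duty-cycle average so the simulator can take much larger time steps. The ideal buck converter below is a generic textbook sketch, not one of the paper's NASA component models.

```python
# Averaged ("functional") model of an ideal buck DC/DC converter:
# the fast switching waveform is replaced by its duty-cycle average,
# so system studies can use large time steps. Lossless by default.
def buck_average(v_in, duty):
    """Average output voltage of an ideal buck converter."""
    assert 0.0 <= duty <= 1.0, "duty cycle must be in [0, 1]"
    return duty * v_in

def input_current(p_out, v_in, efficiency=1.0):
    """Power balance: the functional model conserves (efficiency-scaled) power."""
    return p_out / (efficiency * v_in)
```

A system-level simulator would call these algebraic relations once per (large) time step instead of resolving every kHz-scale switching event.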
Distinguishing treatment from research: a functional approach
Lewens, T
2006-01-01
The best way to distinguish treatment from research is by their functions. This mode of distinction fits well with the basic ethical work that needs to be carried out. The distinction needs to serve as an ethical flag, highlighting areas in which the goals of doctors and patients are more likely than usual to diverge. The distinction also allows us to illuminate and understand some otherwise puzzling elements of debates on research ethics: it shows the peculiarity of exclusive conceptions of the distinction between research and treatment; it allows us to frame questions about therapeutic obligations in the research context, and it allows us to consider whether there may be research obligations in the therapeutic context. PMID:16816045
Sturmian function approach and {bar N}N bound states
Yan, Y.; Tegen, R.; Gutsche, T.; Faessler, A.
1997-09-01
A suitable numerical approach based on Sturmian functions is employed to solve the N̄N bound state problem for local and nonlocal potentials. The approach accounts for both the strong short-range nuclear potential and the long-range Coulomb force and provides directly the wave function of protonium and N̄N deep bound states with complex eigenvalues E = E_R − i(Γ/2). The spectrum of N̄N bound states has two parts: the atomic states, bound by several keV, and the deep bound states, which are bound by several hundred MeV. The observed very small hyperfine splitting of the 1s level and the 1s and 2p decay widths are reasonably well reproduced by both the Paris and Bonn potentials (supplemented with a microscopically derived quark annihilation potential), although there are differences in magnitude and level ordering. We present further arguments for the identification of the ^13PF_2 deep bound state with the exotic tensor meson f_2(1520). Both investigated models can accommodate the f_2(1520) but differ greatly in the total number of levels and in their ordering. The model based on the Paris potential predicts the ^13P_0 level slightly below 1.1 GeV while the model based on the Bonn potential puts this state below 0.8 GeV. It remains to be seen if this state can be identified with a scalar partner of the f_2(1520). © 1997 The American Physical Society
Interactively Open Autonomy Unifies Two Approaches to Function
NASA Astrophysics Data System (ADS)
Collier, John
2004-08-01
Functionality is essential to any form of anticipation beyond simple directedness at an end. In the literature on function in biology, there are two distinct approaches. One, the etiological view, places the origin of function in selection, while the other, the organizational view, individuates function by organizational role. Both approaches have well-known advantages and disadvantages. I propose a reconciliation of the two approaches, based in an interactivist approach to the individuation and stability of organisms. The approach was suggested by Kant in the Critique of Judgment, but since it requires, on his account, the identification of a new form of causation, it has not been accessible to analytical techniques. I proceed by constructing the required concept to fit certain design requirements. This construction builds on concepts introduced in my previous four talks to these meetings.
Matrix model approach to cosmology
NASA Astrophysics Data System (ADS)
Chaney, A.; Lu, Lei; Stern, A.
2016-03-01
We perform a systematic search for rotationally invariant cosmological solutions to toy matrix models. These models correspond to the bosonic sector of Lorentzian Ishibashi, Kawai, Kitazawa and Tsuchiya (IKKT)-type matrix models in dimensions d less than ten, specifically d = 3 and d = 5. After taking a continuum (or commutative) limit they yield (d − 1)-dimensional Poisson manifolds. The manifolds have a Lorentzian induced metric which can be associated with closed, open, or static space-times. For d = 3, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a resolution of cosmological singularities, at least within the context of the toy matrix models. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the d = 3 solutions have analogues in higher dimensions. The case of d = 5, in particular, has the potential for yielding realistic four-dimensional cosmologies in the continuum limit. We find four-dimensional de Sitter (dS_4) or anti-de Sitter (AdS_4) solutions when a totally antisymmetric term is included in the matrix action. A nontrivial Poisson structure is attached to these manifolds which represents the lowest order effect of noncommutativity. For the case of AdS_4, we find one particular limit where the lowest order noncommutativity vanishes at the boundary, but not in the interior.
Nonrelativistic approaches derived from point-coupling relativistic models
Lourenco, O.; Dutra, M.; Delfino, A.; Sa Martins, J. S.
2010-03-15
We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A = B = 0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ^3, leads to parametrizations for which σ = 1.
2015-08-31
Inversion, TREX13 data analysis and model-data comparisons. Distribution Statement A: Approved for public release; distribution unlimited. It was also suggested to consider analytical expressions for reverberation and methods of its inversion for environmental parameters. Reference: B.T. Hefner (2013), “Physics-based inversion of multibeam sonar data for seafloor characterization”, J. Acoust. Soc. Amer., 134(4), Pt. 2, p. 4240.
Computational modelling approaches to vaccinology.
Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo
2015-02-01
Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, that is, from the molecular to the organism and even the population level.
Hardy, Simon; Robillard, Pierre N
2004-12-01
Petri nets are a discrete event simulation approach developed for system representation, in particular for capturing concurrency and synchronization properties. Various extensions to the original theory of Petri nets have been used for modeling molecular biology systems and metabolic networks. These extensions are stochastic, colored, hybrid and functional. This paper carries out an initial review of the various modeling approaches based on Petri nets found in the literature, and of the biological systems that have been successfully modeled with these approaches. Moreover, the modeling goals and possibilities of qualitative analysis and system simulation of each approach are discussed.
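A minimal discrete Petri net can make the token-firing semantics concrete: markings are token counts on places, and a transition fires when every input place holds enough tokens. The two-transition "reaction" net below is a toy illustration, not a model from the reviewed literature.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p, n in inputs.items():       # consume input tokens
            self.marking[p] -= n
        for p, n in outputs.items():      # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + n

# Toy metabolic steps: A + B -> C, then C -> D.
net = PetriNet({"A": 2, "B": 1, "C": 0, "D": 0})
net.add_transition("bind", {"A": 1, "B": 1}, {"C": 1})
net.add_transition("convert", {"C": 1}, {"D": 1})
net.fire("bind")
net.fire("convert")
```

The stochastic, colored, hybrid and functional extensions mentioned above all build on exactly this firing rule, adding rates, token types, continuous places, or marking-dependent arc weights respectively.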
Social learning in Models and Cases - an Interdisciplinary Approach
NASA Astrophysics Data System (ADS)
Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger
2016-04-01
Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories on social learning in innovation, diffusion and transition research. We present theoretical understandings of social learning in techno-economic and agent-based modelling. Then we elaborate on empirical research on social learning in transition case studies. We identify and synthesize key dimensions of social learning in transition case studies. In the following, we bridge between the more formal, generalising modelling approaches to social learning processes and the more descriptive, individualising case study approaches by distilling the case study analysis into a visual guide on the functional forms of social learning typically identified in the cases. We then vary, by way of example, functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach, both methodologically and empirically.
Functional infrared imaging in medicine: a quantitative diagnostic approach.
Merla, A; Romani, G L
2006-01-01
The role and the potentialities of high-resolution infrared thermography, combined with bio-heat modelling, have been widely described in recent years in a variety of biomedical applications. Quantitative assessment over time of the cutaneous temperature and/or of other biomedical parameters related to the temperature (e.g., cutaneous blood flow, thermal inertia, sympathetic skin response) allows for a better and more complete understanding and description of the functional processes involved and/or altered in the presence of ailments that interfere with regular cutaneous thermoregulation. Such an approach to thermal medical imaging requires both new methodologies and tools, like diagnostic paradigms and appropriate software for data analysis, and even a completely new way to look at data processing. In this paper, some of the studies recently made in our laboratory are presented and described, with the general intent of introducing the reader to these innovative methods for obtaining quantitative diagnostic tools based on thermal imaging.
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
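The Markov-model core of such an approach can be sketched on the smallest nontrivial example: a hypothetical two-component parallel system with constant per-component failure rate, whose continuous-time Markov chain is integrated numerically to get the system unreliability. This toy generator is dense and tiny; the paper's point is precisely that realistic state spaces require sparse storage and truncation.

```python
import numpy as np

# Hypothetical 2-component parallel system, constant failure rate lam
# per component. CTMC states: 0 = both up, 1 = one up, 2 = system failed.
def build_generator(lam):
    Q = np.zeros((3, 3))
    Q[0, 1] = 2 * lam      # either of the two components fails
    Q[1, 2] = lam          # remaining component fails (absorbing state 2)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def unreliability(lam, t, steps=20000):
    """P(absorbed in the failed state by time t), explicit Euler on p' = p Q."""
    Q = build_generator(lam)
    p = np.array([1.0, 0.0, 0.0])   # start with both components up
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return p[2]
```

For this toy case the closed form is (1 − e^(−λt))², which the numerical solution should reproduce; for thousands of states one would swap the dense array for a sparse matrix and a stiff ODE solver.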
Defining and Applying a Functionality Approach to Intellectual Disability
ERIC Educational Resources Information Center
Luckasson, R.; Schalock, R. L.
2013-01-01
Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…
Roth, Jason L.; Capel, Paul D.
2012-01-01
Crop agriculture occupies 13 percent of the conterminous United States. Agricultural management practices, such as crop and tillage types, affect the hydrologic flow paths through the landscape. Some agricultural practices, such as drainage and irrigation, create entirely new hydrologic flow paths upon the landscapes where they are implemented. These hydrologic changes can affect the magnitude and partitioning of water budgets and sediment erosion. Given the wide degree of variability amongst agricultural settings, changes in the magnitudes of hydrologic flow paths and sediment erosion induced by agricultural management practices commonly are difficult to characterize, quantify, and compare using only field observations. The Water Erosion Prediction Project (WEPP) model was used to simulate two landscape characteristics (slope and soil texture) and three agricultural management practices (land cover/crop type, tillage type, and selected agricultural land management practices) to evaluate their effects on the water budgets of and sediment yield from agricultural lands. An array of sixty-eight 60-year simulations were run, each representing a distinct natural or agricultural scenario with various slopes, soil textures, crop or land cover types, tillage types, and select agricultural management practices on an isolated 16.2-hectare field. Simulations were made to represent two common agricultural climate regimes: arid with sprinkler irrigation and humid. These climate regimes were constructed with actual climate and irrigation data. The results of these simulations demonstrate the magnitudes of potential changes in water budgets and sediment yields from lands as a result of landscape characteristics and agricultural practices adopted on them. These simulations showed that variations in landscape characteristics, such as slope and soil type, had appreciable effects on water budgets and sediment yields. As slopes increased, sediment yields increased in both the arid and
Combining Formal and Functional Approaches to Topic Structure
ERIC Educational Resources Information Center
Zellers, Margaret; Post, Brechtje
2012-01-01
Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to…
Defining mental disorder. Exploring the 'natural function' approach.
Varga, Somogy
2011-01-21
Due to several socio-political factors, to many psychiatrists only a strictly objective definition of mental disorder, free of value components, seems really acceptable. In this paper, I will explore a variant of such an objectivist approach to defining mental disorder, natural function objectivism. Proponents of this approach make recourse to the notion of natural function in order to reach a value-free definition of mental disorder. The exploration of Christopher Boorse's 'biostatistical' account of natural function (1) will be followed by an investigation of the 'hybrid naturalism' approach to natural functions by Jerome Wakefield (2). In the third part, I will explore two proposals that call into question the whole attempt to define mental disorder (3). I will conclude that while 'natural function objectivism' accounts fail to provide the backdrop for a reliable definition of mental disorder, there is no compelling reason to conclude that a definition cannot be achieved. PMID:21255405
Searching for new mathematical growth model approaches for Listeria monocytogenes.
Valero, A; Hervás, C; García-Gimeno, R M; Zurera, G
2007-01-01
Different secondary modeling approaches for the estimation of Listeria monocytogenes growth rate as a function of temperature (4 to 30 degrees C), citric acid (0% to 0.4% w/v), and ascorbic acid (0% to 0.4% w/v) are presented. Response surface (RS) and square-root (SR) models are proposed together with different artificial neural networks (ANN) based on product function units (PU), sigmoidal function units (SU), and a novel approach based on the use of hybrid function units (PSU), which result from a combination of PU and SU. In this study, a significantly better goodness-of-fit was obtained with the ANN models presented, reflected in the lower SEP values obtained (< 24.23 for both training and generalization datasets). Among these models, the SU model provided the best generalization capacity, displaying lower RMSE and SEP values with fewer parameters compared to the PU and PSU models. The bias factor (B(f)) and accuracy factor (A(f)) of the mathematical validation dataset were above 1 in all cases, providing fail-safe predictions. The balance between generalization properties and ease of use is the main consideration when applying secondary modeling approaches to achieve accurate predictions about the behavior of microorganisms.
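A square-root secondary model of the kind compared above can be sketched in a few lines, together with the bias factor used to judge fail-safe behaviour. The coefficients b and Tmin are illustrative values, not the study's fitted parameters, and the acid terms are omitted for brevity.

```python
import numpy as np

# Ratkowsky-type square-root secondary model: sqrt(mu_max) = b * (T - Tmin).
def sqrt_model(T, b=0.03, Tmin=1.0):
    """Predicted maximum specific growth rate (1/h) at temperature T (deg C)."""
    s = b * (T - Tmin)
    return max(s, 0.0) ** 2     # no growth below Tmin

def bias_factor(pred, obs):
    """Bf = 10**mean(log10(pred/obs)); Bf > 1 means fail-safe over-prediction."""
    return 10 ** np.mean(np.log10(np.asarray(pred) / np.asarray(obs)))
```

The RS and ANN alternatives differ only in the functional form mapping (T, citric acid, ascorbic acid) to the growth rate; the Bf/Af validation step is shared by all of them.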
Hybrid approaches to physiologic modeling and prediction
NASA Astrophysics Data System (ADS)
Olengü, Nicholas O.; Reifman, Jaques
2005-05-01
This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
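The hybrid idea can be sketched as a first-principles prediction corrected by a data-driven model of its own residuals. Both pieces below are illustrative stand-ins (a made-up exponential core-temperature curve and a least-squares AR(1) fit), not the paper's actual physiological or autoregressive models.

```python
import numpy as np

# Stand-in "first-principles" model: core temperature rising toward 39 deg C.
def physics_model(t):
    return 37.0 + 2.0 * (1 - np.exp(-t / 60.0))   # deg C, t in minutes

def fit_ar1(residuals):
    """Least-squares AR(1) coefficient of the physics model's residuals."""
    r = np.asarray(residuals)
    x, y = r[:-1], r[1:]
    return float(x @ y / (x @ x))

def hybrid_predict(t_next, last_residual, phi):
    """Hybrid prediction = physics prediction + AR(1)-propagated residual."""
    return physics_model(t_next) + phi * last_residual
```

Replacing `fit_ar1` with a small neural network trained on the same residual series gives the nonlinear variant of the hybrid scheme.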
NASA Astrophysics Data System (ADS)
Cecchet, F.; Lis, D.; Caudano, Y.; Mani, A. A.; Peremans, A.; Champagne, B.; Guthmuller, J.
2012-03-01
The knowledge of the first hyperpolarizability tensor elements of molecular groups is crucial for a quantitative interpretation of the sum frequency generation (SFG) activity of thin organic films at interfaces. Here, the SFG response of the terminal methyl group of a dodecanethiol (DDT) monolayer has been interpreted on the basis of calculations performed at the density functional theory (DFT) level of approximation. In particular, DFT calculations have been carried out on three classes of models for the aliphatic chains. The first class of models consists of aliphatic chains, containing from 3 to 12 carbon atoms, in which only one methyl group can freely vibrate, while the rest of the chain is frozen by a strong overweight of its C and H atoms. This enables us to localize the probed vibrational modes on the methyl group. In the second class, only one methyl group is frozen, while the entire remaining chain is allowed to vibrate. This enables us to analyse the influence of the aliphatic chain on the methyl stretching vibrations. Finally, the dodecanethiol (DDT) molecule is considered, for which the effects of two dielectrics, i.e. n-hexane and n-dodecane, are investigated. Moreover, DDT calculations are also carried out by using different exchange-correlation (XC) functionals in order to assess the DFT approximations. Using the DFT IR vectors and Raman tensors, the SFG spectrum of DDT has been simulated and the orientation of the methyl group has then been deduced and compared with that obtained using an analytical approach based on a bond additivity model. This analysis shows that when using DFT molecular properties, the predicted orientation of the terminal methyl group tends to converge as a function of the alkyl chain length and that the effects of the chain as well as of the dielectric environment are small. Instead, a more significant difference is observed when comparing the DFT-based results with those obtained from the analytical approach, thus indicating
A stochastic approach to model validation
NASA Astrophysics Data System (ADS)
Luis, Steven J.; McLaughlin, Dennis
This paper describes a stochastic approach for assessing the validity of environmental models. In order to illustrate basic concepts we focus on the problem of modeling moisture movement through an unsaturated porous medium. We assume that the modeling objective is to predict the mean distribution of moisture content over time and space. The mean moisture content describes the large-scale flow behavior of most interest in many practical applications. The model validation process attempts to determine whether the model's predictions are acceptably close to the mean. This can be accomplished by comparing small-scale measurements of moisture content to the model's predictions. Differences between these two quantities can be attributed to three distinct 'error sources': (1) measurement error, (2) spatial heterogeneity, and (3) model error. If we adopt appropriate stochastic descriptions for the first two sources of error we can view model validation as a hypothesis testing problem where the null hypothesis states that model error is negligible. We illustrate this concept by comparing the predictions of a simple two-dimensional deterministic model to measurements collected during a field experiment carried out near Las Cruces, New Mexico. Preliminary results from this field test indicate that a stochastic approach to validation can identify model deficiencies and provide objective standards for model performance.
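Viewing validation as hypothesis testing can be reduced to a short sketch: under the null hypothesis (model error negligible), measurement-minus-prediction residuals have zero mean and a variance set by measurement error plus spatial heterogeneity. The known sigmas and the z-test form are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np

def validation_z(measured, predicted, sigma_meas, sigma_het):
    """z-statistic for the mean residual under H0: model error is negligible.
    sigma_meas and sigma_het are assumed known a priori."""
    r = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
    sigma0 = np.sqrt(sigma_meas ** 2 + sigma_het ** 2)
    return r.mean() / (sigma0 / np.sqrt(r.size))

def reject_h0(z, z_crit=1.96):
    """Two-sided test at ~5% level: True flags a model deficiency."""
    return abs(z) > z_crit
```

A real application would also account for spatial correlation of the heterogeneity term, which inflates the variance of the mean residual beyond sigma0²/n.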
Local-basis-function approach to computed tomography
NASA Astrophysics Data System (ADS)
Hanson, K. M.; Wecksung, G. W.
1985-12-01
In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single-basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
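In one dimension the expansion reads f(x) = Σ_k c_k β₃(x − k), with the cubic B-spline as the positive, overlapping, locally supported basis function. The sketch below shows the kernel and the expansion; a tomography code would tabulate line integrals of β₃ exactly as the abstract describes.

```python
# Cubic B-spline kernel: positive, supported on |x| < 2, and the shifted
# copies on the integer grid sum to 1 (partition of unity).
def beta3(x):
    ax = abs(x)
    if ax < 1.0:
        return 2.0 / 3.0 - ax ** 2 + ax ** 3 / 2.0
    if ax < 2.0:
        return (2.0 - ax) ** 3 / 6.0
    return 0.0

def evaluate(coeffs, x):
    """Evaluate the spline expansion f(x) = sum_k coeffs[k] * beta3(x - k)."""
    return sum(c * beta3(x - k) for k, c in enumerate(coeffs))
```

Because at most four shifted splines overlap any point, updating a reconstruction value touches only four coefficients, which is what makes local constraints (upper and lower limits) cheap to enforce.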
Kolker, Eugene
2009-01-01
Background Predicting protein function from primary sequence is an important open problem in modern biology. Not only are there many thousands of proteins of unknown function, but current approaches for predicting function must also be improved upon. One problem in particular is overly-specific function predictions, which we address here with a new statistical model of the relationship between protein sequence similarity and protein function similarity. Methodology Our statistical model is based on sets of proteins with experimentally validated functions and numeric measures of function specificity and function similarity derived from the Gene Ontology. The model predicts the similarity of function between two proteins given their amino acid sequence similarity measured by statistics from the BLAST sequence alignment algorithm. A novel aspect of our model is that it predicts the degree of function similarity shared between two proteins over a continuous range of sequence similarity, facilitating prediction of function with an appropriate level of specificity. Significance Our model shows nearly exact function similarity for proteins with high sequence similarity (bit score >244.7, e-value <1e−62, non-redundant NCBI protein database (NRDB)) and only a small likelihood of a specific function match for proteins with low sequence similarity (bit score <54.6, e-value >1e−05, NRDB). For sequence similarity ranges in between, our annotation model shows an increasing relationship between function similarity and sequence similarity, but with considerable variability. We applied the model to a large set of proteins of unknown function, and predicted functions for thousands of these proteins ranging from general to very specific. We also applied the model to a data set of proteins with previously assigned, specific functions that were electronically based. We show that, on average, these prior function predictions are more specific (quite possibly overly-specific) compared to
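The shape of the model can be sketched as a map from BLAST bit score to an expected function similarity: near-certain match above a high threshold, near-chance below a low one, and an increasing relationship in between. The linear interpolation and the residual-chance value are illustrative assumptions; only the two bit-score thresholds come from the abstract.

```python
# Illustrative bit-score -> expected-function-similarity curve. The 54.6
# and 244.7 thresholds are quoted in the abstract; the 0.05 floor and the
# linear ramp between them are placeholders for the fitted model.
def expected_function_similarity(bit_score, low=54.6, high=244.7):
    if bit_score >= high:
        return 1.0                  # nearly exact function similarity
    if bit_score <= low:
        return 0.05                 # small residual chance of a match
    frac = (bit_score - low) / (high - low)
    return 0.05 + 0.95 * frac       # increasing, with the ramp as a stand-in
```

The real model additionally reports the variability around this curve, which is what lets predictions be made at an appropriate level of Gene Ontology specificity.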
Zebrafish Functional Genetics Approach to the Pathogenesis of Well-Differentiated Liposarcoma
2015-12-01
Award Number W81XWH-13-1-0340. Title: Zebrafish Functional Genetics Approach to the Pathogenesis of Well-Differentiated Liposarcoma. Reporting period: 2013 to 14 Sep 2015. The report shows that FRS2 is a 12q oncogene that activates oncogenic signal transduction, using FRS2 overexpression in genetically engineered zebrafish models and in
Selectionist and Evolutionary Approaches to Brain Function: A Critical Appraisal
Fernando, Chrisantha; Szathmáry, Eörs; Husbands, Phil
2012-01-01
We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. All of them fit, however, a generalized selectionist framework conforming to the picture of Price’s covariance formulation, which deliberately was not specific even to selection in biology, and therefore does not imply an algorithmic picture of biological evolution. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Examples are given of cases where parallel competitive search with information transfer among the units is more efficient than search without information transfer between units. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. Although none of the proposed neuronal replicators include miraculous mechanisms, their identification remains a challenge but also a great promise. PMID:22557963
Measuring Social Returns to Higher Education Investments in Hong Kong: Production Function Approach.
ERIC Educational Resources Information Center
Voon, Jan P.
2001-01-01
Uses a growth model involving an aggregate production function to measure social benefits from human capital improvements due to investments in Hong Kong higher education. Returns calculated using the production-function approach are significantly higher than those derived from the wage-increment method. Returns declined during the past 10 years.…
Towards new approaches in phenological modelling
NASA Astrophysics Data System (ADS)
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
For many decades, the modelling of phenological stages has been based on temperature sums describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. There is now a growing body of opinion that calls for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easy to understand given the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). A fundamental improvement of these models is limited by the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution to this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop such models, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr) and high-resolution (weekly samples between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. Our suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
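The temperature-sum concept going back to Reaumur can be sketched in a few lines; the 5 °C base temperature below is an illustrative assumption, not a value from this study:

```python
def growing_degree_days(daily_mean_temps_c, base_temp_c=5.0):
    """Accumulate forcing units: degrees above a base temperature, summed over days.

    base_temp_c is species-specific; 5.0 is only an illustrative default.
    """
    return sum(max(t - base_temp_c, 0.0) for t in daily_mean_temps_c)

# A phenological stage is predicted once the sum passes a calibrated threshold.
```

In semi-mechanistic models, a chilling sum of the same form (accumulated below a threshold) is combined with this forcing sum to model dormancy release.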
Dynamic geometry, brain function modeling, and consciousness.
Roy, Sisir; Llinás, Rodolfo
2008-01-01
Pellionisz and Llinás proposed, years ago, a geometric interpretation towards understanding brain function. This interpretation assumes that the relation between the brain and the external world is determined by the ability of the central nervous system (CNS) to construct an internal model of the external world using an interactive geometrical relationship between sensory and motor expression. This approach opened new vistas not only in brain research but also in understanding the foundations of geometry itself. The approach named tensor network theory is sufficiently rich to allow specific computational modeling and addressed the issue of prediction, based on Taylor series expansion properties of the system, at the neuronal level, as a basic property of brain function. It was actually proposed that the evolutionary realm is the backbone for the development of an internal functional space that, while being purely representational, can interact successfully with the totally different world of the so-called "external reality". Now if the internal space or functional space is endowed with stochastic metric tensor properties, then there will be a dynamic correspondence between events in the external world and their specification in the internal space. We shall call this dynamic geometry since the minimal time resolution of the brain (10-15 ms), associated with 40 Hz oscillations of neurons and their network dynamics, is considered to be responsible for recognizing external events and generating the concept of simultaneity. The stochastic metric tensor in dynamic geometry can be written as five-dimensional space-time where the fifth dimension is a probability space as well as a metric space. This extra dimension is considered an imbedded degree of freedom. It is worth noticing that the above-mentioned 40 Hz oscillation is present both in awake and dream states where the central difference is the inability of phase resetting in the latter. This framework of dynamic
Crossing Hazard Functions in Common Survival Models.
Zhang, Jiajia; Peng, Yingwei
2009-10-15
Crossing hazard functions have extensive applications in modeling survival data. However, existing studies in the literature mainly focus on comparing crossed hazard functions and estimating the time at which the hazard functions cross, and there is little theoretical work on conditions under which hazard functions from a model will have a crossing. In this paper, we investigate crossing status of hazard functions from the proportional hazards (PH) model, the accelerated hazard (AH) model, and the accelerated failure time (AFT) model. We provide and prove conditions under which the hazard functions from the AH and the AFT models have no crossings or a single crossing. A few examples are also provided to demonstrate how the conditions can be used to determine crossing status of hazard functions from the three models.
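As a numerical illustration of the crossing question (a generic sketch, not the paper's own conditions), two Weibull hazards with shape parameters on opposite sides of 1 cross exactly once, which a grid search can locate:

```python
import numpy as np

def weibull_hazard(t, shape, scale=1.0):
    # h(t) = (k/lambda) * (t/lambda)**(k-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def crossing_points(t, h1, h2):
    """Return grid points where the sign of h1 - h2 changes."""
    d = np.sign(h1 - h2)
    idx = np.where(np.diff(d) != 0)[0]
    return t[idx]

t = np.linspace(0.05, 2.0, 400)
# A decreasing (shape < 1) and an increasing (shape > 1) hazard cross once.
crossings = crossing_points(t, weibull_hazard(t, 0.5), weibull_hazard(t, 2.0))
```

The analytical crossing here solves 0.5 t^(-1/2) = 2t, i.e. t = 0.25^(2/3) ≈ 0.40, which the grid search recovers to grid precision.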
A Continuum Approach For Neural Network Modelling Of Anisotropic Materials
NASA Astrophysics Data System (ADS)
Man, Hou; Furukawa, Tomonari
2010-05-01
This paper presents an approach for constitutive modelling of anisotropic materials using neural networks on a continuum basis. The proposed approach develops the models by using an error function formulated from the minimum total potential energy principle. The variation of the strain energy of a deformed geometry is approximated by using the full field strain measurement with the neural network constitutive model (NNCM) and the coordinate frame transformation. It is subsequently compared with the variation of the applied external work, such that the discrepancy is fed back to update the model properties. The proposed approach is, therefore, able to develop the NNCM without the presence of stress data. This not only facilitates the use of multi-axial load tests and non-standard specimens to produce more realistic experimental results, but also reduces the number of different specimen configurations used for the model development. A numerical example is presented in this paper to validate the performance and applicability of the proposed approach by modelling a carbon fibre reinforced plastic (CFRP) lamina. Artificial experimental results of tensile tests with two different specimens are used to facilitate the validation. The results emphasise the flexibility and applicability of the proposed approach for constitutive modelling of anisotropic materials.
From equation to inequality using a function-based approach
NASA Astrophysics Data System (ADS)
Verikios, Petros; Farmaki, Vassiliki
2010-06-01
This article presents features of a qualitative research study concerning the teaching and learning of school algebra using a function-based approach in a grade 8 class, of 23 students, in 26 lessons, in a state school of Athens, in the school year 2003-2004. In this article, we are interested in the inequality concept and our aim is to investigate if and how our approach could facilitate students to comprehend inequality and to solve problems related to this concept. Data analysis showed that, in order to comprehend the new concept, the students should make a transition from equation to inequality. The role of the situation context proved decisive in this transition and in making sense of involved symbols. Also, students used function representations as problem-solving strategies in problems that included inequalities. However, the extension of the function-based approach in solving an abstract equation or inequality proved problematic for the students.
Shell Model Approach to Nuclear Level Density
NASA Astrophysics Data System (ADS)
Horoi, Mihai
2000-04-01
Nuclear level densities (NLD) are traditionally estimated using variations of Fermi Gas Formula (FGF) or combinatoric techniques. Recent investigations using Monte Carlo Shell Model (MCSM) techniques indicate that a shell model description of NLD may be an accurate and stable approach. Full shell model calculations of NLD are very difficult. We calculated the NLD for all nuclei in the sd shell and show that the results can be described by a single particle combinatoric model, which depends on two parameters similar to FGF. We further investigated other models and find that a sum of gaussians with means and variances given by French and Ratcliff averages (Phys. Rev. C 3, 94(1971)) is able to accurately describe shell model NLD, even when shell effects are present. The contribution of the spurious center-of-mass motion to the shell model NLD is also discussed.
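The sum-of-Gaussians description can be sketched generically; the centroids, widths, and configuration dimensions below are placeholders, not the French-Ratcliff averages themselves:

```python
import math

def gaussian_sum_density(E, centroids, widths, dims):
    """rho(E) as a sum of Gaussians, one per configuration, each weighted by
    the configuration dimension (number of many-body states it contains)."""
    rho = 0.0
    for mu, sigma, d in zip(centroids, widths, dims):
        rho += d * math.exp(-0.5 * ((E - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return rho
```

Each Gaussian integrates to its configuration dimension, so the total density integrates to the full dimension of the model space.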
Interprofessional approach for teaching functional knee joint anatomy.
Meyer, Jakob J; Obmann, Markus M; Gießler, Marianne; Schuldis, Dominik; Brückner, Ann-Kathrin; Strohm, Peter C; Sandeck, Florian; Spittau, Björn
2017-03-01
Profound knowledge in functional and clinical anatomy is a prerequisite for efficient diagnosis in medical practice. However, anatomy teaching does not always consider functional and clinical aspects. Here we introduce a new interprofessional approach to effectively teach the anatomy of the knee joint. The presented teaching approach involves anatomists, orthopaedists and physical therapists to teach anatomy of the knee joint in small groups under functional and clinical aspects. The knee joint courses were implemented during early stages of the medical curriculum and medical students were grouped with students of physical therapy to sensitize students to the importance of interprofessional work. Evaluation results clearly demonstrate that medical students and physical therapy students appreciated this teaching approach. First evaluations of following curricular anatomy exams suggest a benefit of course participants in knee-related multiple choice questions. Together, the interprofessional approach presented here proves to be a suitable approach to teach functional and clinical anatomy of the knee joint and further trains interprofessional work between prospective physicians and physical therapists as a basis for successful healthcare management.
Filtered density function approach for reactive transport in groundwater
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Schüler, Lennart; Attinger, Sabine; Knabner, Peter
2016-04-01
Spatial filtering may be used in coarse-grained simulations (CGS) of reactive transport in groundwater, similar to the large eddy simulations (LES) in turbulence. The filtered density function (FDF), stochastically equivalent to a probability density function (PDF), provides a statistical description of the sub-grid, unresolved, variability of the concentration field. Besides closing the chemical source terms in the transport equation for the mean concentration, like in LES-FDF methods, the CGS-FDF approach aims at quantifying the uncertainty over the whole hierarchy of heterogeneity scales exhibited by natural porous media. Practically, that means estimating concentration PDFs on coarse grids, at affordable computational costs. To cope with the high dimensionality of the problem in case of multi-component reactive transport and to reduce the numerical diffusion, FDF equations are solved by particle methods. But, while trajectories of computational particles are modeled as stochastic processes indexed by time, the concentration's heterogeneity is modeled as a random field, with multi-dimensional, spatio-temporal sets of indices. To overcome this conceptual inconsistency, we consider FDFs/PDFs of random species concentrations weighted by conserved scalars and we show that their evolution equations can be formulated as Fokker-Planck equations describing stochastically equivalent processes in concentration-position spaces. Numerical solutions can then be approximated by the density in the concentration-position space of an ensemble of computational particles governed by the associated Itô equations. Instead of sequential particle methods we use a global random walk (GRW) algorithm, which is stable, free of numerical diffusion, and practically insensitive to the increase of the number of particles. We illustrate the general FDF approach and the GRW numerical solution for a reduced complexity problem consisting of the transport of a single scalar in groundwater
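A minimal sketch of the global random walk idea on a 1D lattice, with binomial splitting of site occupancies rather than per-particle moves (lattice size, particle count, and probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def grw_step(counts, p_stay=0.5):
    """One GRW step: each site's particles are split binomially into staying,
    left-moving, and right-moving groups (reflecting boundaries)."""
    n = len(counts)
    new = np.zeros(n, dtype=int)
    for i, c in enumerate(counts):
        stay = rng.binomial(c, p_stay)
        left = rng.binomial(c - stay, 0.5)
        right = c - stay - left
        new[i] += stay
        new[max(i - 1, 0)] += left
        new[min(i + 1, n - 1)] += right
    return new

counts = np.zeros(21, dtype=int)
counts[10] = 10_000            # point injection in the middle
for _ in range(50):
    counts = grw_step(counts)  # spreads diffusively, conserving particles exactly
```

Because whole cohorts are moved per site, the particle number is conserved exactly at every step, which is one reason the scheme is free of numerical diffusion and insensitive to increasing the number of particles.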
Towards a Multiscale Approach to Cybersecurity Modeling
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
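For reference, the single-scale all-pairs shortest-path baseline that the multiscale analog generalizes is the classic Floyd-Warshall recurrence (the toy weights below are illustrative):

```python
INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest paths on an adjacency matrix w (use INF for no edge)."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# E.g. distance from an attacker's node (0) to a sensitive machine (2):
w = [[0, 5, 9],
     [5, 0, 2],
     [9, 2, 0]]
dist = floyd_warshall(w)  # the direct 0-2 edge of weight 9 is beaten by 5 + 2 via node 1
```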
The totally constrained model: three quantization approaches
NASA Astrophysics Data System (ADS)
Gambini, Rodolfo; Olmedo, Javier
2014-08-01
We provide a detailed comparison of the different approaches available for the quantization of a totally constrained system with a constraint algebra generating the non-compact group. In particular, we consider three schemes: the Refined Algebraic Quantization, the Master Constraint Programme and the Uniform Discretizations approach. For the latter, we provide a quantum description where we identify semiclassical sectors of the kinematical Hilbert space. We study the quantum dynamics of the system in order to show that it is compatible with the classical continuum evolution. Among these quantization approaches, the Uniform Discretizations provides the simpler description in agreement with the classical theory of this particular model, and it is expected to give new insights about the quantum dynamics of more realistic totally constrained models such as canonical general relativity.
Post-16 Biology--Some Model Approaches?
ERIC Educational Resources Information Center
Lock, Roger
1997-01-01
Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)
New approach to folding with the Coulomb wave function
Blokhintsev, L. D.; Savin, D. A.; Kadyrov, A. S.; Mukhamedzhanov, A. M.
2015-05-15
Due to the long-range character of the Coulomb interaction, the theoretical description of low-energy nuclear reactions with charged particles remains a formidable task. One way of dealing with the problem in an integral-equation approach is to employ a screened Coulomb potential. A general approach without screening requires folding of the kernels of the integral equations with the Coulomb wave function. A new method of folding a function with the Coulomb partial waves is presented. The partial-wave Coulomb function, both in the configuration and momentum representations, is written in the form of a separable series. Each term of the series is the product of a factor depending only on the Coulomb parameter and a function depending on the spatial variable in the configuration space (or on the momentum variable if the momentum representation is used). Using a trial function, the method is demonstrated to be efficient and reliable.
Thilaga, M; Vijayalakshmi, R; Nadarajan, R; Nandagopal, D
2016-06-01
The complex nature of neuronal interactions of the human brain has posed many challenges to the research community. To explore the underlying mechanisms of neuronal activity of cohesive brain regions during different cognitive activities, many innovative mathematical and computational models are required. This paper presents a novel Common Functional Pattern Mining approach to demonstrate the similar patterns of interactions due to common behavior of certain brain regions. The electrode sites of EEG-based functional brain network are modeled as a set of transactions and node-based complex network measures as itemsets. These itemsets are transformed into a graph data structure called Functional Pattern Graph. By mining this Functional Pattern Graph, the common functional patterns due to specific brain functioning can be identified. The empirical analyses show the efficiency of the proposed approach in identifying the extent to which the electrode sites (transactions) are similar during various cognitive load states.
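The transaction/itemset framing can be illustrated with plain frequent-itemset support counting; this is a generic sketch, not the paper's Functional Pattern Graph algorithm, and the item names are toy values:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count co-occurring item pairs across transactions and keep those
    that meet the support threshold."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Electrode sites as transactions, node-based network measures as items:
patterns = frequent_pairs(
    [{"degree", "strength", "closeness"},
     {"degree", "strength"},
     {"strength", "closeness"}],
    min_support=2,
)
```

Pairs (or larger itemsets) that recur across many electrode sites play the role of the "common functional patterns" the abstract describes.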
Bovell, Adonis; Warncke, Kurt
2013-01-01
Ethanolamine ammonia-lyase (EAL) is a 5'-deoxyadenosylcobalamin (AdoCbl; coenzyme B12)-dependent bacterial enzyme that catalyzes the deamination of the short-chain vicinal amino alcohols, aminoethanol and [S]- and [R]-2-aminopropanol. The coding sequence for EAL is located within the 17-gene eut operon, which codes for the broad spectrum of proteins that comprise the eut metabolosome sub-organelle structure. A high-resolution structure of the ~500 kDa EAL [(EutB-EutC)2]3 oligomer from Escherichia coli has been determined by X-ray crystallography, but high-resolution spectroscopic determinations of reactant intermediate state structures, and detailed kinetic and thermodynamic studies of EAL, have been conducted for the Salmonella typhimurium enzyme. Therefore, a statistically robust homology model for the S. typhimurium EAL is constructed from the E. coli structure. The model structure is used to describe the hierarchy of EutB and EutC subunit interactions that construct the native EAL oligomer, and specifically, to address the long-standing challenge of reconstitution of the functional oligomer from isolated, purified subunits. Model prediction that the (EutB2)3 oligomer assembly will occur from isolated EutB, and that this hexameric structure will template the formation of the complete, native [(EutB-EutC)2]3 oligomer, is verified by biochemical methods. Prediction that cysteine residues on the exposed subunit-subunit contact surfaces of isolated EutB and EutC will interfere with assembly by cystine formation is verified by activating effects of disulfide reducing agents. Ångstrom-scale congruence of the reconstituted and native EAL in the active site region is shown by electron paramagnetic resonance spectroscopy. Overall, the hierarchy of subunit interactions and microscopic features of the contact surfaces, that are revealed by the homology model, guide and provide a rationale for a refined genetic and biochemical approach to reconstitution of the
Introduction to the Subjective Transfer Function Approach to Analyzing Systems.
1984-07-01
R-3021-AF: Introduction to the Subjective Transfer Function Approach to Analyzing Systems. Clairice T. Veit, Monti Callero, Barbara J. Rose. July 1984. A Project AIR FORCE report prepared for the United States Air Force. Bibliography: p. 1. Subjective transfer function method. 2. System analysis.
Building Water Models: A Different Approach
2015-01-01
Simplified classical water models are currently an indispensable component in practical atomistic simulations. Yet, despite several decades of intense research, these models are still far from perfect. Presented here is an alternative approach to constructing widely used point charge water models. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than the symmetry. Instead, we optimize the distribution of point charges to best describe the “electrostatics” of the water molecule. The resulting “optimal” 3-charge, 4-point rigid water model (OPC) reproduces a comprehensive set of bulk properties significantly more accurately than commonly used rigid models: average error relative to experiment is 0.76%. Close agreement with experiment holds over a wide range of temperatures. The improvements in the proposed model extend beyond bulk properties: compared to common rigid models, predicted hydration free energies of small molecules using OPC are uniformly closer to experiment, with root-mean-square error <1 kcal/mol. PMID:25400877
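The "electrostatics" objective can be illustrated by the dipole moment of a point-charge arrangement, one of the lowest multipole moments such an optimization would target; the charges and geometry below are toy values, not the OPC parameters:

```python
import numpy as np

def dipole_moment(charges, positions):
    """mu = sum_i q_i * r_i for point charges at the given (N x 3) positions."""
    return np.asarray(charges, dtype=float) @ np.asarray(positions, dtype=float)

# Toy 3-charge, water-like arrangement (one negative site, two positive sites):
mu = dipole_moment([-1.0, 0.5, 0.5],
                   [[0.0, 0.0, 0.0],
                    [0.8, 0.6, 0.0],
                    [-0.8, 0.6, 0.0]])
```

In the approach described, charge magnitudes and positions are free parameters tuned so that moments like this one match the real molecule, rather than being pinned to fixed bond geometry.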
Models of Protocellular Structure, Function and Evolution
NASA Technical Reports Server (NTRS)
New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School, and the University of California. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids, in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are then vastly multiplied using the polymerase chain reaction.
Partitioned density functional approach for a Lennard-Jones fluid.
Zhou, Shiqi
2003-12-01
The existing classical density functional approach for the nonuniform Lennard-Jones fluid, which is based on dividing the Lennard-Jones interaction potential into a short-range, repulsive part and a smoothly varying, long-range, attractive tail, was improved by dividing the bulk second-order direct correlation function into a strongly density-dependent short-range part and a weakly density-dependent long-range part. The latter is treated by a functional perturbation expansion truncated at the lowest order, whose accuracy depends on how weakly the long-range part depends on the bulk density. The former is treated by the truncated functional perturbation expansion, which is rewritten in the form of the simple weighted density approximation and incorporates the omitted higher-order terms by applying the Lagrangian theorem of differential calculus to the reformulated form. The two approximations are put into the density profile equation of the density functional theory formalism to predict the density distribution for a Lennard-Jones fluid in contact with a hard wall or between two hard walls, within the whole density range for reduced temperature T*=1.35 and at one density point for reduced temperature T*=1. The present partitioned density functional theory performs much better than several previous density functional perturbation theory approaches and a recently proposed bridge density functional approximation.
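The repulsive/attractive division of the Lennard-Jones potential that this line of work starts from is commonly done WCA-style (splitting at the potential minimum); a minimal sketch in reduced units:

```python
def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def wca_split(r, eps=1.0, sigma=1.0):
    """Split u_LJ into a purely repulsive core (shifted to zero at the minimum)
    and a smooth attractive tail; returns (repulsive, attractive)."""
    r_min = 2.0 ** (1.0 / 6.0) * sigma  # location of the LJ minimum
    if r < r_min:
        return lj(r, eps, sigma) + eps, -eps
    return 0.0, lj(r, eps, sigma)
```

By construction the two parts sum back to the full potential at every separation, and the attractive part varies smoothly, which is what makes low-order perturbation treatments of the tail workable.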
Neural network approaches for noisy language modeling.
Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid
2013-11-01
Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are particularly pronounced in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory, and applies neural networks to one of its specific applications: the typing stream. The paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and specific ways, to identify their typing behaviors and, subsequently, to make typing predictions and typing corrections. A focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained from the analysis of a user's typing history through FTDNN language modeling, while results using the time gap prediction model and the PNN model show correction rates lying predominantly between 65% and 90% on the current testing samples, with 70% of all test scores above the basic correction rates. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
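The quoted hit-rate figures can be computed generically from ranked predictions; a sketch of the metric only (the FTDNN model itself is not reproduced here, and the symbols are toy data):

```python
def hit_rate(ranked_predictions, actual_symbols, top_k=1):
    """Fraction of positions where the true next symbol appears among the
    model's top_k ranked candidates."""
    hits = sum(actual in ranked[:top_k]
               for ranked, actual in zip(ranked_predictions, actual_symbols))
    return hits / len(actual_symbols)

# "First HR" corresponds to top_k=1, "first three HR" to top_k=3.
preds = [["e", "a", "t"], ["t", "h", "e"], ["x", "y", "z"]]
truth = ["e", "e", "z"]
```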
An object-oriented approach to energy-economic modeling
Wise, M.A.; Fox, J.A.; Sands, R.D.
1993-12-01
In this paper, the authors discuss the experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, will be detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on the experience in building energy-economic models with procedure-oriented approaches and languages.
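The idea of a class hierarchy specified from standard microeconomic production functions can be sketched in a few lines; Python stands in for the paper's C++, and all names and parameters are illustrative:

```python
from abc import ABC, abstractmethod

class Producer(ABC):
    """Base class for a market actor defined by its production function."""
    @abstractmethod
    def output(self, inputs):
        ...

class CobbDouglas(Producer):
    """Q = A * x1^a1 * x2^a2 * ... , a standard microeconomic form."""
    def __init__(self, scale, exponents):
        self.scale = scale
        self.exponents = exponents

    def output(self, inputs):
        q = self.scale
        for x, a in zip(inputs, self.exponents):
            q *= x ** a
        return q

# Energy and agriculture sectors share the Producer interface but differ in parameters.
farm = CobbDouglas(scale=2.0, exponents=[0.5, 0.5])
```

The object-oriented payoff described in the paper is exactly this: each sector is an object satisfying a common interface, so market-clearing code never needs to know which functional form sits behind it.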
Molecular modelling approaches for cystic fibrosis transmembrane conductance regulator studies.
Odolczyk, Norbert; Zielenkiewicz, Piotr
2014-07-01
Cystic fibrosis (CF) is one of the most common genetic disorders, caused by loss of function mutations in the gene encoding the CF transmembrane conductance regulator (CFTR) protein. CFTR is a member of ATP-binding cassette (ABC) transporters superfamily and functions as an ATP-gated anion channel. This review summarises the vast majority of the efforts which utilised molecular modelling approaches to gain insight into the various aspects of CFTR protein, related to its structure, dynamic properties, function and interactions with other protein partners, or drug-like compounds, with emphasis to its relation to CF disease.
A hybrid modeling approach for option pricing
NASA Astrophysics Data System (ADS)
Hajizadeh, Ehsan; Seifi, Abbas
2011-11-01
The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a couple of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. We then develop two non-parametric models, based on neural networks and neuro-fuzzy networks, to price call options for the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
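The Black-Scholes benchmark the hybrid models are compared against is closed-form; a minimal sketch for a European call (inputs below are illustrative):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call; rests on the lognormal
    assumption the paper criticizes."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

In the hybrid setup described, the constant sigma here is replaced by a GARCH-estimated volatility, and the pricing map itself is learned by the neural or neuro-fuzzy network.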
ERIC Educational Resources Information Center
Li, Fuzhong; Duncan, Terry E.; Harmer, Peter; Acock, Alan; Stoolmiller, Mike
1998-01-01
Discusses the utility of multilevel confirmatory factor analysis and hierarchical linear modeling methods in testing measurement models in which the underlying attribute may vary as a function of levels of observation. A real dataset is used to illustrate the two approaches and their comparability. (SLD)
Questionnaire of Executive Function for Dancers: An Ecological Approach
ERIC Educational Resources Information Center
Wong, Alina; Rodriguez, Mabel; Quevedo, Liliana; de Cossio, Lourdes Fernandez; Borges, Ariel; Reyes, Alicia; Corral, Roberto; Blanco, Florentino; Alvarez, Miguel
2012-01-01
There is a current debate about the ecological validity of executive function (EF) tests. Consistent with the verisimilitude approach, this research proposes the Ballet Executive Scale (BES), a self-rating questionnaire that assimilates idiosyncratic executive behaviors of classical dance community. The BES was administrated to 149 adolescents,…
From Equation to Inequality Using a Function-Based Approach
ERIC Educational Resources Information Center
Verikios, Petros; Farmaki, Vassiliki
2010-01-01
This article presents features of a qualitative research study concerning the teaching and learning of school algebra using a function-based approach in a grade 8 class, of 23 students, in 26 lessons, in a state school of Athens, in the school year 2003-2004. In this article, we are interested in the inequality concept and our aim is to…
The Feynman-Vernon Influence Functional Approach in QED
NASA Astrophysics Data System (ADS)
Biryukov, Alexander; Shleenkov, Mark
2016-10-01
In the path integral approach, we describe the evolution of interacting electromagnetic and fermionic fields using the density matrix formalism. The equation for the density matrix and the transition probability for the fermionic field are obtained by averaging over the electromagnetic field influence functional. We obtain a formula for calculating the electromagnetic field influence functional for various initial and final states of the field, and we derive the influence functional explicitly for the case when its initial and final states are the vacuum. We present the Lagrangian for a relativistic fermionic field under the influence of the electromagnetic field vacuum.
Functional toxicology: a new approach to detect biologically active xenobiotics.
McLachlan, J A
1993-01-01
The pervasiveness of chemicals in the environment with estrogenic activity and other biological functions recommends the development of new approaches to monitor and study them. Chemicals can be screened for activity in vitro using a panel of human or animal cells that have been transfected with a specific receptor and reporter gene (for example, the estrogen receptor). By using a variety of different receptors, the screening of xenobiotics for biological functions can be broad. Chemicals could then be classified by their function in vitro, which, in some cases, may be a useful guide for toxicological studies. PMID:8119246
A simple approach to covalent functionalization of boron nitride nanotubes.
Ciofani, Gianni; Genchi, Giada Graziana; Liakos, Ioannis; Athanassiou, Athanassia; Dinucci, Dinuccio; Chiellini, Federica; Mattoli, Virgilio
2012-05-15
A novel and simple method for the preparation of chemically functionalized boron nitride nanotubes (BNNTs) is presented. Through strong oxidation followed by silanization of the surface with 3-aminopropyltriethoxysilane (APTES), BNNTs exposing amino groups on their surface were successfully obtained. The efficacy of the procedure was assessed with EDS and XPS analyses, which demonstrated successful functionalization of ~15% of the boron sites. This approach opens interesting perspectives for further modification of BNNTs with several kinds of molecules. Since biomedical applications in particular are envisaged, we also demonstrated in vitro biocompatibility and cellular uptake of the functionalized BNNTs.
Järvinen, Anna; Ng, Rowena; Bellugi, Ursula
2015-01-01
Williams syndrome (WS) is a neurogenetic disorder that is saliently characterized by a unique social phenotype, most notably associated with a dramatically increased affinity and approachability toward unfamiliar people. Despite a recent proliferation of studies into the social profile of WS, the underpinnings of the pro-social predisposition are poorly understood. To this end, the present study was aimed at elucidating approach behavior of individuals with WS contrasted with typical development (TD) by employing a multidimensional design combining measures of autonomic arousal, social functioning, and two levels of approach evaluations. Given previous evidence suggesting that approach behaviors of individuals with WS are driven by a desire for social closeness, approachability tendencies were probed across two levels of social interaction: talking versus befriending. The main results indicated that while overall level of approachability did not differ between groups, an important qualitative between-group difference emerged across the two social interaction contexts: whereas individuals with WS demonstrated a similar willingness to approach strangers across both experimental conditions, TD individuals were significantly more willing to talk to than to befriend strangers. In WS, high approachability to positive faces across both social interaction levels was further associated with more normal social functioning. A novel finding linked autonomic responses with willingness to befriend negative faces in the WS group: elevated autonomic responsivity was associated with increased affiliation to negative face stimuli, which may represent an autonomic correlate of approach behavior in WS. Implications for underlying organization of the social brain are discussed. PMID:26459097
A Bayesian Shrinkage Approach for AMMI Models.
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components; the models selected were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The resulting model chosen by the posterior distribution of the singular values was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior
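The deterministic core of the AMMI decomposition described above can be sketched in a few lines: double-center the two-way table to isolate the interaction, then take its SVD so the leading singular vectors give the multiplicative terms plotted in the biplot. This sketch omits the Bayesian shrinkage machinery entirely, and the table dimensions and data are synthetic, not the 55 × 9 maize trial.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative two-way table of genotype-by-environment means
# (synthetic data; dimensions are arbitrary).
G, E = 12, 6
table = rng.normal(size=(G, E))

# AMMI core: remove the additive main effects, then SVD the residual
# (interaction) matrix.
grand = table.mean()
g_eff = table.mean(axis=1, keepdims=True) - grand   # genotype main effects
e_eff = table.mean(axis=0, keepdims=True) - grand   # environment main effects
interaction = table - grand - g_eff - e_eff         # double-centered residual

U, s, Vt = np.linalg.svd(interaction, full_matrices=False)
k = 2  # number of multiplicative terms retained
genotype_scores = U[:, :k] * np.sqrt(s[:k])         # biplot coordinates
environment_scores = Vt[:k].T * np.sqrt(s[:k])
```

The debated model-selection question in the abstract is precisely the choice of `k`; the shrinkage estimators act on the singular values `s` rather than truncating them sharply.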
Bayesian non-parametrics and the probabilistic approach to modelling
Ghahramani, Zoubin
2013-01-01
Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609
Bootstrapped models for intrinsic random functions
Campbell, K.
1988-08-01
Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
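The sample-reuse idea underlying the bootstrap can be illustrated with a minimal sketch. Here a generic plug-in estimator (the sample variance) stands in for the generalized covariance function, and the data and sample sizes are purely illustrative, not the paper's geostatistical setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a generalized-covariance parameter:
# we estimate the variance of a process from a finite sample.
data = rng.normal(loc=0.0, scale=2.0, size=200)

def estimate_parameter(sample):
    # Plug-in estimator whose estimation error we want to quantify.
    return sample.var(ddof=1)

theta_hat = estimate_parameter(data)

# Bootstrap: re-estimate the parameter on resampled datasets to get
# a realistic picture of the error introduced by imperfect estimation.
boot = np.array([
    estimate_parameter(rng.choice(data, size=data.size, replace=True))
    for _ in range(1000)
])

se = boot.std(ddof=1)                   # bootstrap standard error
ci = np.percentile(boot, [2.5, 97.5])   # percentile interval
```

The bootstrap distribution `boot` plays the role the paper assigns to distributions of functionals of the process: it pictures the variability that the single point estimate `theta_hat` hides.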
Muscle-Derived GDNF: A Gene Therapeutic Approach for Preserving Motor Neuron Function in ALS
2015-08-01
AWARD NUMBER: W81XWH-14-1-0189. TITLE: Muscle-Derived GDNF: A Gene Therapeutic Approach for Preserving Motor Neuron Function in ALS. Hypothesis: Intramuscular AAV5-GDNF injection will ameliorate motor neuron function in the SOD1G93A rat model of ALS.
The atomic approach for the Coqblin-Schrieffer model
NASA Astrophysics Data System (ADS)
Figueira, M. S.; Saguia, A.; Foglio, M. E.; Silva-Valencia, J.; Franco, R.
2014-12-01
In this work we consider the Coqblin-Schrieffer model for spin S = 1/2. The atomic solution has eight states: four conduction and two localized states, and we can then calculate the eigenenergies and eigenstates analytically. From this solution, employing the cumulant Green's function results of the Anderson model, we build a "seed" that serves as the input to the atomic approach developed earlier by some of us. We obtain the T-matrix as well as the conduction Green's function of the model, both for the impurity and for the lattice case. The generalization to other moments with N states follows the same steps. We present results for both the impurity and the lattice case, and we indicate possible applications of the method to the study of ultracold atoms confined in optical superlattices and of Kondo insulators. In the latter case, our results support an insulator-metal transition as a function of temperature.
Models of Protocellular Structure, Function and Evolution
NASA Technical Reports Server (NTRS)
New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.
2001-01-01
In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction. A mutagenic approach, in which the sequences of selected molecules are randomly altered, can yield further improvements in performance or alterations of specificities. Unfortunately, the catalytic potential of nucleic acids is rather limited. Proteins are more catalytically capable but cannot be directly amplified. In the new technique, this problem is circumvented by covalently linking each protein of the initial, diverse, pool to the RNA sequence that codes for it. Then, selection is performed on the proteins, but the nucleic acids are replicated. Additional information is contained in the original extended abstract.
Design for diagnostics and prognostics: A physical-functional approach
NASA Astrophysics Data System (ADS)
Niculita, O.; Jennions, I. K.; Irving, P.
This paper describes an end-to-end Integrated Vehicle Health Management (IVHM) development process, with a strong emphasis on the COTS software tools employed in its implementation. A mix of physical simulation and functional failure analysis was chosen as a route for early assessment of degradation in complex systems, since capturing system failure modes and their symptoms facilitates the assessment of health management solutions for a complex asset. The method chosen for IVHM development is closely correlated to the generic engineering cycle. The concepts employed by this method are demonstrated on a laboratory fuel system test rig, but they can also be applied to both new and legacy high-tech, high-value systems. Another objective of the study is to identify the relations between the different types of knowledge supporting the health management development process when physical and functional models are used together. The conclusion of this investigation is that functional modeling and physical simulation should not be done in isolation: the functional model requires permanent feedback from a physical system simulator in order to accurately represent the real system. This paper therefore also describes the steps required to develop a functional model that reflects the physical knowledge inherently known about a given system.
Bioactive Functions of Milk Proteins: a Comparative Genomics Approach.
Sharp, Julie A; Modepalli, Vengama; Enjapoori, Ashwanth Kumar; Bisana, Swathi; Abud, Helen E; Lefevre, Christophe; Nicholas, Kevin R
2014-12-01
The composition of milk includes factors required to provide appropriate nutrition for the growth of the neonate. However, it is now clear that milk has many functions and comprises bioactive molecules that play a central role in regulating developmental processes in the young while providing a protective function for both the suckled young and the mammary gland during the lactation cycle. Identifying these bioactives and their physiological function in eutherians can be difficult and requires extensive screening of milk components that may function to improve well-being and options for prevention and treatment of disease. New animal models with unique reproductive strategies are now becoming increasingly relevant to search for these factors.
An Evolutionary Computation Approach to Examine Functional Brain Plasticity
Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.
2016-01-01
One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels, and therefore functional relationships between an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair, simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
Algebraic operator approach to gas kinetic models
NASA Astrophysics Data System (ADS)
Il'ichov, L. V.
1997-02-01
Some general properties of the linear Boltzmann kinetic equation are used to present it in the form ∂tϕ = -Â†Âϕ, with the operators Â and Â† possessing some nontrivial algebraic properties. When applied to the Keilson-Storer kinetic model, this method gives an example of a quantum (q-deformed) Lie algebra. The approach also provides a natural generalization of the "kangaroo model".
Systems Engineering Interfaces: A Model Based Approach
NASA Technical Reports Server (NTRS)
Fosse, Elyse; Delp, Christopher
2013-01-01
Currently, Ops Rev has developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints, and implements the framework to design MOS 2.0 and its 5 Mission Services; the implementation de-couples interfaces and instances of interaction. In the future, a Mission MOSE will implement the approach and use the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.
Quasielastic scattering with the relativistic Green’s function approach
Meucci, Andrea; Giusti, Carlotta
2015-05-15
A relativistic model for quasielastic (QE) lepton-nucleus scattering is presented. The effects of final-state interactions (FSI) between the ejected nucleon and the residual nucleus are described in the relativistic Green’s function (RGF) model where FSI are consistently described with exclusive scattering using a complex optical potential. The results of the model are compared with experimental results of electron and neutrino scattering.
Modeling tauopathy: a range of complementary approaches.
Hall, Garth F; Yao, Jun
2005-01-03
The large group of neurodegenerative diseases which feature abnormal metabolism and accumulation of tau protein (tauopathies) characteristically produce a multiplicity of cellular and systemic abnormalities in human patients. Understanding the complex pathogenetic mechanisms by which abnormalities in tau lead to systemic neurofibrillary degenerative disease requires the construction and use of model experimental systems in which the behavior of human tau can be analyzed under controlled conditions. In this paper, we survey the ways in which in vitro, cellular and whole-animal models of human tauopathy are being used to add to our knowledge of the pathogenetic mechanisms underlying these conditions. In particular, we focus on the complementary advantages and limitations of various approaches to constructing tauopathy models presently in use with respect to those of murine transgenic tauopathy models.
Elements of a function analytic approach to probability.
Ghanem, Roger Georges; Red-Horse, John Robert
2008-02-01
We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.
Proteomic approaches to dissect platelet function: half the story
Gnatenko, Dmitri V.; Perrotta, Peter L.; Bahou, Wadie F.
2006-01-01
Platelets play critical roles in diverse hemostatic and pathologic disorders and are broadly implicated in various biological processes that include inflammation, wound healing, and thrombosis. Recent progress in high-throughput mRNA and protein profiling techniques has advanced our understanding of the biological functions of platelets. Platelet proteomics has been adopted to decode the complex processes that underlie platelet function by identifying novel platelet-expressed proteins, dissecting mechanisms of signal or metabolic pathways, and analyzing functional changes of the platelet proteome in normal and pathologic states. The integration of transcriptomics and proteomics, coupled with progress in bioinformatics, provides novel tools for dissecting platelet biology. In this review, we focus on current advances in platelet proteomic studies, with emphasis on the importance of parallel transcriptomic studies to optimally dissect platelet function. Applications of these global profiling approaches to investigate platelet genetic diseases and platelet-related disorders are also addressed. PMID:16926286
Computational approaches for rational design of proteins with novel functionalities
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
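The P-spline idea the abstract describes (ridge-type shrinkage on a spline basis, with the smoothing parameter λ chosen by generalized cross-validation) can be sketched as follows. This is a minimal illustration using a truncated power basis and synthetic data, not the paper's functional coefficient setup; the knot count and grid of λ values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a smooth signal plus noise.
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Truncated power basis: [1, x, (x - k)_+ for interior knots k].
knots = np.linspace(0, 1, 22)[1:-1]
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0.0) for k in knots])

# The penalty acts only on the knot coefficients, not the polynomial
# part -- the ridge-type global shrinkage of the P-spline approach.
D = np.diag([0.0, 0.0] + [1.0] * len(knots))

def fit(lam):
    # Ridge solve: (B'B + lam*D) beta = B'y
    beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    # Effective degrees of freedom = trace of the hat matrix.
    df = np.trace(np.linalg.solve(B.T @ B + lam * D, B.T @ B))
    rss = np.sum((y - B @ beta) ** 2)
    gcv = n * rss / (n - df) ** 2   # generalized cross-validation score
    return beta, gcv

# Choose lambda by minimizing GCV over a grid.
grid = 10.0 ** np.arange(-6, 3, dtype=float)
scores = [fit(lam)[1] for lam in grid]
lam_best = grid[int(np.argmin(scores))]
beta_best, _ = fit(lam_best)
```

Allowing a different λ per coefficient function, as the abstract notes, would amount to replacing the single `lam * D` with a sum of per-block penalties.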
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2016-02-08
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
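The FPCA step described above can be sketched on densely sampled curves: center the data matrix and take its SVD, so the right singular vectors approximate the discretized eigenfunctions and the scaled left singular vectors are the component scores. This grid-based simplification stands in for the basis-function algorithm of the paper, and the curves below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic functional data: 50 noisy curves on a common time grid,
# generated from two underlying modes of variation.
t = np.linspace(0, 1, 101)
true_scores = rng.normal(size=(50, 2))
curves = (true_scores[:, [0]] * np.sin(2 * np.pi * t)
          + true_scores[:, [1]] * np.cos(2 * np.pi * t)
          + rng.normal(scale=0.1, size=(50, t.size)))

# Grid-based FPCA: center the curves, then SVD.
mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)

eigenfunctions = Vt[:2]                  # discretized principal components
component_scores = U[:, :2] * s[:2]      # low-dimensional representation
explained = s[:2] ** 2 / np.sum(s ** 2)  # variance explained per component
```

In the multiple-set (FMCCA) direction, one would extract such components from each subject's data matrix and rotate them to maximize cross-subject association instead of within-set variance; the proposed unified criterion trades off these two objectives.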
Accuracy of functional surfaces on comparatively modeled protein structures
Zhao, Jieling; Dundas, Joe; Kachalo, Sema; Ouyang, Zheng; Liang, Jie
2012-01-01
Identification and characterization of protein functional surfaces are important for predicting protein function, understanding enzyme mechanisms, and docking small compounds to proteins. As the speed of accumulation of protein sequence information far exceeds that of structures, constructing accurate models of protein functional surfaces and identifying their key elements becomes increasingly important. A promising approach is to build comparative models from sequences using known structural templates, such as those obtained from structural genomics projects. Here we assess how well this approach works in modeling binding surfaces. By systematically building three-dimensional comparative models of proteins using Modeller, we determine how accurately functional surfaces can be reproduced. We use an alpha-shape based pocket algorithm to compute all pockets on the modeled structures, and conduct a large-scale computation of similarity measurements (pocket RMSD and fraction of functional atoms captured) for 26,590 modeled enzyme protein structures. Overall, we find that when the sequence fragment of the binding surfaces has more than 45% identity to that of the template protein, the modeled surfaces have on average an RMSD of 0.5 Å, and contain 48% or more of the binding surface atoms, with nearly all of the important atoms in the signatures of binding pockets captured. PMID:21541664
Approaches to modelling hydrology and ecosystem interactions
NASA Astrophysics Data System (ADS)
Silberstein, Richard P.
2014-05-01
As the pressures of industry, agriculture and mining on groundwater resources increase, there is a burgeoning unmet need to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of the impacts of water use intensification on water-dependent ecosystems under a changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water-related stresses that may develop around mining and well operations; climate stresses, such as rainfall and temperature; biological stresses, such as diseases and invasive species; and competition, such as encroachment from other competing land uses. An indirect water impact could be, for example, a change in groundwater conditions that affects the stream flow regime, and hence aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology, with a view to developing a conceptual framework linking a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Modeling of human artery tissue with probabilistic approach.
Xiong, Linfei; Chui, Chee-Kong; Fu, Yabo; Teo, Chee-Leong; Li, Yao
2015-04-01
Accurate modeling of biological soft tissue properties is vital for realistic medical simulation. Mechanical response of biological soft tissue always exhibits a strong variability due to the complex microstructure and different loading conditions. The inhomogeneity in human artery tissue is modeled with a computational probabilistic approach by assuming that the instantaneous stress at a specific strain varies according to a normal distribution. Material parameters of the artery tissue, which are modeled with a combined logarithmic and polynomial energy equation, are represented by a statistical function with normal distribution. Mean and standard deviation of the material parameters are determined using a genetic algorithm (GA) and the inverse mean-value first-order second-moment (IMVFOSM) method, respectively. This nondeterministic approach was verified using computer simulation based on the Monte-Carlo (MC) method. The cumulative distribution function (CDF) of the MC simulation corresponds well with that of the experimental stress-strain data, and the probabilistic approach is further validated using data from other studies. By taking into account the inhomogeneous mechanical properties of human biological tissue, the proposed method is suitable for realistic virtual simulation as well as an accurate computational approach for medical device validation.
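The Monte-Carlo verification step can be sketched as follows. This is a toy illustration only: a simple exponential stress-strain law stands in for the paper's combined logarithmic/polynomial energy form, and the parameter mean and standard deviation are invented rather than GA/IMVFOSM estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative exponential stress-strain law (a stand-in for the paper's
# energy equation); the stiffness parameter c is normally distributed to
# model tissue inhomogeneity.
def stress(strain, c, b=8.0):
    return c * (np.exp(b * strain) - 1.0)

mu_c, sd_c = 20.0, 3.0          # assumed mean/std of the material parameter (kPa)
n_samples = 50_000
c = rng.normal(mu_c, sd_c, n_samples)

s = stress(0.10, c)             # Monte Carlo stress ensemble at 10% strain
s_sorted = np.sort(s)
cdf = np.arange(1, n_samples + 1) / n_samples   # empirical CDF of stress

print(f"mean stress ~ {s.mean():.1f} kPa, std ~ {s.std():.1f} kPa")
```

The empirical CDF of the sampled stresses is what would be compared against the experimental stress-strain CDF to verify the probabilistic model.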
Multiscale model approach for magnetization dynamics simulations
NASA Astrophysics Data System (ADS)
De Lucia, Andrea; Krüger, Benjamin; Tretiakov, Oleg A.; Kläui, Mathias
2016-11-01
Simulations of magnetization dynamics in a multiscale environment enable the rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate spin wave transmission across regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with frequency lower than a certain threshold set by the coarse-scale micromagnetic model, with no noticeable attenuation due to the interface between the models. As a comparison to exact analytical theory, we show that in a system with a Dzyaloshinskii-Moriya interaction leading to spin spirals, the simulated multiscale result is in good quantitative agreement with the analytical calculation.
Regularization of turbulence - a comprehensive modeling approach
NASA Astrophysics Data System (ADS)
Geurts, B. J.
2011-12-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent way. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.
Recreating an esthetically and functionally acceptable dentition: a multidisciplinary approach.
Goyal, Mukesh Kumar; Goyal, Shelly; Hegde, Veena; Balkrishana, Dhanasekar; Narayana, Aparna I
2013-01-01
Patients today demand a youthful, attractive smile with comfortable functional acceptance. The complete oral rehabilitation of patients with a functionally compromised dentition frequently involves a multidisciplinary approach and presents a considerable clinical challenge. To a great extent, proper patient selection and careful interdisciplinary treatment planning, including acknowledgment of the patient's perceived needs, reasons for seeking services, financial ability, and socioeconomic profile, can govern the predictability of successful restorations. This clinical report describes a successful interdisciplinary approach for the management of a severely worn dentition with reduced vertical dimension of occlusion. Treatment modalities included periodontal crown lengthening procedures, endodontic treatment followed by post and core restorations, and prosthetic rehabilitation for severe tooth surface loss and reduced vertical dimension of occlusion comprising metal-ceramic restorations in esthetic zones and full-metal restorations in posterior regions.
NASA Astrophysics Data System (ADS)
Auger, P. A.; Diaz, F.; Ulses, C.; Estournel, C.; Neveux, J.; Joux, F.; Pujo-Pay, M.; Naudin, J. J.
2010-12-01
Low-salinity water (LSW, Salinity < 37.5) lenses detached from the Rhone River plume under specific wind conditions tend to favour biological productivity and potentially a transfer of energy to higher trophic levels in the Gulf of Lions (GoL). A field cruise conducted in May 2006 (BIOPRHOFI) followed some LSW lenses using a lagrangian strategy. A thorough analysis of the available data set enabled us to further improve our understanding of the functioning of LSW lenses and their potential influence on marine ecosystems. Through an innovative 3-D coupled hydrodynamic-biogeochemical modelling approach, a specific calibration dedicated to river plume ecosystems was then proposed and validated on field data. Exploring the role of ecosystems in the particulate organic carbon (POC) export and deposition on the shelf, a sensitivity analysis to the particulate organic matter inputs from the Rhone River was carried out from 1 April to 15 July 2006. Over such a typical end-of-spring period marked by moderate floods, the main deposition area of POC was identified alongshore between 0 and 50 m depth on the GoL, extending the Rhone prodelta to the west towards the exit of the shelf. Moreover, the main deposition area of terrestrial POC was found on the prodelta region, which confirms recent results from sediment data. The averaged daily deposition of particulate organic carbon over the whole GoL is estimated by the model at between 40 and 80 mgC/m2, which is in the range of previous estimates. The role of ecosystems in the POC export toward sediments or offshore areas was actually highlighted, and feedbacks between ecosystems and particulate organic matter are proposed to explain the paradoxical model results of the sensitivity test. In fact, the conversion of organic matter into living organisms would increase the retention of organic matter in the food web, and this matter transfer along the food web could explain the minor quantity of POC of marine origin observed in the
Fuzzy set approach to quality function deployment: An investigation
NASA Technical Reports Server (NTRS)
Masud, Abu S. M.
1992-01-01
The final report of the 1992 NASA/ASEE Summer Faculty Fellowship at the Space Exploration Initiative Office (SEIO) in Langley Research Center is presented. Quality Function Deployment (QFD) is a process focused on facilitating the integration of the customer's voice in the design and development of a product or service. Various inputs, in the form of judgements and evaluations, are required during the QFD analyses. All the input variables in these analyses are treated as numeric variables. The purpose of the research was to investigate how QFD analyses can be performed when some or all of the input variables are treated as linguistic variables with values expressed as fuzzy numbers. The reason for this consideration is that human judgement, perception, and cognition are often ambiguous and are better represented as fuzzy numbers. Two approaches for using fuzzy sets in QFD have been proposed. In both cases, all the input variables are considered as linguistic variables with values indicated as linguistic expressions. These expressions are then converted to fuzzy numbers. The difference between the two approaches is due to how the QFD computations are performed with these fuzzy numbers. In Approach 1, the fuzzy numbers are first converted to their equivalent crisp scores and then the QFD computations are performed using these crisp scores. As a result, the outputs of this approach are crisp numbers, similar to those in traditional QFD. In Approach 2, all the QFD computations are performed with the fuzzy numbers and the outputs are also fuzzy numbers. Both approaches have been explained with the help of illustrative examples of QFD application. Approach 2 has also been applied in a QFD application exercise in SEIO, involving a 'mini moon rover' design. The mini moon rover is a proposed tele-operated vehicle that will traverse and perform various tasks, including autonomous operations, on the moon surface. The output of the moon rover application exercise is a
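Approach 1 — convert linguistic ratings to fuzzy numbers, defuzzify to crisp scores, then run the usual QFD weighted-sum computation — can be sketched as below. The linguistic terms, triangular membership values, and the small relationship matrix are all illustrative, not taken from the report.

```python
# Minimal sketch of "Approach 1": linguistic ratings become triangular
# fuzzy numbers, which are defuzzified to crisp scores before the usual
# QFD weighted-sum computation. Terms and values are illustrative.
FUZZY = {                        # (low, mode, high) on a 0-10 scale
    "weak":   (0.0, 1.0, 3.0),
    "medium": (3.0, 5.0, 7.0),
    "strong": (7.0, 9.0, 10.0),
}

def centroid(tfn):
    """Crisp score of a triangular fuzzy number (centroid of the triangle)."""
    low, mode, high = tfn
    return (low + mode + high) / 3.0

# Relationship matrix: customer needs (rows) x technical characteristics (cols).
relationships = [["strong", "weak"],
                 ["medium", "strong"]]
importance = [0.7, 0.3]          # crisp customer-importance weights

# Technical importance = importance-weighted column sums of crisp scores.
scores = [sum(w * centroid(FUZZY[row[j]])
              for w, row in zip(importance, relationships))
          for j in range(2)]
print(scores)
```

Approach 2 would instead carry the (low, mode, high) triples through the arithmetic via fuzzy addition and scalar multiplication, defuzzifying (if at all) only at the end.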
Evaluation of the storage function model parameter characteristics
NASA Astrophysics Data System (ADS)
Sugiyama, Hironobu; Kadoya, Mutsumi; Nagai, Akihiro; Lansey, Kevin
1997-04-01
The storage function hydrograph model is one of the most commonly used models for flood runoff analysis in Japan. This paper studies the generality of the approach and its application to Japanese basins. Through a comparison of the basic equations for the models, the storage function model parameters, K, P, and T1, are shown to be related to the terms, k and p, in the kinematic wave model. This analysis showed that P and p are identical and K and T1 can be related to k, the basin area and its land use. To apply the storage function model throughout Japan, regional parameter relationships for K and T1 were developed for different land-use conditions using data from 22 watersheds and 91 flood events. These relationships combine the kinematic wave parameters with general topographic information using Hack's Law. The sensitivity of the parameters and their physical significance are also described.
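The basic storage function routing that underlies the parameters K, P, and T1 can be sketched as below: storage follows S = K*Q**P, combined with continuity dS/dt = r(t - T1) - Q, where r is the lagged effective rainfall. The parameter values and rainfall series are illustrative only, not the regional relationships derived in the paper.

```python
# Minimal sketch of the storage function runoff model: S = K * Q**P with
# continuity dS/dt = r(t - T1) - Q, stepped with explicit Euler.
# K, P, T1 and the rainfall series are illustrative values.
K, P, T1 = 30.0, 0.6, 2          # T1 as a whole number of time steps (h)
dt = 1.0                          # time step (h)

rain = [0.0] * 3 + [10.0] * 6 + [0.0] * 15   # effective rainfall (mm/h)
S, Q = 0.0, 0.0
hydrograph = []
for t in range(len(rain)):
    r = rain[t - T1] if t >= T1 else 0.0      # lagged rainfall input
    S = max(S + dt * (r - Q), 0.0)            # continuity equation
    Q = (S / K) ** (1.0 / P)                  # invert S = K * Q**P
    hydrograph.append(Q)

peak = max(hydrograph)
print(f"peak discharge ~ {peak:.2f} mm/h at t = {hydrograph.index(peak)} h")
```

The hydrograph rises while lagged rainfall exceeds discharge and recedes once the input stops, reproducing the characteristic flood-runoff shape the model is used for.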
Merging Digital Surface Models Implementing Bayesian Approaches
NASA Astrophysics Data System (ADS)
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be mitigated by introducing a priori estimates. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
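The core Bayesian fusion step can be sketched as a precision-weighted combination of the two DSMs with a prior on the surface height. This is a generic Gaussian-fusion sketch under invented noise levels; the paper's entropy-based smooth-roof prior and actual sensor variances are not reproduced here.

```python
import numpy as np

# Minimal sketch of precision-weighted Bayesian fusion of two DSM height
# grids with a prior; variances and data are illustrative, not the
# WorldView-1/Pleiades values used in the paper.
rng = np.random.default_rng(1)
truth = np.full((50, 50), 100.0)                    # flat roof at 100 m

dsm1 = truth + rng.normal(0.0, 2.0, truth.shape)    # noisier source
dsm2 = truth + rng.normal(0.0, 1.0, truth.shape)    # more precise source
var1, var2 = 2.0 ** 2, 1.0 ** 2

prior_mean, prior_var = 100.0, 5.0 ** 2             # e.g. from a smoothness prior

# Posterior for a Gaussian mean with a Gaussian prior: precisions add, and
# the posterior mean is the precision-weighted average of prior and data.
post_prec = 1.0 / prior_var + 1.0 / var1 + 1.0 / var2
merged = (prior_mean / prior_var + dsm1 / var1 + dsm2 / var2) / post_prec

rmse = lambda d: float(np.sqrt(np.mean((d - truth) ** 2)))
print(rmse(dsm1), rmse(dsm2), rmse(merged))
```

Because precisions add, the merged DSM is more accurate than either input on its own, which is the quantitative improvement the validation against GNSS RTK check points measures.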
Casimir force in brane worlds: Coinciding results from Green's and zeta function approaches
Linares, Roman; Morales-Tecotl, Hugo A.; Pedraza, Omar
2010-06-15
Casimir force encodes the structure of the field modes as vacuum fluctuations and so it is sensitive to the extra dimensions of brane worlds. Now, in flat spacetimes of arbitrary dimension the two standard approaches to the Casimir force, Green's function, and zeta function yield the same result, but for brane world models this was only assumed. In this work we show that both approaches yield the same Casimir force in the case of universal extra dimensions and Randall-Sundrum scenarios with one and two branes added by p compact dimensions. Essentially, the details of the mode eigenfunctions that enter the Casimir force in the Green's function approach get removed due to their orthogonality relations with a measure involving the right hypervolume of the plates, and this leaves just the contribution coming from the zeta function approach. The present analysis corrects previous results showing a difference between the two approaches for the single brane Randall-Sundrum; this was due to an erroneous hypervolume of the plates introduced by the authors when using the Green's function. For all the models we discuss here, the resulting Casimir force can be neatly expressed in terms of two four-dimensional Casimir force contributions: one for the massless mode and the other for a tower of massive modes associated with the extra dimensions.
Connectotyping: model based fingerprinting of the functional connectome.
Miranda-Dominguez, Oscar; Mills, Brian D; Carpenter, Samuel D; Grant, Kathleen A; Kroenke, Christopher D; Nigg, Joel T; Fair, Damien A
2014-01-01
A better characterization of how an individual's brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called "connectotype", or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model's ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach.
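The linear model at the heart of the connectotype — each region's timeseries described as a weighted sum of the other regions, fitted by least squares — can be sketched as below. The data here are synthetic with one planted dependency, not rs-fcMRI recordings; the region count and noise level are arbitrary.

```python
import numpy as np

# Minimal sketch of the connectotype idea: each region's timeseries is
# modeled as a weighted sum of all other regions, fitted by least squares.
rng = np.random.default_rng(7)
n_regions, n_frames = 10, 300
X = rng.normal(size=(n_frames, n_regions))          # region-by-time activity
# Plant a known dependency: region 0 follows regions 1 and 2 plus noise.
X[:, 0] = 0.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n_frames)

betas = np.zeros((n_regions, n_regions))            # model-based connectivity matrix
for i in range(n_regions):
    others = np.delete(X, i, axis=1)                # exclude the region itself
    w, *_ = np.linalg.lstsq(others, X[:, i], rcond=None)
    betas[i, np.arange(n_regions) != i] = w         # diagonal stays zero

# The fitted model predicts each region's timeseries from its neighbors.
pred = X @ betas.T
```

The rows of `betas` form the personalized connectivity matrix; identifying an individual then amounts to asking which subject's matrix best predicts a new scan's timeseries.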
Hierarchical approaches for systems modeling in cardiac development.
Gould, Russell A; Aboulmouna, Lina M; Varner, Jeffrey D; Butcher, Jonathan T
2013-01-01
Ordered cardiac morphogenesis and function are essential for all vertebrate life. The heart begins as a simple contractile tube, but quickly grows and morphs into a multichambered pumping organ complete with valves, while maintaining regulation of blood flow and nutrient distribution. Though not identical, cardiac morphogenesis shares many molecular and morphological processes across vertebrate species. Quantitative data across multiple time and length scales have been gathered through decades of reductionist single variable analyses. These range from detailed molecular signaling pathways at the cellular levels to cardiac function at the tissue/organ levels. However, none of these components act in true isolation from others, and each, in turn, exhibits short- and long-range effects in both time and space. With the absence of a gene, entire signaling cascades and genetic profiles may be shifted, resulting in complex feedback mechanisms. Also taking into account local microenvironmental changes throughout development, it is apparent that a systems level approach is an essential resource to accelerate information generation concerning the functional relationships across multiple length scales (molecular data vs physiological function) and structural development. In this review, we discuss relevant in vivo and in vitro experimental approaches, compare different computational frameworks for systems modeling, and the latest information about systems modeling of cardiac development. Finally, we conclude with some important future directions for cardiac systems modeling.
Modeling Negotiation by a Participatory Approach
NASA Astrophysics Data System (ADS)
Torii, Daisuke; Ishida, Toru; Bousquet, François
In a participatory approach by social scientists, role playing games (RPG) are effectively used to understand the real thinking and behavior of stakeholders, but RPG is not sufficient to handle a dynamic process like negotiation. In this study, a participatory simulation where user-controlled avatars and autonomous agents coexist is introduced into the participatory approach for modeling negotiation. To establish a modeling methodology for negotiation, we have tackled the following two issues. First, to enable domain experts to concentrate on interaction design for participatory simulation, we have adopted an architecture in which an interaction layer controls agents and have defined three types of interaction descriptions to be written (interaction protocol, interaction scenario and avatar control scenario). Second, to enable domain experts and stakeholders to capitalize on participatory simulation, we have established a four-step process for acquiring a negotiation model: 1) surveys and interviews with stakeholders, 2) RPG, 3) interaction design, and 4) participatory simulation. Finally, we discuss our methodology through a case study of agricultural economics in northeast Thailand.
On an approach for computing the generating functions of the characters of simple Lie algebras
NASA Astrophysics Data System (ADS)
Fernández Núñez, José; García Fuertes, Wifredo; Perelomov, Askold M.
2014-04-01
We describe a general approach to obtain the generating functions of the characters of simple Lie algebras which is based on the theory of the quantum trigonometric Calogero-Sutherland model. We show how the method works in practice by means of a few examples involving some low rank classical algebras.
Combining formal and functional approaches to topic structure.
Zellers, Margaret; Post, Brechtje
2012-03-01
Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to adopt the insights of both PTI's qualitative analysis and EP's quantitative analysis and combine them into a multiple-methods approach. One realm in which it is possible to combine these frameworks is in the analysis of discourse topic structure and the prosodic cues relevant to it. By combining a quantitative and a qualitative approach to discourse topic structure, it is possible to give a better account of the observed variation in prosody, for example in the case of fundamental frequency (F0) peak timing, which can be explained in terms of pitch accent distribution over different topic structure categories. Similarly, local and global patterns in speech rate variation can be better explained and motivated by adopting insights from both PTI and EP in the study of topic structure. Combining PTI and EP can provide better accounts of speech data as well as opening up new avenues of investigation which would not have been possible in either approach alone.
Muñoz-Martínez, Amanda M; Coletti, Juan P
2015-01-01
Functional Analytic Psychotherapy (FAP) is a therapeutic approach developed in the 'third wave therapies' context. FAP is characterized by the use of the therapeutic relationship, and the behaviors emitted within it, to improve clients' daily life functioning. This therapeutic model is supported by behavior analysis principles and the philosophy of functional contextualism. FAP proposes that clients' in-session behaviors are functionally equivalent to those out of session; therefore, when therapists respond contingently to clients' behaviors in session, they promote and increase improvements in the natural setting. This article presents the main features of FAP, its philosophical roots, achievements, and the research challenges involved in establishing FAP as an independent evidence-based treatment.
Development of a structured approach for decomposition of complex systems on a functional basis
NASA Astrophysics Data System (ADS)
Yildirim, Unal; Felician Campean, I.
2014-07-01
The purpose of this paper is to present the System State Flow Diagram (SSFD) as a structured and coherent methodology for decomposing a complex system on a solution-independent functional basis. The paper starts by reviewing common function modelling frameworks in the literature and discusses the practical requirements of the SSFD in the context of the current literature and current approaches in industry. The proposed methodology is illustrated through the analysis of a case study: design analysis of a generic Bread Toasting System (BTS).
Merging RANS & LES approaches in submesoscale modeling
NASA Astrophysics Data System (ADS)
Fock, B. H.; Schluenzen, K. H.
2010-09-01
Merging LES and RANS approaches is important for extending the application range of mesoscale models to the sub-mesoscale. Hence, many traditional mesoscale modeling groups are currently working on adding LES capabilities to their models. To investigate the differences that occur when switching from RANS to LES approaches, simulations with METRAS and METRAS-LES (Fock, 2007) are presented. These differences are investigated in terms of effects caused by the choice of the computational grid and the sub-grid scale closures. Simulations of convective boundary layers on two different grids are compared to investigate the influence of vertical grid spacing and extension. One simulation is carried out on a high-resolution, vertically homogeneous grid and the other on a vertically stretched grid, which has coarser resolution at higher altitudes. The stretched grid is defined vertically as it would be in the standard setup for the mesoscale model. Hence, this investigation shows to what extent the eddy-resolving capabilities of an LES model are affected by the transition to a grid which is vertically the same as typically used in mesoscale modeling. The differences that occur when using different approaches for subgrid-scale turbulence are quantified and compared with the effects caused by the computational grid. Additionally, some details of the LES SGS closure used (Deardorff, 1980) are investigated; these concern the importance of the reduced characteristic filter length scale under stable stratification. The main focus, however, is on comparing RANS and LES and discussing their combination in a mixed turbulence scheme, which applies the LES closure in the atmospheric boundary layer and a RANS-based turbulence model in the stable atmosphere above. References: Deardorff J. W. (1980): Stratocumulus-capped mixed layers derived from a three-dimensional model. Boundary-Layer Meteorology. 18. (4). 495-527. DOI:10.1007/BF00119502 Fock B. H. (2007): METRAS
Nuclear level density: Shell-model approach
NASA Astrophysics Data System (ADS)
Sen'kov, Roman; Zelevinsky, Vladimir
2016-06-01
Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.
Thomas, Holly N; Thurston, Rebecca C
2016-05-01
A satisfying sex life is an important component of overall well-being, but sexual dysfunction is common, especially in midlife women. The aim of this review is (a) to define sexual function and dysfunction, (b) to present theoretical models of female sexual response, (c) to examine longitudinal studies of how sexual function changes during midlife, and (d) to review treatment options. Four types of female sexual dysfunction are currently recognized: Female Orgasmic Disorder, Female Sexual Interest/Arousal Disorder, Genito-Pelvic Pain/Penetration Disorder, and Substance/Medication-Induced Sexual Dysfunction. However, optimal sexual function transcends the simple absence of dysfunction. A biopsychosocial approach that simultaneously considers physical, psychological, sociocultural, and interpersonal factors is necessary to guide research and clinical care regarding women's sexual function. Most longitudinal studies reveal an association between advancing menopause status and worsening sexual function. Psychosocial variables, such as availability of a partner, relationship quality, and psychological functioning, also play an integral role. Future directions for research should include deepening our understanding of how sexual function changes with aging and developing safe and effective approaches to optimizing women's sexual function with aging. Overall, holistic, biopsychosocial approaches to women's sexual function are necessary to fully understand and treat this key component of midlife women's well-being.
A functional approach to movement analysis and error identification in sports and physical education
Hossner, Ernst-Joachim; Schiebl, Frank; Göhner, Ulrich
2015-01-01
In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is not anymore the “ideal” movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognizing constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioral shaping, to cognitive models of modular effect-related motor control as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognizing functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors. PMID:26441717
A modular approach to language production: models and facts.
Valle-Lisboa, Juan C; Pomi, Andrés; Cabana, Álvaro; Elvevåg, Brita; Mizraji, Eduardo
2014-06-01
Numerous cortical disorders affect language. We explore the connection between the observed language behavior and the underlying substrates by adopting a neurocomputational approach. To represent the observed trajectories of the discourse in patients with disorganized speech and in healthy participants, we design a graphical representation for the discourse as a trajectory that allows us to visualize and measure the degree of order in the discourse as a function of the disorder of the trajectories. Our work assumes that many of the properties of language production and comprehension can be understood in terms of the dynamics of modular networks of neural associative memories. Based upon this assumption, we connect three theoretical and empirical domains: (1) neural models of language processing and production, (2) statistical methods used in the construction of functional brain images, and (3) corpus linguistic tools, such as Latent Semantic Analysis (henceforth LSA), that are used to discover the topic organization of language. We show how the neurocomputational models intertwine with LSA and the mathematical basis of functional neuroimaging. Within this framework we describe the properties of a context-dependent neural model, based on matrix associative memories, that performs goal-oriented linguistic behavior. We link these matrix associative memory models with the mathematics that underlie functional neuroimaging techniques and present the "functional brain images" emerging from the model. This provides us with a completely "transparent box" with which to analyze the implication of some statistical images. Finally, we use these models to explore the possibility that functional synaptic disconnection can lead to an increase in connectivity between the representations of concepts that could explain some of the alterations in discourse displayed by patients with schizophrenia.
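The matrix associative memories that the modular language model builds on can be sketched in their simplest form: associations stored as summed outer products and retrieved by matrix-vector multiplication. The patterns below are illustrative random vectors, not linguistic representations, and the context-dependent extension of the paper is omitted.

```python
import numpy as np

# Minimal sketch of a matrix associative memory: key-value associations
# are stored as summed outer products and retrieved by cueing the matrix
# with a key. Patterns are illustrative random vectors.
rng = np.random.default_rng(3)
dim, n_pairs = 64, 5

keys = rng.standard_normal((n_pairs, dim))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)    # unit-norm cues
values = rng.standard_normal((n_pairs, dim))

# Store: M accumulates one outer product per association.
M = sum(np.outer(v, k) for k, v in zip(keys, values))

# Retrieve: cueing with a stored key returns its value plus small
# crosstalk from the other (nearly orthogonal) associations.
recalled = M @ keys[0]
```

In high dimensions random keys are nearly orthogonal, so the crosstalk terms stay small and the recalled vector closely matches the stored value; modular networks of such memories are what the model composes to produce goal-oriented linguistic behavior.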
A validated approach for modeling collapse of steel structures
NASA Astrophysics Data System (ADS)
Saykin, Vitaliy Victorovich
A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during the lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are
Multiscale approach to equilibrating model polymer melts
NASA Astrophysics Data System (ADS)
Svaneborg, Carsten; Karimi-Varzaneh, Hossein Ali; Hojdis, Nils; Fleck, Frank; Everaers, Ralf
2016-09-01
We present an effective and simple multiscale method for equilibrating Kremer-Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed to be computationally effective at each scale. Density fluctuations in the melt structure above the tube scale are minimized through a Monte Carlo simulated annealing of a lattice polymer model. Subsequently, the melt structure below the tube scale is equilibrated via the Rouse dynamics of a force-capped Kremer-Grest model that allows chains to partially interpenetrate. Finally, the Kremer-Grest force field is introduced to freeze the topological state and enforce correct monomer packing. We generate 15 melts of 500 chains of 10,000 beads for varying chain stiffness as well as a number of melts with 1,000 chains of 15,000 monomers. To validate the equilibration process we study the time evolution of bulk, collective, and single-chain observables at the monomeric, mesoscopic, and macroscopic length scales. Extension of the present method to longer, branched, or polydisperse chains, and/or larger system sizes is straightforward.
Energy function-based approaches to graph coloring.
Di Blas, A; Jagota, A; Hughey, R
2002-01-01
We describe an approach to optimization based on a multiple-restart quasi-Hopfield network where the only problem-specific knowledge is embedded in the energy function that the algorithm tries to minimize. We apply this method to three different variants of the graph coloring problem: the minimum coloring problem, the spanning subgraph k-coloring problem, and the induced subgraph k-coloring problem. Though Hopfield networks have been applied in the past to the minimum coloring problem, our encoding is more natural and compact than almost all previous ones. In particular, we use k-state neurons while almost all previous approaches use binary neurons. This reduces the number of connections in the network from (Nk)^2 to N^2 asymptotically and also circumvents a problem in earlier approaches, that of multiple colors being assigned to a single vertex. Experimental results show that our approach compares favorably with other algorithms, even nonneural ones specifically developed for the graph coloring problem.
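The k-state encoding above can be sketched with a toy energy-descent loop: one k-state variable per vertex, and an energy equal to the number of monochromatic edges. This is a bare deterministic stand-in for the paper's multiple-restart quasi-Hopfield dynamics; the graph and update schedule are invented for illustration.

```python
def coloring_energy(adj, colors):
    # number of monochromatic edges -- the quantity the network minimizes
    return sum(1 for u, nbrs in enumerate(adj)
                 for v in nbrs if v > u and colors[u] == colors[v])

def kstate_descent(adj, k, sweeps=20):
    """Each vertex is a single k-state 'neuron'; every sweep moves each vertex
    to the color with fewest conflicts among its neighbors (a simplified,
    deterministic stand-in for the quasi-Hopfield dynamics)."""
    colors = [0] * len(adj)
    for _ in range(sweeps):
        for u in range(len(adj)):
            colors[u] = min(range(k),
                            key=lambda c: sum(colors[v] == c for v in adj[u]))
    return colors

# 4-cycle 0-1-2-3-0, which is 2-colorable
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
cols = kstate_descent(adj, k=2)
print(coloring_energy(adj, cols))  # -> 0
```

Because each vertex holds one k-valued state rather than k binary neurons, a vertex can never carry two colors at once, which is exactly the constraint-violation mode the paper's encoding eliminates.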
Sensorimotor integration for functional recovery and the Bobath approach.
Levin, Mindy F; Panturin, Elia
2011-04-01
Bobath therapy is used to treat patients with neurological disorders. Bobath practitioners use hands-on approaches to elicit and reestablish typical movement patterns through therapist-controlled sensorimotor experiences within the context of task accomplishment. One aspect of Bobath practice, the recovery of sensorimotor function, is reviewed within the framework of current motor control theories. We focus on the role of sensory information in movement production, the relationship between posture and movement and concepts related to motor recovery and compensation with respect to this therapeutic approach. We suggest that a major barrier to the evaluation of the therapeutic effectiveness of the Bobath concept is the lack of a unified framework for both experimental identification and treatment of neurological motor deficits. More conclusive analysis of therapeutic effectiveness requires the development of specific outcomes that measure movement quality.
An ecological approach to language development: an alternative functionalism.
Dent, C H
1990-11-01
I argue for a new functionalist approach to language development, an ecological approach. A realist orientation is used that locates the causes of language development neither in the child nor in the language environment but in the functioning of perceptual systems that detect language-world relationships and use them to guide attention and action. The theory requires no concept of innateness, thus avoiding problems inherent in either the innate ideas or the genes-as-causal-programs explanations of the source of structure in language. An ecological explanation of language is discussed in relation to concepts and language, language as representation, problems in early word learning, metaphor, and syntactic development. Finally, problems incurred in using the idea of innateness are summarized: History prior to the chosen beginning point is ignored, data on organism-environment mutuality are not collected, and the explanation claims no effect of learning, which cannot be tested empirically.
Garcia-Aldea, David; Alvarellos, J. E.
2008-02-15
We propose a kinetic energy density functional scheme with nonlocal terms based on the von Weizsaecker functional, instead of the more traditional approach where the nonlocal terms have the structure of the Thomas-Fermi functional. The proposed functionals recover the exact kinetic energy and reproduce the linear response function of homogeneous electron systems. In order to assess their quality, we have tested the total kinetic energies as well as the kinetic energy density for atoms. The results show that these nonlocal functionals give results as good as those of the most sophisticated functionals in the literature. The proposed scheme for constructing the functionals represents a step forward in the field of fully nonlocal kinetic energy functionals, because they are capable of giving better local behavior than the semilocal functionals, yielding at the same time accurate results for total kinetic energies. Moreover, the functionals enjoy the possibility of being evaluated as a single integral in momentum space if an adequate reference density is defined, and then quasilinear scaling for the computational cost can be achieved.
NASA Astrophysics Data System (ADS)
Nocera, A.; Alvarez, G.
2016-11-01
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. This paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper then studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
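The Krylov-space machinery underlying the correction-vector calculation can be illustrated outside of DMRG with a plain Lanczos tridiagonalization of a symmetric matrix: extremal eigenvalues of the small tridiagonal projection converge long before the Krylov space exhausts the full dimension. The random 60x60 "Hamiltonian" and step counts below are arbitrary, and this sketch omits everything specific to the paper's DMRG implementation.

```python
import numpy as np

def lanczos(A, v0, m):
    """m-step Lanczos with full reorthogonalization: returns the tridiagonal
    projection T of the symmetric matrix A onto the Krylov space of v0."""
    V = np.zeros((len(v0), m))
    alphas, betas = [], []
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        V[:, j] = v
        w = A @ v
        alphas.append(v @ w)
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # orthogonalize against the basis
        b = np.linalg.norm(w)
        if b < 1e-12:                              # invariant subspace found
            break
        betas.append(b)
        v = w / b
    k = len(alphas)
    return np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)

rng = np.random.default_rng(0)
M = rng.standard_normal((60, 60))
A = (M + M.T) / 2                          # a random symmetric "Hamiltonian"
v0 = rng.standard_normal(60)

gs = np.linalg.eigvalsh(A).min()           # exact lowest eigenvalue
T25 = lanczos(A, v0, 25)                   # extremal Ritz values converge early
T60 = lanczos(A, v0, 60)                   # full-dimension run reproduces the spectrum
print(abs(np.linalg.eigvalsh(T25).min() - gs) < 0.1,
      np.allclose(np.linalg.eigvalsh(T60), np.linalg.eigvalsh(A)))
```

The same decomposition, applied around a shifted, frequency-dependent operator, is what makes Krylov-based correction-vector evaluations cheap relative to a full linear solve.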
The fruits of a functional approach for psychological science.
Stewart, Ian
2016-02-01
The current paper introduces relational frame theory (RFT) as a functional contextual approach to complex human behaviour and examines how this theory has contributed to our understanding of several key phenomena in psychological science. I will first briefly outline the philosophical foundation of RFT and then examine its conceptual basis and core concepts. Thereafter, I provide an overview of the empirical findings and applications that RFT has stimulated in a number of key domains such as language development, linguistic generativity, rule-following, analogical reasoning, intelligence, theory of mind, psychopathology and implicit cognition.
A relaxation-based approach to damage modeling
NASA Astrophysics Data System (ADS)
Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus
2017-01-01
Material models that include softening effects due to, for example, damage and localization share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple the local behavior, described for example by internal variables, at a spatial level; this can be done by taking into account the gradient of the internal variable, which yields mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: appropriate modifications of the relaxed (condensed) energy offer the same advantages as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that demonstrate how the new approach works.
Combinatorial approach to exactly solve the 1D Ising model
NASA Astrophysics Data System (ADS)
Seth, Swarnadeep
2017-01-01
The Ising model is a well-known statistical model which can be solved exactly by various methods, the most familiar being the transfer matrix method. In higher dimensions, the open-boundary case can be more difficult to treat than the periodic-boundary one, yet physically the open-boundary case is more intuitive to study, as it gives a closer view of the real system. We have introduced a new method, called the pairing method, to determine the exact partition function for the simplest case, a 1D Ising lattice. This method reduces the problem to a purely combinatorial one. The study also reveals that it is possible to apply the pairing method to the case of a 2D square lattice. The obtained results agree perfectly with the values in the literature, and this new approach provides algorithmic insight for dealing with such problems.
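For the 1D open chain at zero field, the exact partition function can be checked directly against brute-force enumeration. The sketch below uses the standard closed-form result (equivalent to the transfer-matrix answer) rather than the pairing method itself, which the abstract does not spell out; N and the coupling are chosen small enough to enumerate.

```python
from itertools import product
from math import cosh, exp, isclose

def Z_brute(N, beta, J=1.0):
    # enumerate all 2^N spin configurations of the open chain
    return sum(exp(beta * J * sum(s[i] * s[i + 1] for i in range(N - 1)))
               for s in product((-1, 1), repeat=N))

def Z_exact(N, beta, J=1.0):
    # open-boundary closed form: Z = 2^N (cosh(beta J))^(N - 1)
    return 2**N * cosh(beta * J)**(N - 1)

print(isclose(Z_brute(6, 0.7), Z_exact(6, 0.7), rel_tol=1e-12))  # -> True
```

The closed form follows from summing each of the N - 1 independent bond variables, which is the kind of combinatorial decoupling the pairing method generalizes.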
Novel metal resistance genes from microorganisms: a functional metagenomic approach.
González-Pastor, José E; Mirete, Salvador
2010-01-01
Most of the known metal resistance mechanisms are based on studies of cultured microorganisms, and the abundant uncultured fraction could be an important source of genes responsible for uncharacterized resistance mechanisms. A functional metagenomic approach was selected to recover metal resistance genes from the rhizosphere microbial community of an acid-mine drainage (AMD)-adapted plant, Erica andevalensis, from Rio Tinto, Spain. A total of 13 nickel-resistant clones were isolated and analyzed, encoding hypothetical or conserved hypothetical proteins of uncertain function, or well-characterized proteins not previously reported to be related to nickel resistance. The resistant clones were classified into two groups according to their nickel accumulation properties: those preventing and those favoring metal accumulation. Two clones, encoding putative ABC transporter components and a serine O-acetyltransferase, were found as representatives of each group, respectively.
Approaching nanoscale oxides: models and theoretical methods.
Bromley, Stefan T; Moreira, Ibério de P R; Neyman, Konstantin M; Illas, Francesc
2009-09-01
This tutorial review deals with the rapidly developing area of modelling oxide materials at the nanoscale. Top-down and bottom-up modelling approaches and currently used theoretical methods are discussed with the help of a selection of case studies. We show that the critical oxide nanoparticle size required to be beyond the scale where every atom counts to where structural and chemical properties are essentially bulk-like (the scalable regime) strongly depends on the structural and chemical parameters of the material under consideration. This oxide-dependent behaviour with respect to size has fundamental implications for modelling. Strongly ionic materials such as MgO and CeO2, for example, start to exhibit scalable-to-bulk crystallite-like characteristics for nanoparticles consisting of about 100 ions. For such systems there exists an overlap in nanoparticle size where both top-down and bottom-up theoretical techniques can be applied and the main problem is the choice of the most suitable computational method. However, for more covalent systems such as TiO2 or SiO2 the onset of the scalable regime is still unclear and for intermediate-sized nanoparticles there exists a gap where neither bottom-up nor top-down modelling is fully adequate. In such difficult cases new efforts to design adequate models are required. Further exacerbating these fundamental methodological concerns are oxide nanosystems exhibiting complex electronic and magnetic behaviour. Due to the need for a simultaneous accurate treatment of the atomistic, electronic and spin degrees of freedom for such systems, the top-down vs. bottom-up separation is still large, and only a few studies currently exist.
Stochastic model updating utilizing Bayesian approach and Gaussian process model
NASA Astrophysics Data System (ADS)
Wan, Hua-Ping; Ren, Wei-Xin
2016-03-01
Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Solving the inverse problem through optimization usually brings about issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the IUQ problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce computational cost in two aspects. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, the advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of the powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.
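The Bayesian updating loop can be sketched in miniature. The one-parameter surrogate forward model, the noise level, and the plain random-walk Metropolis sampler below are invented simplifications: the paper's framework uses a Gaussian process surrogate of a full FEM and the DRAM sampler, which adds delayed rejection and adaptive proposals on top of this basic accept/reject loop.

```python
import math, random

# Hypothetical cheap forward model standing in for the expensive FEM:
# first natural frequency of a one-parameter system, f(k) = sqrt(k).
def surrogate(k):
    return math.sqrt(k)

random.seed(1)   # synthetic "measured" responses, true k = 4.0, noise sigma = 0.05
data = [surrogate(4.0) + random.gauss(0, 0.05) for _ in range(20)]

def log_post(k, sigma=0.05):
    if k <= 0:                      # flat prior on k > 0
        return -math.inf
    return -sum((d - surrogate(k))**2 for d in data) / (2 * sigma**2)

# Plain random-walk Metropolis over the stiffness-like parameter k
k, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    cand = k + random.gauss(0, 0.2)
    lp_cand = log_post(cand)
    if math.log(random.random()) < lp_cand - lp:
        k, lp = cand, lp_cand
    samples.append(k)

post = samples[1000:]               # discard burn-in
print(sum(post) / len(post))        # posterior mean, close to the true k = 4.0
```

The chain returns a distribution over the parameter rather than a point estimate, which is what makes the Bayesian route to IUQ sidestep gradient computation and non-uniqueness.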
Green's function approach for quantum graphs: An overview
NASA Astrophysics Data System (ADS)
Andrade, Fabiano M.; Schmidt, A. G. M.; Vicentini, E.; Cheng, B. K.; da Luz, M. G. E.
2016-08-01
Here we review the many aspects and distinct phenomena associated with quantum dynamics on general graph structures. To do so, we discuss this class of systems within the energy-domain Green's function (G) framework. This approach is particularly interesting because G can be written as a sum over classical-like paths, where local quantum effects are taken into account through the scattering matrix elements (basically, transmission and reflection amplitudes) defined on each of the graph vertices. Hence, the exact G has the functional form of a generalized semiclassical formula, which through different calculation techniques (addressed in detail here) can always be cast into a closed analytic expression. This allows one to solve exactly arbitrarily large (although finite) graphs in a recursive and fast way. Using the Green's function method, we survey many properties of open and closed quantum graphs: scattering solutions for the former and the eigenspectrum and eigenstates for the latter, also considering quasi-bound states. Concrete examples, like cube, binary tree and Sierpiński-like topologies, are presented. Throughout the work, possible distinct applications of the Green's function method for quantum graphs are outlined.
A comprehensive approach to age-dependent dosimetric modeling
Leggett, R.W.; Cristy, M.; Eckerman, K.F.
1986-01-01
In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to the best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks.
Prediction of Chemical Function: Model Development and ...
The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration at which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
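The classification step can be sketched with a deliberately tiny, hand-rolled bagged-stump "forest". The two descriptors, the eight data points, and the two functional roles are all made up, and a real implementation would use a full random-forest library over thousands of curated structure descriptors rather than this stdlib-only sketch.

```python
import random

# Made-up descriptor table: (logP-like, MW-like) -> functional role.
data = [((0.5, 80), "solvent"), ((0.8, 95), "solvent"),
        ((1.1, 110), "solvent"), ((0.3, 70), "solvent"),
        ((4.5, 390), "plasticizer"), ((5.1, 420), "plasticizer"),
        ((4.8, 405), "plasticizer"), ((5.5, 450), "plasticizer")]

def train_stump(sample):
    # exhaustively pick the (feature, threshold, left, right) rule with best accuracy
    labels = sorted({lab for _, lab in sample})
    best, best_acc = None, -1
    for f in (0, 1):
        for x, _ in sample:
            for left in labels:
                for right in labels:
                    acc = sum(lab == (left if xi[f] <= x[f] else right)
                              for xi, lab in sample)
                    if acc > best_acc:
                        best, best_acc = (f, x[f], left, right), acc
    return best

def predict(stump, x):
    f, t, left, right = stump
    return left if x[f] <= t else right

def random_forest(data, n_trees=25, seed=0):
    # bag: each stump is trained on a bootstrap resample of the training set
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def forest_predict(forest, x):
    votes = [predict(s, x) for s in forest]
    return max(set(votes), key=votes.count)

forest = random_forest(data)
print(forest_predict(forest, (0.6, 85)), forest_predict(forest, (5.0, 400)))
```

Majority voting over many weak, resampled learners is the essential random-forest mechanism; cross-validation and descriptor curation, which the abstract emphasizes, sit outside this sketch.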
Forward and reverse transfer function model synthesis
NASA Technical Reports Server (NTRS)
Houghton, J. R.
1985-01-01
A process for synthesizing a mathematical model for a linear mechanical system using the forward and reverse Fourier transform functions is described. The differential equation for a system model is given. The Bode conversion of the differential equation, and the frequency and time-domain optimization matching of the model to the forward and reverse transform functions using the geometric simplex method of Nelder and Mead (1965) are examined. The effect of the window function on the linear mechanical system is analyzed. The model is applied to two examples; in one the signal damps down before the end of the time window and in the second the signal has significant energy at the end of the time window.
FINDSITE: a combined evolution/structure-based approach to protein function prediction
Brylinski, Michal
2009-01-01
A key challenge of the post-genomic era is the identification of the function(s) of all the molecules in a given organism. Here, we review the status of sequence and structure-based approaches to protein function inference and ligand screening that can provide functional insights for a significant fraction of the ∼50% of ORFs of unassigned function in an average proteome. We then describe FINDSITE, a recently developed algorithm for ligand binding site prediction, ligand screening and molecular function prediction, which is based on binding site conservation across evolutionary distant proteins identified by threading. Importantly, FINDSITE gives comparable results when high-resolution experimental structures as well as predicted protein models are used. PMID:19324930
Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach
Li, Jun; Zhao, Patrick X.
2016-01-01
Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/. PMID:27446133
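A simplified flavor of the multiplex idea, where one layer's PageRank biases the walk on another layer, can be sketched as follows. The two 4-node layers and the personalization scheme are illustrative assumptions, not the mPageRank algorithm itself, which combines layers in more elaborate ways.

```python
import numpy as np

def pagerank(A, personalize=None, d=0.85, iters=200):
    """Power-iteration PageRank on adjacency matrix A (edge i -> j in A[i, j])."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.divide(A, out, out=np.zeros_like(A), where=out > 0)
    P[out.ravel() == 0] = 1.0 / n            # dangling nodes jump uniformly
    v = np.full(n, 1.0 / n) if personalize is None else personalize / personalize.sum()
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = d * (r @ P) + (1 - d) * v
    return r

# Two layers over the same four genes: co-expression (A) and protein interaction (B)
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
B = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]], float)

r_A = pagerank(A)
# multiplex step (simplified): layer-A importance biases the teleportation on layer B
r_multi = pagerank(B, personalize=r_A)
print(int(r_multi.argmax()))  # -> 1, the layer-B hub
```

Nodes that are central in both layers are rewarded, which is the intuition behind using multiplex centrality to seed module detection across heterogeneous networks.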
An approach to the residence time distribution for stochastic multi-compartment models.
Yu, Jihnhee; Wehrly, Thomas E
2004-10-01
Stochastic compartmental models are widely used in modeling processes such as drug kinetics in biological systems. This paper considers the distribution of the residence times for stochastic multi-compartment models, especially systems with non-exponential lifetime distributions. The paper first derives the moment generating function of the bivariate residence time distribution for the two-compartment model with general lifetimes and approximates the density of the residence time using the saddlepoint approximation. Then, it extends the distributional approach to the residence time for multi-compartment semi-Markov models combining the cofactor rule for a single destination and the analytic approach to the two-compartment model. This approach provides a complete specification of the residence time distribution based on the moment generating function and thus facilitates an easier calculation of high-order moments than the approach using the coefficient matrix. Applications to drug kinetics demonstrate the simplicity and usefulness of this approach.
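The saddlepoint step can be illustrated on the simplest case, where the residence time through n identical exponential compartments is Gamma-distributed and the approximation error reduces to the Stirling correction. The rate and compartment count below are arbitrary; the paper's semi-Markov, multi-compartment MGFs are more involved, but the mechanics are the same.

```python
from math import exp, log, sqrt, pi, factorial

lam, n = 0.8, 3   # n identical compartments, each with exit rate lam (made-up values)

# CGF of the total residence time (sum of n iid Exp(lam) stages)
def K(s):  return -n * log(1 - s / lam)
def K2(s): return n / (lam - s)**2        # K''(s)

def saddlepoint_pdf(t):
    s_hat = lam - n / t                   # solves the saddlepoint equation K'(s) = t
    return exp(K(s_hat) - s_hat * t) / sqrt(2 * pi * K2(s_hat))

def exact_pdf(t):                         # Gamma(n, lam) density
    return lam**n * t**(n - 1) * exp(-lam * t) / factorial(n - 1)

t = 4.0
print(abs(saddlepoint_pdf(t) / exact_pdf(t) - 1) < 0.05)  # -> True (~3% Stirling error)
```

For non-exponential stage lifetimes, the CGF changes but the recipe (solve K'(s) = t, plug into the saddlepoint density) is unchanged, which is why the MGF-based specification makes higher-order moments and densities easy to obtain.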
Making metals transparent: a circuit model approach.
Molero, Carlos; Medina, Francisco; Rodríguez-Berral, Raúl; Mesa, Francisco
2016-05-16
Solid metal films are well known to be opaque to electromagnetic waves over a wide frequency range, from low frequency to optics. High values of the conductivity at relatively low frequencies or negative values of the permittivity at the optical regime provide the macroscopic explanation for such opacity. In the microwave range, even extremely thin metal layers (much smaller than the skin depth at the operation frequency) reflect most of the impinging electromagnetic energy, thus precluding significant transmission. However, a drastic resonant narrow-band enhancement of the transparency has recently been reported. The quasi-transparent window is opened by placing the metal film between two symmetrically arranged and closely spaced copper strip gratings. This letter proposes an analytical circuit model that yields a simple explanation to this unexpected phenomenon. The proposed approach avoids the use of lengthy numerical calculations and suggests how the transmissivity can be controlled and enhanced by manipulating the values of the electrical parameters of the associated circuit model.
a New Approach of Digital Bridge Surface Model Generation
NASA Astrophysics Data System (ADS)
Ju, H.
2012-07-01
Bridge areas present difficulties for orthophoto generation: to avoid "collapsed" bridges in the orthoimage, operator assistance is required to create a precise DBM (Digital Bridge Model), which is subsequently used for orthoimage generation. In this paper, a new approach to DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for DBM generation. First, a coarse DBM is produced from LiDAR data. Then, a robust co-registration between LiDAR intensity and the aerial image using the orientation constraint is performed. The coarse-to-fine hybrid co-registration approach includes LPFFT (Log-Polar Fast Fourier Transform), Harris corners, PDF (Probability Density Function) feature-descriptor mean-shift matching, and RANSAC (RANdom SAmple Consensus) as main components. After that, the bridge ROI (Region Of Interest) from the LiDAR data domain is projected to the aerial image domain as the ROI in the aerial image. Hough-transform linear features are extracted in the aerial image ROI. For a straight bridge, a 1st-order polynomial is used, whereas for a curved bridge a 2nd-order polynomial is used to fit the endpoints of the Hough linear features. The last step is the transformation of the smooth bridge boundaries from the aerial image back to the LiDAR data domain, where they are merged with the coarse DBM. Based on our experiments, this new approach is capable of providing a precise DBM, which can be further merged with a DTM (Digital Terrain Model) derived from LiDAR data to obtain a precise DSM (Digital Surface Model). Such a precise DSM can be used to improve orthophoto product quality.
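The boundary-fitting step can be sketched directly. The endpoint coordinates below are invented, and `np.polyfit` stands in for whatever least-squares fit the pipeline actually uses; the degree switch (1 for straight bridges, 2 for curved ones) follows the abstract.

```python
import numpy as np

# Invented endpoints (x, y) of Hough line segments along one curved bridge edge
x = np.array([0.0, 12.0, 25.0, 37.0, 50.0])
y = np.array([5.0, 7.9, 9.4, 7.8, 4.9])

# straight bridge -> deg=1; curved bridge -> deg=2
coeffs = np.polyfit(x, y, deg=2)
smooth = np.poly1d(coeffs)

# evaluate the smoothed boundary on a dense grid before projecting it back
# into the LiDAR domain and merging it with the coarse DBM
dense_x = np.linspace(x.min(), x.max(), 101)
boundary = smooth(dense_x)
print(boundary.shape, coeffs[0] < 0)   # a concave (downward-bowed) quadratic
```

Fitting a single low-order polynomial through the segment endpoints suppresses the jitter of individual Hough detections and yields a smooth boundary to merge with the coarse DBM.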
Direct and Evolutionary Approaches for Optimal Receiver Function Inversion
NASA Astrophysics Data System (ADS)
Dugda, Mulugeta Tuji
Receiver functions are time series obtained by deconvolving vertical-component seismograms from radial-component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface, or discontinuity, that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions has been formulated to determine crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithms (GA) optimization technique. The second technique developed combines the direct Generalized Pattern Search (GPS) and the evolutionary Fitness Proportionate Niching (FPN) techniques by employing their respective strengths. In a previous study, a Monte Carlo technique was utilized to determine variable weights in the H-kappa stacking of receiver functions. Compared to that variable-weights approach, the GA and GPS-FPN techniques save considerable time and are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights for stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
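The H-kappa stacking objective that the GA and GPS-FPN techniques optimize can be sketched on synthetic data. The velocities, ray parameter, weights, and pulse shapes below are invented for illustration; the dissertation optimizes the weights themselves, whereas this sketch fixes them and grid-searches only H and kappa.

```python
import numpy as np

Vp, p = 6.5, 0.06        # assumed crustal P velocity (km/s) and ray parameter (s/km)
t = np.arange(0.0, 30.0, 0.05)

def phase_times(H, kappa):
    # Moho conversion Ps and the two crustal multiples for thickness H, kappa = Vp/Vs
    eta_p = np.sqrt(Vp**-2 - p**2)
    eta_s = np.sqrt((kappa / Vp)**2 - p**2)
    return H * (eta_s - eta_p), H * (eta_s + eta_p), 2.0 * H * eta_s

# Synthetic receiver function: pulses at the phase times for H = 35 km, kappa = 1.75
rf = np.zeros_like(t)
for tp, sign in zip(phase_times(35.0, 1.75), (1.0, 1.0, -1.0)):
    rf += sign * np.exp(-((t - tp) / 0.3)**2)

def stack(H, kappa, w=(0.5, 0.3, 0.2)):
    # weighted stack: add the Ps and PpPs amplitudes, subtract the PpSs+PsPs one
    t1, t2, t3 = phase_times(H, kappa)
    amp = lambda tau: np.interp(tau, t, rf)
    return w[0] * amp(t1) + w[1] * amp(t2) - w[2] * amp(t3)

Hs = np.arange(25.0, 45.0, 0.25)
ks = np.arange(1.60, 1.95, 0.01)
grid = np.array([[stack(H, k) for k in ks] for H in Hs])
iH, ik = np.unravel_index(grid.argmax(), grid.shape)
print(round(float(Hs[iH]), 2), round(float(ks[ik]), 2))  # recovers ~35.0 and ~1.75
```

The stack peaks only where all three phase times align with real arrivals, which is why the choice of weights, the quantity the GA tunes, controls the trade-off between the direct conversion and the noisier multiples.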
Pediatrician's knowledge on the approach of functional constipation
Vieira, Mario C.; Negrelle, Isadora Carolina Krueger; Webber, Karla Ulaf; Gosdal, Marjorie; Truppel, Sabine Krüger; Kusma, Solena Ziemer
2016-01-01
Abstract Objective: To evaluate the pediatrician's knowledge regarding the diagnostic and therapeutic approach of childhood functional constipation. Methods: A descriptive cross-sectional study was performed with the application of a self-administered questionnaire concerning a hypothetical clinical case of childhood functional constipation with fecal incontinence to physicians (n=297) randomly interviewed at the 36th Brazilian Congress of Pediatrics in 2013. Results: The majority of the participants were females, the mean age was 44.1 years, the mean time of professional practice was 18.8 years; 56.9% were Board Certified by the Brazilian Society of Pediatrics. Additional tests were ordered by 40.4%; including abdominal radiography (19.5%), barium enema (10.4%), laboratory tests (9.8%), abdominal ultrasound (6.7%), colonoscopy (2.4%), manometry and rectal biopsy (both 1.7%). The most common interventions included lactulose (26.6%), mineral oil (17.5%), polyethylene glycol (14.5%), fiber supplement (9.1%) and milk of magnesia (5.4%). Nutritional guidance (84.8%), fecal disimpaction (17.2%) and toilet training (19.5%) were also indicated. Conclusions: Our results show that pediatricians do not adhere to current recommendations for the management of childhood functional constipation, as unnecessary tests were ordered and the first-line treatment was not prescribed. PMID:27449075
Functional Analysis of Jasmonates in Rice through Mutant Approaches
Dhakarey, Rohit; Kodackattumannil Peethambaran, Preshobha; Riemann, Michael
2016-01-01
Jasmonic acid, one of the major plant hormones, is, unlike other hormones, a lipid-derived compound that is synthesized from the fatty acid linolenic acid. It has been studied intensively in many plant species including Arabidopsis thaliana, in which most of the enzymes participating in its biosynthesis were characterized. In the past 15 years, mutants and transgenic plants affected in the jasmonate pathway became available in rice and facilitate studies on the functions of this hormone in an important crop. Those functions are partially conserved compared to other plant species, and include roles in fertility, response to mechanical wounding and defense against herbivores. However, new and surprising functions have also been uncovered by mutant approaches, such as a close link between light perception and the jasmonate pathway. This was not only useful to show a phenomenon that is unique to rice but also helped to establish this role in plant species where such links are less obvious. This review aims to provide an overview of currently available rice mutants and transgenic plants in the jasmonate pathway and highlights some selected roles of jasmonate in this species, such as photomorphogenesis, and abiotic and biotic stress. PMID:27135235
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.
A Network Approach to Rare Disease Modeling
NASA Astrophysics Data System (ADS)
Ghiassian, Susan; Rabello, Sabrina; Sharma, Amitabh; Wiest, Olaf; Barabasi, Albert-Laszlo
2011-03-01
Network approaches have been widely used to better understand different areas of natural and social sciences. Network science has had a particularly great impact on the study of biological systems. In this project, using biological networks, candidate drugs were identified as potential treatments of rare diseases. Developing new drugs for each of the more than 2000 rare diseases (as defined by ORPHANET) is prohibitively expensive. Disease proteins do not function in isolation but in cooperation with other interacting proteins. Research on FDA-approved drugs has shown that most drugs do not target the disease protein itself but a protein that is 2 or 3 steps away from it in the protein-protein interaction (PPI) network. We identified the already known drug targets in the disease gene's PPI subnetwork (up to the 3rd neighborhood); among them, those in the same subcellular compartment and with a higher coexpression coefficient with the disease gene are expected to be stronger candidates. Out of 2177 rare diseases, 1092 were found not to have any drug target. Using the above method, we found the strongest candidates among the rest for further experimental validation.
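The neighborhood search underlying this screen can be sketched with a plain breadth-first search over a toy PPI graph; the protein names and drug-target set below are hypothetical.

```python
from collections import deque

def neighborhood(adj, source, max_depth=3):
    """BFS distances: all proteins within max_depth steps of the
    disease protein in the PPI network."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if dist[u] == max_depth:
            continue  # do not expand beyond the 3rd neighborhood
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Toy PPI subnetwork around a disease protein (hypothetical names)
ppi = {
    "DIS1": ["A", "B"],
    "A": ["DIS1", "C"],
    "B": ["DIS1", "D"],
    "C": ["A", "E"],
    "D": ["B"],
    "E": ["C"],   # 3 steps away from DIS1
}
drug_targets = {"D", "E", "F"}  # F lies outside the subnetwork

dist = neighborhood(ppi, "DIS1", max_depth=3)
candidates = sorted(t for t in drug_targets if t in dist and dist[t] >= 1)
print(candidates)  # known targets reachable within 3 steps
```

In the actual screen such candidates would then be ranked by shared subcellular compartment and coexpression with the disease gene.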
Model approaches for advancing interprofessional prevention education.
Evans, Clyde H; Cashman, Suzanne B; Page, Donna A; Garr, David R
2011-02-01
Healthy People 2010 included an objective to "increase the proportion of … health professional training schools whose basic curriculum for healthcare providers includes the core competencies in health promotion and disease prevention." Interprofessional prevention education has been seen by the Healthy People Curriculum Task Force as a key strategy for achieving this objective and strengthening prevention content in health professions education programs. To fulfill these aims, the Association for Prevention Teaching and Research sponsored the Institute for Interprofessional Prevention Education in 2007 and in 2008. The institutes were based on the premise that if clinicians from different professions are to function effectively in teams, health professions students need to learn with, from, and about students from other professions. The institutes assembled interprofessional teams of educators from academic health centers across the country and provided instruction in approaches for improving interprofessional prevention education. Interprofessional education also plays a key role in the implementation of the Healthy People 2020 Education for Health framework. The delivery of preventive services provides a nearly level playing field in which multiple professions each make important contributions. Prevention education should take place during that phase of the educational continuum in which the attitudes, skills, and knowledge necessary for both effective teamwork and prevention are incorporated into the "DNA" of future health professionals. Evaluation of the teams' educational initiatives holds important lessons. These include allowing ample time for planning, obtaining student input during planning, paying explicit attention to teamwork, and taking account of cultural differences across professions.
Spatiotemporal Infectious Disease Modeling: A BME-SIR Approach
Angulo, Jose; Yu, Hwa-Lung; Langousis, Andrea; Kolovos, Alexander; Wang, Jinfeng; Madrid, Ana Esther; Christakos, George
2013-01-01
This paper is concerned with the modeling of infectious disease spread in a composite space-time domain under conditions of uncertainty. We focus on stochastic modeling that accounts for basic mechanisms of disease distribution and multi-sourced in situ uncertainties. Starting from the general formulation of population migration dynamics and the specification of transmission and recovery rates, the model studies the functional formulation of the evolution of the fractions of susceptible-infected-recovered individuals. The suggested approach is capable of: a) modeling population dynamics within and across localities, b) integrating the disease representation (i.e. susceptible-infected-recovered individuals) with observation time series at different geographical locations and other sources of information (e.g. hard and soft data, empirical relationships, secondary information), and c) generating predictions of disease spread and associated parameters in real time, while considering model and observation uncertainties. Key aspects of the proposed approach are illustrated by means of simulations (i.e. synthetic studies), and a real-world application using hand-foot-mouth disease (HFMD) data from China. PMID:24086257
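The susceptible-infected-recovered core of such models reduces, in its simplest deterministic form, to three coupled rates. The sketch below uses plain Euler integration with assumed transmission and recovery rates; none of the BME machinery or spatial uncertainty handling of the paper is represented.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One Euler step of the classic SIR fractions:
    ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i."""
    new_inf = beta * s * i * dt
    rec = gamma * i * dt
    return s - new_inf, i + new_inf - rec, r + rec

s, i, r = 0.99, 0.01, 0.0    # initial fractions
beta, gamma = 0.5, 0.1       # assumed transmission / recovery rates
for _ in range(2000):        # integrate over 200 time units
    s, i, r = sir_step(s, i, r, beta, gamma)

print(round(s + i + r, 6))   # the fractions stay normalized
print(round(r, 2))           # final recovered fraction
```

The stochastic BME-SIR framework replaces these fixed rates and exact states with uncertain, data-conditioned quantities varying across space and time.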
Incorporating covariates in skewed functional data models.
Li, Meng; Staicu, Ana-Maria; Bondell, Howard D
2015-07-01
We introduce a class of covariate-adjusted skewed functional models (cSFM) designed for functional data exhibiting location-dependent marginal distributions. We propose a semi-parametric copula model for the pointwise marginal distributions, which are allowed to depend on covariates, and the functional dependence, which is assumed covariate invariant. The proposed cSFM framework provides a unifying platform for pointwise quantile estimation and trajectory prediction. We consider a computationally feasible procedure that handles densely as well as sparsely observed functional data. The methods are examined numerically using simulations and are applied to a new tractography study of multiple sclerosis. Furthermore, the methodology is implemented in the R package cSFM, which is publicly available on CRAN.
NASA Astrophysics Data System (ADS)
Caponi, S.; Mattana, S.; Ricci, M.; Sagini, K.; Juarez-Hernandez, L. J.; Jimenez-Garduño, A. M.; Cornella, N.; Pasquardini, L.; Urbanelli, L.; Sassi, P.; Morresi, A.; Emiliani, C.; Fioretto, D.; Dalla Serra, M.; Pederzolli, C.; Iannotta, S.; Macchi, P.; Musio, C.
2016-11-01
A living bio-hybrid system has been successfully implemented. It consists of neuroblastic cells, the SH-SY5Y human neuroblastoma line, adhering to polyaniline (PANI), a semiconducting polymer with memristive properties. By a multidisciplinary approach, the biocompatibility of the substrate has been analyzed and the functionality of the adhering cells has been investigated. We found that PANI films can support cell adhesion. Moreover, the SH-SY5Y cells were successfully differentiated into neuron-like cells for in vitro applications, demonstrating that PANI can also promote cell differentiation. To characterize in depth the modifications of bio-functionality induced by the cell-substrate interaction, the functional properties of the cells have been characterized by electrophysiology and Raman spectroscopy. Our results confirm that the PANI films do not strongly affect the general properties of the cells, ensuring their viability without toxic effects on their physiology. However, a slight increase in markers of cell stress, ascribed to the adhesion process, has been evidenced by Raman spectroscopy, and accordingly electrophysiology shows a reduction in cell excitability at positive stimulations.
Generalized exponential function and discrete growth models
NASA Astrophysics Data System (ADS)
Souto Martinez, Alexandre; Silva González, Rodrigo; Lauri Espíndola, Aquino
2009-07-01
Here we show that a particular one-parameter generalization of the exponential function is suitable to unify most of the popular one-species discrete population dynamic models into a simple formula. A physical interpretation is given to this newly introduced parameter in the context of the continuous Richards model, and this interpretation remains valid in the discrete case. From the discretization of the continuous Richards model (a generalization of the Gompertz and Verhulst models), one obtains a generalized logistic map, whose properties we briefly study. Next, we generalize the (scramble competition) θ-Ricker discrete model and analytically calculate the fixed points as well as their stabilities. In contrast to previous generalizations, from the generalized θ-Ricker model one is able to retrieve either scramble or contest models.
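A concrete sketch of the idea, assuming the Tsallis-style q-exponential as the one-parameter generalization (the paper's exact form may differ), plugged into a θ-Ricker-type map:

```python
import math

def q_exp(x, q):
    """Assumed one-parameter generalization of exp:
    [1 + (1-q)x]_+^(1/(1-q)); the limit q -> 1 recovers exp(x)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def theta_ricker_step(x, r=1.0, theta=1.0, q=1.0):
    """Generalized theta-Ricker map: x' = x * q_exp(r*(1 - x**theta), q).
    q = 1, theta = 1 is the classic Ricker model."""
    return x * q_exp(r * (1.0 - x ** theta), q)

# The q -> 1 limit reproduces the ordinary exponential
print(abs(q_exp(0.5, 1.000001) - math.exp(0.5)) < 1e-4)

# The carrying-capacity fixed point x* = 1 is stable for small growth rate
x = 0.2
for _ in range(200):
    x = theta_ricker_step(x, r=0.8, theta=1.0, q=0.9)
print(round(x, 6))
```

Since q_exp(0, q) = 1 for any q, x* = 1 remains a fixed point of the generalized map, and the iteration above converges to it.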
Model dielectric function for 2D semiconductors including substrate screening
Trolle, Mads L.; Pedersen, Thomas G.; Véniard, Valerie
2017-01-01
Dielectric screening of excitons in 2D semiconductors is known to be a highly non-local effect, which in reciprocal space translates to a strong dependence on momentum transfer q. We present an analytical model dielectric function, including the full non-linear q-dependence, which may be used as an alternative to more numerically taxing ab initio screening functions. By verifying the good agreement between excitonic optical properties calculated using our model dielectric function, and those derived from ab initio methods, we demonstrate the versatility of this approach. Our test systems include: monolayer hBN, monolayer MoS2, and the surface exciton of a 2 × 1 reconstructed Si(111) surface. Additionally, using our model, we easily take substrate screening effects into account. Hence, we include also a systematic study of the effects of substrate media on the excitonic optical properties of MoS2 and hBN. PMID:28117326
Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach.
Selig, James P; Preacher, Kristopher J; Little, Todd D
2012-01-01
We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by describing how the association changes with time. We introduce a number of different functional forms for describing these lag-moderated associations, each with a different substantive meaning. Finally, we use empirical data to demonstrate methods for exploring functional forms and model fitting based on this approach.
Electron Systems Out of Equilibrium: Nonequilibrium Green's Function Approach
NASA Astrophysics Data System (ADS)
Špička, Václav; Velický, Bedřich; Kalvová, Anděla
2015-10-01
This review deals with the state of the art and perspectives of the description of non-equilibrium many-body systems using the non-equilibrium Green's function (NGF) method. The basic aim is to describe the time evolution of the many-body system from its initial state over its transient dynamics to its long-time asymptotic evolution. First, we discuss the basic aims of transport theories to motivate the introduction of the NGF techniques. Second, this article summarizes the present view on the construction of the electron transport equations formulated within the NGF approach to non-equilibrium. We discuss the incorporation of complex initial conditions into the NGF formalism, and the NGF reconstruction theorem, which serves as a tool to derive simplified kinetic equations. Three stages of the evolution of the non-equilibrium system will be related to each other: the first described by the full NGF treatment, the second by a non-Markovian generalized master equation and the third by a Markovian master equation.
Robotic approaches for rehabilitation of hand function after stroke.
Lum, Peter S; Godfrey, Sasha B; Brokaw, Elizabeth B; Holley, Rahsaan J; Nichols, Diane
2012-11-01
The goal of this review was to discuss the impairments in hand function after stroke and present previous work on robot-assisted approaches to movement neurorehabilitation. Robotic devices offer a unique training environment that may enhance outcomes beyond what is possible with conventional means. Robots apply forces to the hand, allowing completion of movements while preventing inappropriate movement patterns. Evidence from the literature is emerging that certain characteristics of the human-robot interaction are preferable. In light of this evidence, the robotic hand devices that have undergone clinical testing are reviewed, highlighting the authors' work in this area. Finally, suggestions for future work are offered. The ability to deliver therapy doses far higher than what has been previously tested is a potentially key advantage of robotic devices that needs further exploration. In particular, more efforts are needed to develop highly motivating home-based devices, which can increase access to high doses of assisted movement therapy.
NASA Astrophysics Data System (ADS)
Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.
2016-04-01
In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in the absence of an external longitudinal magnetic field. We employ the two-time Green function method by avoiding the Anderson-Callen decoupling of spin operators at the same sites, which is of doubtful accuracy. Following the original Devlin procedure, we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2)-vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter, and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated, providing a global phase diagram and a quantum criticality very similar to the conventional ones.
Green's function approach of an anisotropic Heisenberg ferrimagnetic system
NASA Astrophysics Data System (ADS)
Mert, Gülistan
2013-12-01
We have investigated the influence of the exchange anisotropy parameter on the magnetization, critical and compensation temperatures and susceptibility of the anisotropic Heisenberg ferrimagnetic system with single-ion anisotropy under an external magnetic field, using the double-time temperature-dependent Green's function theory. In order to decouple the higher order Green's functions, Anderson-Callen decoupling and random phase approximations have been used. This model is useful for understanding the temperature dependence of the total magnetization of the lithium-chromium ferrite Li0.5Fe1.25Cr1.25O4, for which negative magnetization is characteristic. We observe that the critical temperature increases when the exchange anisotropy increases. When the system is under an external magnetic field, a first-order phase transition, at which the magnetization jumps, is obtained for all values of the exchange anisotropy parameter.
Crossover from BCS to Bose superconductivity: A functional integral approach
Randeria, M.; Sa de Melo, C.A.R.; Engelbrecht, J.R.
1993-04-01
We use a functional integral formulation to study the crossover from cooperative Cooper pairing to the formation and condensation of tightly bound pairs in a 3D continuum model of fermions with attractive interactions. The inadequacy of a saddle point approximation with increasing coupling is pointed out, and the importance of temporal (quantum) fluctuations for normal state properties at intermediate and strong coupling is emphasized. In addition to recovering the Nozieres-Schmitt-Rink interpolation scheme for Tc and the Leggett variational results for T = 0, we also present results for the evolution of the time-dependent Ginzburg-Landau equation and the collective mode spectrum as a function of the coupling.
SMJ's analysis of Ising model correlation functions
NASA Astrophysics Data System (ADS)
Kadanoff, Leo P.; Kohmoto, Mahito
1980-05-01
In a series of recent publications Sato, Miwa, and Jimbo (SMJ) have shown how to derive multispin correlation functions of the two-dimensional Ising model in the continuum, or scaling, limit by analyzing the behavior of the solutions to the two-dimensional version of the Dirac equation. The major purpose of the present work is to describe SMJ's analysis more discursively and in terms closer to that used in previous studies of the Ising model. In addition, new and more compact expressions for their basic equations are derived. A single new answer is obtained: the form of the three-spin correlation function at criticality.
Executive function and food approach behavior in middle childhood.
Groppe, Karoline; Elsner, Birgit
2014-01-01
Executive function (EF) has long been considered to be a unitary, domain-general cognitive ability. However, recent research suggests differentiating "hot" affective and "cool" cognitive aspects of EF. Yet, findings regarding this two-factor construct are still inconsistent. In particular, the development of this factor structure remains unclear and data on school-aged children is lacking. Furthermore, studies linking EF and overweight or obesity suggest that EF contributes to the regulation of eating behavior. So far, however, the links between EF and eating behavior have rarely been investigated in children and non-clinical populations. First, we examined whether EF can be divided into hot and cool factors or whether they actually correspond to a unitary construct in middle childhood. Second, we examined how hot and cool EF are associated with different eating styles that put children at risk of becoming overweight during development. Hot and cool EF were assessed experimentally in a non-clinical population of 1657 elementary-school children (aged 6-11 years). The "food approach" behavior was rated mainly via parent questionnaires. Findings indicate that hot EF is distinguishable from cool EF. However, only cool EF seems to represent a coherent functional entity, whereas hot EF does not seem to be a homogenous construct. This was true for a younger and an older subgroup of children. Furthermore, different EF components were correlated with eating styles, such as responsiveness to food, desire to drink, and restrained eating in girls but not in boys. This shows that lower levels of EF are not only seen in clinical populations of obese patients but are already associated with food approach styles in a normal population of elementary school-aged girls. Although the direction of effect still has to be clarified, results point to the possibility that EF constitutes a risk factor for eating styles contributing to the development of overweight in the long-term.
NASA Astrophysics Data System (ADS)
Lucena, Marcia; Castro, Jaelson; Silva, Carla; Alencar, Fernanda; Santos, Emanuel; Pimentel, João
Requirements engineering and architectural design are key activities for successful development of software systems. Both activities are strongly intertwined and interrelated, but many steps toward generating architecture models from requirements models are driven by intuition and architectural knowledge. Thus, systematic approaches that integrate requirements engineering and architectural design activities are needed. This paper presents an approach based on model transformations to generate architectural models from requirements models. The source and target languages are, respectively, the i* modeling language and the Acme architectural description language (ADL). A real web-based recommendation system is used as a case study to illustrate our approach.
A Non-parametric Approach to Constrain the Transfer Function in Reverberation Mapping
NASA Astrophysics Data System (ADS)
Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming
2016-11-01
Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
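The sum-of-Gaussians representation of the transfer function, and the blurred echo it produces, can be sketched as follows. All centers, widths, weights and the toy continuum are assumed for illustration; the damped-random-walk modeling of continuum variations is omitted.

```python
import numpy as np

def transfer_function(tau, dt, centers, widths, weights):
    """Transfer function expressed as a sum of relatively displaced
    Gaussian response functions, normalized to unit total response."""
    psi = np.zeros_like(tau)
    for c, s, w in zip(centers, widths, weights):
        psi += w * np.exp(-0.5 * ((tau - c) / s) ** 2)
    return psi / (psi.sum() * dt)

dt = 0.5                                    # time step (days)
tau = np.arange(0.0, 50.0, dt)
psi = transfer_function(tau, dt, centers=[8.0, 20.0],
                        widths=[2.0, 4.0], weights=[1.0, 0.6])

# The emission-line light curve is the continuum blurred by psi
t = np.arange(0.0, 200.0, dt)
continuum = np.sin(2 * np.pi * t / 60.0)    # toy continuum variation
line = np.convolve(continuum, psi)[: t.size] * dt

mean_lag = (tau * psi).sum() * dt           # centroid time delay (days)
print(round(mean_lag, 2))
```

With enough displaced Gaussians, arbitrary transfer-function shapes (multimodal, asymmetric) can be represented, which is the flexibility the non-parametric approach exploits.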
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms for fine tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time-consuming aspect of the controller design. A single change in the membership functions can significantly alter the performance of the controller. This membership function definition can be accomplished by using a trial-and-error technique, altering the membership functions to create a highly tuned controller. This approach can be time-consuming and requires a great deal of knowledge from human experts. To shorten development time, an iterative procedure was developed for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity-vector approach and station-keeping maneuvers. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
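A minimal sketch of the idea: a tiny genetic algorithm tunes the peak positions of three triangular membership functions so that a one-input fuzzy controller approximates a target control law. Everything here (the controller layout, the target law u = -e, the GA settings) is illustrative, not the cited implementation.

```python
import random

random.seed(1)

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def controller(e, centers, outputs=(1.0, 0.0, -1.0)):
    """Weighted-average defuzzification over three triangular sets whose
    peaks are the tuned genes (feet placed at the neighboring peaks)."""
    c1, c2, c3 = centers
    mu = (tri(e, c1 - 2, c1, c2), tri(e, c1, c2, c3), tri(e, c2, c3, c3 + 2))
    total = sum(mu)
    return sum(m * o for m, o in zip(mu, outputs)) / total if total else 0.0

def fitness(centers, samples=[i / 10 - 1 for i in range(21)]):
    """Negative squared error against the target proportional law u = -e."""
    return -sum((controller(e, centers) - (-e)) ** 2 for e in samples)

# Minimal GA: elitist selection plus Gaussian mutation of the peaks
pop = [sorted(random.uniform(-1.5, 1.5) for _ in range(3)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [
        sorted(g + random.gauss(0, 0.1) for g in random.choice(elite))
        for _ in range(30)
    ]
best = max(pop, key=fitness)
print([round(g, 2) for g in best])
```

For this layout the exact optimum is centers (-1, 0, 1), where the defuzzified output reproduces u = -e everywhere on [-1, 1]; the GA drives the population toward it.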
Functional epigenetic approach identifies frequently methylated genes in Ewing sarcoma.
Alholle, Abdullah; Brini, Anna T; Gharanei, Seley; Vaiyapuri, Sumathi; Arrigoni, Elena; Dallol, Ashraf; Gentle, Dean; Kishida, Takeshi; Hiruma, Toru; Avigad, Smadar; Grimer, Robert; Maher, Eamonn R; Latif, Farida
2013-11-01
Using a candidate gene approach we recently identified frequent methylation of the RASSF2 gene associated with poor overall survival in Ewing sarcoma (ES). To identify effective biomarkers in ES on a genome-wide scale, we used a functionally proven epigenetic approach, in which gene expression was induced in ES cell lines by treatment with a demethylating agent followed by hybridization onto high density gene expression microarrays. After following a strict selection criterion, 34 genes were selected for expression and methylation analysis in ES cell lines and primary ES. Eight genes (CTHRC1, DNAJA4, ECHDC2, NEFH, NPTX2, PHF11, RARRES2, TSGA14) showed methylation frequencies of >20% in ES tumors (range 24-71%); these genes were expressed in human bone marrow derived mesenchymal stem cells (hBMSC), and hypermethylation was associated with transcriptional silencing. Methylation of NPTX2 or PHF11 was associated with poorer prognosis in ES. In addition, six of the above genes also showed methylation frequencies of >20% (range 36-50%) in osteosarcomas. Identification of these genes may provide insights into bone cancer tumorigenesis and development of epigenetic biomarkers for prognosis and detection of these rare tumor types.
Atom and Bond Fukui Functions and Matrices: A Hirshfeld-I Atoms-in-Molecule Approach.
Oña, Ofelia B; De Clercq, Olivier; Alcoba, Diego R; Torre, Alicia; Lain, Luis; Van Neck, Dimitri; Bultinck, Patrick
2016-09-19
The Fukui function is often used in its atom-condensed form by isolating it from the molecular Fukui function using a chosen weight function for the atom in the molecule. Recently, Fukui functions and matrices for both atoms and bonds separately were introduced for semiempirical and ab initio levels of theory using Hückel and Mulliken atoms-in-molecule models. In this work, a double partitioning method of the Fukui matrix is proposed within the Hirshfeld-I atoms-in-molecule framework. Diagonalizing the resulting atomic and bond matrices gives eigenvalues and eigenvectors (Fukui orbitals) describing the reactivity of atoms and bonds. The Fukui function is the diagonal element of the Fukui matrix and may be resolved in atom and bond contributions. The extra information contained in the atom and bond resolution of the Fukui matrices and functions is highlighted. The effect of the choice of weight function arising from the Hirshfeld-I approach to obtain atom- and bond-condensed Fukui functions is studied. A comparison of the results with those generated by using the Mulliken atoms-in-molecule approach shows low correlation between the two partitioning schemes.
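In the frozen-orbital limit the Fukui matrix is simply the difference between the (N+1)- and N-electron one-electron density matrices, and diagonalizing it yields the Fukui orbitals and their eigenvalues. The toy numerical sketch below uses random orthonormal orbitals standing in for a real calculation; it is not the Hirshfeld-I partitioning itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def density_matrix(C, n_occ):
    """Idempotent one-electron density matrix from orthonormal orbitals."""
    occ = C[:, :n_occ]
    return occ @ occ.T

# Toy orthonormal orbital set (hypothetical 6-basis-function system)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))

gamma_N = density_matrix(Q, 3)     # N-electron system
gamma_Np1 = density_matrix(Q, 4)   # (N+1)-electron system, same orbitals

fukui = gamma_Np1 - gamma_N        # nucleophilic Fukui matrix f+
vals, vecs = np.linalg.eigh(fukui) # eigenvectors are the Fukui orbitals

print(round(np.trace(fukui), 6))   # the Fukui function integrates to 1
print(round(vals.max(), 6))        # frozen-orbital case: one orbital carries it
```

The diagonal of `fukui` is the (basis-resolved) Fukui function; condensing it with atomic weight functions, as in the Hirshfeld-I scheme, gives the atom and bond contributions discussed above.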
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem of the calibration of hydrological models is the equifinality of different parameter sets derived from the calibration of models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim to identify structural model deficiencies, assess the internal process representation and tackle equifinality. We developed a model-dependent (MDA) approach calibrating the model runoff components against the FSD components, and a model-independent (MIA) approach comparing the FSD of the model results and the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA identifies and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 compared with standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest when applying MDA, with only minor reductions for MIA. Besides
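The component-wise acceptance criterion of such a modified GLUE analysis can be sketched as follows. The Nash-Sutcliffe efficiency, the threshold and the toy flow components are illustrative stand-ins; FSD itself is not implemented here.

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency for one flow component."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def behavioural(obs_comp, sim_comp, threshold=0.5):
    """Component-wise GLUE acceptance: a parameter set passes only if base
    flow, inter-flow and surface runoff each meet the threshold."""
    return all(nse(o, s) >= threshold for o, s in zip(obs_comp, sim_comp))

obs = [
    [1.0, 1.1, 1.2, 1.1, 1.0],   # base flow
    [0.5, 0.8, 0.6, 0.4, 0.3],   # inter-flow
    [0.0, 2.0, 1.0, 0.2, 0.0],   # surface runoff
]
good = [[q + 0.02 for q in comp] for comp in obs]   # small uniform bias
# Wrong internal partitioning, but identical total runoff:
bad = [
    [q + 0.5 for q in obs[0]],
    obs[1],
    [q - 0.5 for q in obs[2]],
]

def totals(comps):
    return [sum(c[t] for c in comps) for t in range(5)]

print(all(abs(a - b) < 1e-9 for a, b in zip(totals(bad), totals(obs))))
print(behavioural(obs, good), behavioural(obs, bad))
```

The `bad` parameterization reproduces total runoff exactly and would survive a calibration against total discharge alone; calibrating against the disaggregated components rejects it, which is how the approach reduces equifinality.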
Semiparametric Stochastic Modeling of the Rate Function in Longitudinal Studies
Zhu, Bin; Taylor, Jeremy M.G.; Song, Peter X.-K.
2011-01-01
In longitudinal biomedical studies, there is often interest in the rate functions, which describe the functional rates of change of biomarker profiles. This paper proposes a semiparametric approach to model these functions as the realizations of stochastic processes defined by stochastic differential equations. These processes are dependent on the covariates of interest and vary around a specified parametric function. An efficient Markov chain Monte Carlo algorithm is developed for inference. The proposed method is compared with several existing methods in terms of goodness-of-fit and more importantly the ability to forecast future functional data in a simulation study. The proposed methodology is applied to prostate-specific antigen profiles for illustration. Supplementary materials for this paper are available online. PMID:22423170
Fast approach to infrared image restoration based on shrinkage functions calibration
NASA Astrophysics Data System (ADS)
Zhang, Chengshuo; Shi, Zelin; Xu, Baoshu; Feng, Bin
2016-05-01
High-quality image restoration in real time is a challenge for infrared imaging systems. We present a fast approach to infrared image restoration based on shrinkage functions calibration. Rather than directly modeling the prior of sharp images to obtain the shrinkage functions, we calibrate them for restoration directly by using the acquirable sharp and blurred image pairs from the same infrared imaging system. The calibration method is employed to minimize the sum of squared errors between sharp images and restored images from the blurred images. Our restoration algorithm is noniterative and its shrinkage functions are stored in the look-up tables, so an architecture solution of pipeline structure can work in real time. We demonstrate the effectiveness of our approach by testing its quantitative performance from simulation experiments and its qualitative performance from a developed wavefront coding infrared imaging system.
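The look-up-table mechanics behind such a noniterative pipeline can be sketched as follows. A soft-threshold rule stands in for the calibrated shrinkage functions (the paper instead calibrates them from sharp/blurred image pairs); all names and parameters are illustrative:

```python
def soft_threshold(x, t=1.0):
    # Stand-in shrinkage rule; the paper calibrates its shrinkage
    # functions from acquired sharp/blurred image pairs instead.
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def build_lut(f, lo, hi, n):
    """Tabulate f on n evenly spaced samples for noniterative lookup."""
    step = (hi - lo) / (n - 1)
    return [f(lo + i * step) for i in range(n)], lo, step

def apply_lut(values, table, lo, step):
    """Nearest-sample lookup, as a pipelined hardware stage would do."""
    out = []
    for v in values:
        i = min(max(int(round((v - lo) / step)), 0), len(table) - 1)
        out.append(table[i])
    return out

table, lo, step = build_lut(soft_threshold, -4.0, 4.0, 8001)
shrunk = apply_lut([2.0, 0.5, -3.0], table, lo, step)
```

Because the shrinkage is applied by table lookup only, the per-pixel cost is constant, which is what makes a real-time pipeline architecture feasible.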
Quantum transport: A unified approach via a multivariate hypergeometric generating function
NASA Astrophysics Data System (ADS)
Macedo-Junior, A. F.; Macêdo, A. M. S.
2014-07-01
We introduce a characteristic function method to describe charge-counting statistics (CCS) in phase coherent systems that directly connects the three most successful approaches to quantum transport: random-matrix theory (RMT), the nonlinear σ-model and the trajectory-based semiclassical method. The central idea is the construction of a generating function based on a multivariate hypergeometric function, which can be naturally represented in terms of quantities that are well-defined in each approach. We illustrate the power of our scheme by obtaining exact analytical results for the first four cumulants of CCS in a chaotic quantum dot coupled ideally to electron reservoirs via perfectly conducting leads with an arbitrary number of open scattering channels.
Chromatin fiber functional organization: Some plausible models
NASA Astrophysics Data System (ADS)
Lesne, A.; Victor, J.-M.
2006-03-01
We here present a modeling study of the chromatin fiber functional organization. Multi-scale modeling is required to unravel the complex interplay between the fiber and the DNA levels. It suggests plausible scenarios, including both physical and biological aspects, for fiber condensation, its targeted decompaction, and transcription regulation. We conclude that a major role of the chromatin fiber structure might be to endow DNA with allosteric potentialities and to control DNA transactions by an epigenetic tuning of its mechanical and topological constraints.
Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina
2016-01-01
In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model, and compare the models' performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second one, on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation-oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies drinking water to a population of 450,000. Numerical results show that phytoplankton morpho-functional groups more closely represent each species' growth requirements within the group. Each model's performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for seasonal dynamics of the phytoplankton community and the main biogeochemical variables for a one-year time horizon are presented and compared for both models, showing the functional group model's enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon.
He, Wei; Yurkevich, Igor V; Canham, Leigh T; Loni, Armando; Kaplan, Andrey
2014-11-03
We develop an analytical model based on the WKB approach to evaluate the experimental results of the femtosecond pump-probe measurements of the transmittance and reflectance obtained on thin membranes of porous silicon. The model allows us to retrieve a pump-induced nonuniform complex dielectric function change along the membrane depth. We show that the model fitting to the experimental data requires a minimal number of fitting parameters while still complying with the restriction imposed by the Kramers-Kronig relation. The developed model has a broad range of applications for experimental data analysis and practical implementation in the design of devices involving a spatially nonuniform dielectric function, such as in biosensing, wave-guiding, solar energy harvesting, photonics and electro-optical devices.
A novel approach to modeling spacecraft spectral reflectance
NASA Astrophysics Data System (ADS)
Willison, Alexander; Bédard, Donald
2016-10-01
Simulated spectrometric observations of unresolved resident space objects are required for the interpretation of quantities measured by optical telescopes. This allows for their characterization as part of regular space surveillance activity. A peer-reviewed spacecraft reflectance model is necessary to help improve the understanding of characterization measurements. With this objective in mind, a novel approach to model spacecraft spectral reflectance as an overall spectral bidirectional reflectance distribution function (sBRDF) is presented. A spacecraft's overall sBRDF is determined using its triangular-faceted computer-aided design (CAD) model and the empirical sBRDF of its homogeneous materials. The CAD model is used to determine the proportional contribution of each homogeneous material to the overall reflectance. Each empirical sBRDF is contained in look-up tables developed from measurements made over a range of illumination and reflection geometries using simple interpolation and extrapolation techniques. A demonstration of the spacecraft reflectance model is provided through simulation of an optical ground truth characterization using the Canadian Advanced Nanospace eXperiment-1 Engineering Model nanosatellite as the subject. Validation of the reflectance model is achieved through a qualitative comparison of simulated and measured quantities.
Analogy between language and biology: a functional approach.
Victorri, Bernard
2007-03-01
We adopt here a functional approach to the classical comparison between language and biology. We first parallel events which have a functional signification in each domain, by matching the utterance of a sentence with the release of a protein. The meaning of a protein is then defined by analogy as "the constant contribution of the biochemical material composing the protein to the effects produced by any release of the protein". The proteome of an organism corresponds to an I-language (the idiolect of an individual), and the proteome of a species is equivalent to an E-language (a language in the common sense). Proteins and sentences are both characterized by a complex hierarchical structure, but the language property of 'double articulation' has no equivalent in the biological domain in this analogy, contrary to previous proposals centered on the genetic code. Moreover, the same intimate relation between structure and meaning holds in both cases (syntactic structure for sentences and three-dimensional conformation for proteins). An important disanalogy comes from the combinatorial power of language, which is not shared by the proteome as a whole, though it must be noted that the immune system possesses interesting properties in this respect. Regarding evolutionary aspects, the analogy still works to a certain extent. Languages and proteomes can both be considered as belonging to a general class of systems, which we call "productive self-reproductive systems", characterized by the presence of two dynamics: a fast dynamics in an external domain where functional events occur (the productive aspect), and a slow dynamics responsible for the evolution of the system itself, driven by the feedback of events related to the reproduction process.
A secured e-tendering modeling using misuse case approach
NASA Astrophysics Data System (ADS)
Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman
2016-08-01
Major risk factors relating to electronic transactions may lead to destructive impacts on trust and transparency in the process of tendering. Currently, electronic tendering (e-tendering) systems remain uncertain on issues relating to legal and security compliance and, most importantly, have an unclear security framework. In particular, the available systems are lacking in addressing integrity, confidentiality, authentication and non-repudiation in e-tendering requirements. Thus, one of the challenges in developing an e-tendering system is to ensure that the system requirements include functions for a secure and trusted environment. Therefore, this paper aims to model a secured e-tendering system using the misuse case approach. The modeling process begins with identifying the e-tendering process, which is based on the Australian Standard Code of Tendering (AS 4120-1994). It is followed by identifying security threats and their countermeasures. Then, the e-tendering process is modelled using the misuse case approach. The model can benefit e-tendering developers as well as other researchers and experts in the e-tendering domain.
The Goodwin model: behind the Hill function.
Gonze, Didier; Abou-Jaoudé, Wassim
2013-01-01
The Goodwin model is a 3-variable model demonstrating the emergence of oscillations in a delayed negative feedback-based system at the molecular level. This prototypical model and its variants have been commonly used to model circadian and other genetic oscillators in biology. The only source of non-linearity in this model is a Hill function, characterizing the repression process. It was mathematically shown that to obtain limit-cycle oscillations, the Hill coefficient must be larger than 8, a value often considered unrealistic. It is indeed difficult to explain such a high coefficient with simple cooperative dynamics. We present here molecular models of the standard Goodwin model, based on single or multisite phosphorylation/dephosphorylation processes of a transcription factor, which have been previously shown to generate switch-like responses. We show that when the phosphorylation/dephosphorylation processes are fast enough, the limit cycle obtained with a multisite phosphorylation-based mechanism is in very good quantitative agreement with the oscillations observed in the Goodwin model. Conditions in which the detailed mechanism is well approximated by the Goodwin model are given. A variant of the Goodwin model which displays sharp thresholds and relaxation oscillations is also explained by a double phosphorylation/dephosphorylation-based mechanism through a bistable behavior. These results not only provide rational support for the Goodwin model but also highlight the crucial role in biochemical oscillators of the speed of post-translational processes, whose response curves are usually established at steady state.
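A minimal sketch of the standard Goodwin oscillator described above, with the repressive Hill function as its only nonlinearity. The rate constants and the forward-Euler integration are illustrative choices, not the paper's parameterization; a Hill coefficient of n = 10 (above the threshold of 8) is used:

```python
def hill(z, K=1.0, n=10):
    """Repressive Hill function, the sole nonlinearity of the model."""
    return K**n / (K**n + z**n)

def goodwin(x0=0.1, y0=0.1, z0=0.1, n=10, dt=0.001, steps=50000,
            v0=1.0, k=0.2):
    """Forward-Euler integration of the 3-variable Goodwin loop:
    X represses its own production via Z (X -> Y -> Z --| X)."""
    x, y, z = x0, y0, z0
    zs = []
    for _ in range(steps):
        dx = v0 * hill(z, 1.0, n) - k * x  # repressed synthesis, decay
        dy = x - k * y
        dz = y - k * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        zs.append(z)
    return zs

zs = goodwin()
```

With n below the critical value the trajectory spirals into a stable fixed point instead of a limit cycle, which is the behaviour the cited mathematical result captures.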
Infiltration under snow cover: Modeling approaches and predictive uncertainty
NASA Astrophysics Data System (ADS)
Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel
2017-03-01
Groundwater recharge from snowmelt represents a temporal redistribution of precipitation. This is extremely important because the rate and timing of snowpack drainage have substantial consequences for aquifer recharge patterns, which in turn affect groundwater availability throughout the rest of the year. The modeling methods developed to estimate drainage from a snowpack, which typically rely on temporally dense point measurements or temporally limited, spatially dispersed calibration data, range in complexity from the simple degree-day method to more complex and physically based energy balance approaches. While this gamut of snowmelt models is routinely used to aid in water resource management, a comparison of the predictive uncertainties of snowmelt models had not previously been done. Therefore, we established a snowmelt model calibration dataset that is both temporally dense and represents the integrated snowmelt infiltration signal for the Vers Chez le Brandt research catchment, which functions as a rather unique natural lysimeter. We then evaluated the uncertainty associated with the degree-day, a modified degree-day and energy balance snowmelt model predictions using the null-space Monte Carlo approach. All three melt models underestimate total snowpack drainage, underestimate the rate of early and midwinter drainage and overestimate spring snowmelt rates. The actual rate of snowpack water loss is more constant over the course of the entire winter season than the snowmelt models would imply, indicating that midwinter melt can contribute as significantly as springtime snowmelt to groundwater recharge in low alpine settings. Further, actual groundwater recharge could be between 2 and 31% greater than the snowmelt models suggest over the total winter season. This study shows that snowmelt model predictions can have considerable uncertainty, which may be reduced by the inclusion of more data that allows for the use of more complex approaches such as the energy balance approach.
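The simple end of the model spectrum mentioned above, the degree-day method, reduces to one line: daily melt is proportional to the excess of air temperature over a base temperature. The degree-day factor below is an illustrative value, not one calibrated for this catchment:

```python
def degree_day_melt(temps_c, ddf=3.0, t_base=0.0):
    """Daily melt (mm water equivalent) from the degree-day method:
    M = DDF * max(T - T_base, 0), with DDF in mm/(degC * day)."""
    return [ddf * max(t - t_base, 0.0) for t in temps_c]

# Three days at -5, 0 and 2 degC: melt only occurs on the warm day.
melt = degree_day_melt([-5.0, 0.0, 2.0])
```

The energy balance approaches referred to in the abstract replace this single empirical factor with explicit radiative, sensible and latent heat terms, at the cost of far heavier data requirements.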
Analyzing the Boer-Mulders function within different quark models
Courtoy, A.; Vento, V.; Scopetta, S.
2009-10-01
A general formalism for the evaluation of time-reversal odd parton distributions is applied here to calculate the Boer-Mulders function. The same formalism when applied to evaluate the Sivers function led to results which fulfill the Burkardt sum rule quite well. The calculation here has been performed for two different models of proton structure: a constituent quark model and the MIT bag model. In the latter case, important differences are found with respect to a previous evaluation in the same framework, a feature already encountered in the calculation of the Sivers function. The results obtained are consistent with the present wisdom, i.e., the contributions for the u and d flavors turn out to have the same sign, following the pattern suggested analyzing the model-independent features of the impact parameter dependent generalized parton distributions. It is therefore confirmed that the present approach is suitable for the analysis of time-reversal odd distribution functions. A critical comparison between the outcomes of the two models, as well as between the results of the calculations for the Sivers and Boer-Mulders functions, is also carried out.
Gene function hypotheses for the Campylobacter jejuni glycome generated by a logic-based approach.
Sternberg, Michael J E; Tamaddoni-Nezhad, Alireza; Lesk, Victor I; Kay, Emily; Hitchen, Paul G; Cootes, Adrian; van Alphen, Lieke B; Lamoureux, Marc P; Jarrell, Harold C; Rawlings, Christopher J; Soo, Evelyn C; Szymanski, Christine M; Dell, Anne; Wren, Brendan W; Muggleton, Stephen H
2013-01-09
Increasingly, experimental data on biological systems are obtained from several sources, and computational approaches are required to integrate this information and derive models for the function of the system. Here, we demonstrate the power of a logic-based machine learning approach to propose hypotheses for gene function, integrating information from two diverse experimental approaches. Specifically, we use inductive logic programming that automatically proposes hypotheses explaining the empirical data with respect to logically encoded background knowledge. We study the capsular polysaccharide biosynthetic pathway of the major human gastrointestinal pathogen Campylobacter jejuni. We consider several key steps in the formation of capsular polysaccharide consisting of 15 genes, of which 8 have assigned function, and we explore the extent to which functions can be hypothesised for the remaining 7. Two sources of experimental data provide the information for learning: the results of knockout experiments on the genes involved in capsule formation, and the absence/presence of capsule genes in a multitude of strains of different serotypes. The machine learning uses the pathway structure as background knowledge. We propose assignments of specific genes to five previously unassigned reaction steps. For four of these steps, there was an unambiguous optimal assignment of gene to reaction, and for the fifth there were three candidate genes. Several of these assignments were consistent with additional experimental results. We therefore show that the logic-based methodology provides a robust strategy to integrate results from different experimental approaches and propose hypotheses for the behaviour of a biological system.
Enhancements to the SSME transfer function modeling code
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.
1995-01-01
This report details the results of a one-year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements that extend the functionality of the transfer function modeling codes are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction to ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID), including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the earlier report and is repeated here for convenience. Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files, and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low-pass filtered prior to the modeling process in an
Simplified Approach to Work Function Modulation in Polyelectrolyte Multilayers.
Torasso, Nicolás; Armaleo, Juan M; Tagliazucchi, Mario; Williams, Federico J
2017-03-07
The layer-by-layer (LbL) method is based on sequential deposition of polycations and polyanions. Many of the properties of polyelectrolyte thin films deposited via this method depend on the nature of the topmost layer. Thus, these properties show odd-even oscillations during multilayer growth as the topmost layer alternates from polycations to polyanions. The work function of a (semi)conductive substrate modified with an LbL polyelectrolyte multilayer also displays an oscillatory behavior independent of film thickness. The topmost layer modulates the work function of a substrate buried well below the film. In agreement with previous observations, in this work, we show that the work function of a gold substrate changes periodically with the number of adsorbed layers, as different combinations of polycations and polyanions are deposited using the LbL method. For the first time, we rationalize this behavior in terms of formation of a dipole layer between the excess charge at the topmost layer and the charge of the metal substrate, and we put forward a semiquantitative model based on a continuum description of the electrostatics of the system that reproduces the experimental observations.
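The dipole-layer picture invoked above can be put in semiquantitative form with the Helmholtz relation, which gives the potential step (and hence the work-function shift) produced by an areal density of surface dipoles. The dipole density and moment below are illustrative numbers, not the paper's fit:

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
DEBYE = 3.33564e-30       # 1 debye in C*m

def helmholtz_shift_volts(dipoles_per_m2, mu_debye):
    """Potential step across a dipole layer: delta_V = N * mu / eps0.
    A 1 V step corresponds to a 1 eV change in work function."""
    return dipoles_per_m2 * mu_debye * DEBYE / EPS0

# Illustrative: 1e18 dipoles/m^2 of 1 D each, normal to the surface.
shift = helmholtz_shift_volts(1e18, 1.0)
```

Flipping the sign of the topmost layer's excess charge flips the sign of the dipole and hence of the shift, which is the odd-even oscillation the abstract reports.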
Lightning Modelling: From 3D to Circuit Approach
NASA Astrophysics Data System (ADS)
Moussa, H.; Abdi, M.; Issac, F.; Prost, D.
2012-05-01
The topic of this study is the electromagnetic environment and electromagnetic interference (EMI) effects, specifically the modelling of lightning indirect effects [1] on aircraft electrical systems located on remote and highly exposed equipment, such as the nose landing gear (NLG) and the nacelle, through a circuit approach. The main goal of the presented work, funded by a French national project, PREFACE, is to propose a simple equivalent electrical circuit to represent a geometrical structure, taking into account the mutual inductances, self-inductances and resistances, which play a fundamental role in the lightning current distribution. This model is then intended to be coupled to a functional one describing a power train chain composed of a converter, a shielded power harness and a motor or a set of resistors used as a load for the converter. The novelty here is to provide a qualitative pre-sizing approach that allows integration choices to be explored in the pre-design phases. The tool is intended to offer a user-friendly way to reply rapidly to calls for tender, taking into account the lightning constraints. Two cases are analysed: first, an NLG composed of tubular pieces that can easily be approximated by equivalent cylindrical straight conductors, so that the passive R, L and M elements of the structure can be extracted through analytical engineering formulas such as those implemented in the partial element equivalent circuit (PEEC) [2] technique; second, the same approach is to be applied to an electrical de-icing nacelle sub-system.
Maximum entropy models of ecosystem functioning
NASA Astrophysics Data System (ADS)
Bertram, Jason
2014-12-01
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
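The entropy-maximisation algorithm referenced above can be sketched for the simplest discrete case: find the distribution of maximum entropy subject to a mean constraint. The solution has the exponential (Boltzmann-like) form p_i ∝ exp(−λ v_i), with the multiplier λ found here by bisection; the values and target mean are illustrative:

```python
import math

def maxent_dist(values, target_mean, tol=1e-10):
    """MaxEnt distribution over discrete values subject to a fixed mean:
    p_i = exp(-lam * v_i) / Z, with lam chosen to match the constraint."""
    def mean(lam):
        w = [math.exp(-lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0          # bracket for the multiplier
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid              # mean decreases as lam grows
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

vals = [0.0, 1.0, 2.0, 3.0]
p = maxent_dist(vals, 1.2)
```

In the sample-frequency interpretation emphasised in the abstract, p is the most probable frequency distribution consistent with the constraint, not merely the least-informative one.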
Thermal performance curves of Paramecium caudatum: a model selection approach.
Krenek, Sascha; Berendonk, Thomas U; Petzoldt, Thomas
2011-05-01
The ongoing climate change has motivated numerous studies investigating the temperature response of various organisms, especially that of ectotherms. To correctly describe the thermal performance of these organisms, functions are needed that fit the complete optimum curve sufficiently well. Surprisingly, model comparisons for the temperature dependence of population growth rates of an important ectothermic group, the protozoa, are still missing. In this study, temperature reaction norms of natural isolates of the freshwater protist Paramecium caudatum were investigated, considering nearly the entire temperature range. These reaction norms were used to estimate thermal performance curves by applying a set of commonly used model functions. An information-theoretic approach was used to compare models and to identify the best ones for describing these data. Our results indicate that models which can describe negative growth at the high- and low-temperature branches of an optimum curve are preferable. This is a prerequisite for accurately calculating the critical upper and lower thermal limits. While we detected a temperature optimum of around 29 °C for all investigated clonal strains, the critical thermal limits differed considerably between individual clones. Here, the tropical clone showed the narrowest thermal tolerance, with a shift of its critical thermal limits to higher temperatures.
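The information-theoretic model comparison used above typically ranks candidate curve fits by AIC and converts the differences to Akaike weights. A least-squares form of this bookkeeping (illustrative, not the study's code) is:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit:
    AIC = n * ln(RSS / n) + 2k, with k fitted parameters and n points."""
    return n * math.log(rss / n) + 2 * k

def akaike_weights(aics):
    """Relative support for each model from its AIC difference."""
    d = [a - min(aics) for a in aics]
    w = [math.exp(-0.5 * di) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

# Hypothetical example: model A (2 params) fits slightly better than
# model B (3 params) on 50 data points.
scores = [aic(10.0, 50, 2), aic(12.0, 50, 3)]
weights = akaike_weights(scores)
```

The extra parameters of the more flexible optimum-curve models are only justified when the RSS improvement outweighs the 2k penalty, which is exactly the trade-off the study's model selection resolves.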
Lithium battery aging model based on Dakin's degradation approach
NASA Astrophysics Data System (ADS)
Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel
2016-09-01
This paper proposes and validates a calendar and power-cycling aging model for two different lithium battery technologies. The model development is based on data from the previous SIMCAL and SIMSTOCK projects, in which the effects of battery state of charge, temperature and current magnitude on aging were studied on a large panel of different battery chemistries. In this work, the data are analyzed using Dakin's degradation approach: the logarithms of the battery capacity fade and of the resistance increase evolve linearly with aging. The slopes identified from these straight lines correspond to battery aging rates. Thus, an expression for the battery aging rate as a function of the aging factors was deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its expansion using a Taylor series was consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of current magnitude and temperature on aging was simulated. Interestingly, the aging rate increases sharply with decreasing temperature in the range −5 °C to 25 °C and with increasing temperature in the range 25 °C to 60 °C.
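The structure of such a model can be sketched as follows: an Arrhenius-type rate (a common simplification of the Eyring form) sets the slope of a log-linear, Dakin-style capacity-fade law. The prefactor, activation energy and units below are illustrative placeholders, not the paper's fitted values:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def aging_rate(temp_k, prefactor=1e6, ea_j_mol=5.0e4):
    """Arrhenius-type aging rate per day; prefactor and activation
    energy are illustrative, not fitted to any battery data."""
    return prefactor * math.exp(-ea_j_mol / (R * temp_k))

def capacity_fade(t_days, temp_k, fade0=0.01):
    """Dakin kinetics: log(fade) grows linearly in time, with slope
    equal to the aging rate at the given temperature."""
    return fade0 * math.exp(aging_rate(temp_k) * t_days)
```

In the full model the rate also depends on state of charge and current magnitude, which is why the paper speaks of an aging rate expression as a function of several aging factors.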
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variable (covariate) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling keeps a balance between the control for false positives and the
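For the intraclass correlation mentioned above, a minimal non-LME baseline is the classical ICC(1) estimator from a balanced one-way random-effects ANOVA. The abstract's approach derives ICC from an LME with crossed random effects; this sketch covers only the textbook special case:

```python
def icc1(groups):
    """ICC(1) for balanced one-way random-effects ANOVA:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where each of the g groups
    contains k observations."""
    g = len(groups)
    k = len(groups[0])
    grand = sum(sum(grp) for grp in groups) / (g * k)
    means = [sum(grp) / k for grp in groups]
    # Between-group and within-group mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (g - 1)
    msw = sum((x - m) ** 2
              for grp, m in zip(groups, means) for x in grp) / (g * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

When every within-group pair agrees exactly, MSW is zero and the estimator returns 1; the LME route generalizes this to unbalanced designs and to the confounding fixed effects noted above.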
An implicit approach to model plant infestation by insect pests.
Lopes, Christelle; Spataro, Thierry; Doursat, Christophe; Lapchin, Laurent; Arditi, Roger
2007-09-07
Various spatial approaches have been developed to study the effect of spatial heterogeneities on population dynamics. We present in this paper a flux-based model to describe an aphid-parasitoid system in a closed and spatially structured environment, i.e. a greenhouse. Derived from previous work and adapted to host-parasitoid interactions, our model represents the level of plant infestation as a continuous variable corresponding to the number of plants bearing a given density of pests at a given time. The variation of this variable is described by a partial differential equation. It is coupled to an ordinary differential equation and a delay-differential equation that describe the parasitized host population and the parasitoid population, respectively. We have applied our approach to the pest Aphis gossypii and to one of its parasitoids, Lysiphlebus testaceipes, in a melon greenhouse. Numerical simulations showed that, regardless of the number and distribution of hosts in the greenhouse, the aphid population is slightly larger if parasitoids display a type III rather than a type II functional response. However, the population dynamics depend on the initial distribution of hosts and the initial density of parasitoids released, which is interesting for biological control strategies. Sensitivity analysis showed that the delay in the parasitoid equation and the growth rate of the pest population are crucial parameters for predicting the dynamics. We demonstrate here that such a flux-based approach generates relevant predictions with a more synthetic formalism than a common plant-by-plant model. We also explain how this approach can be better adapted to test different management strategies and to manage crops across several greenhouses.
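The type II and type III functional responses compared above have standard Holling forms; a sketch with illustrative attack rate and handling time:

```python
def holling_type2(n, a=1.0, h=0.1):
    """Type II: f(N) = a*N / (1 + a*h*N); hyperbolic saturation."""
    return a * n / (1 + a * h * n)

def holling_type3(n, a=1.0, h=0.1):
    """Type III: f(N) = a*N^2 / (1 + a*h*N^2); sigmoid, weak attack
    at low prey density."""
    return a * n * n / (1 + a * h * n * n)
```

Both saturate at 1/h per parasitoid per unit time, but the type III response attacks far fewer hosts at low density, which is consistent with the simulation result that aphid populations end up slightly larger under a type III response.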
A Dynamic Density Functional Theory Approach to Diffusion in White Dwarfs and Neutron Star Envelopes
NASA Astrophysics Data System (ADS)
Diaw, A.; Murillo, M. S.
2016-09-01
We develop a multicomponent hydrodynamic model based on moments of the Born-Bogolyubov-Green-Kirkwood-Yvon hierarchy equations for physical conditions relevant to astrophysical plasmas. These equations incorporate strong correlations through a density functional theory closure, while transport enters through a relaxation approximation. This approach enables the introduction of Coulomb coupling correction terms into the standard Burgers equations. The diffusive currents for these strongly coupled plasmas are self-consistently derived. The settling of impurities and its impact on cooling can be greatly affected by strong Coulomb coupling, which we show can be quantified using the direct correlation function.
Approaches to Testing Interaction Effects Using Structural Equation Modeling Methodology.
ERIC Educational Resources Information Center
Li, Fuzhong; Harmer, Peter; Duncan, Terry E.; Duncan, Susan C.; Acock, Alan; Boles, Shawn
1998-01-01
Reviews a single indicator approach and multiple indicator approaches that simplify testing interaction effects using structural equation modeling. An illustrative application examines the interactive effect of perceptions of competence and perceptions of autonomy on exercise-intrinsic motivation. (SLD)
Conserved Functional Motifs and Homology Modeling to Predict Hidden Moonlighting Functional Sites
Wong, Aloysius; Gehring, Chris; Irving, Helen R.
2015-01-01
Moonlighting functional centers within proteins can provide them with hitherto unrecognized functions. Here, we review how hidden moonlighting functional centers, which we define as binding sites that have catalytic activity or regulate protein function in a novel manner, can be identified using targeted bioinformatic searches. Functional motifs used in such searches include amino acid residues that are conserved across species, many of which have been assigned functional roles based on experimental evidence. Molecules identified in this manner in searches for cyclic mononucleotide cyclases in plants are used as examples. The strength of this computational approach is enhanced when good homology models can be developed to test the functionality of the predicted centers in silico, which, in turn, increases confidence in the ability of the identified candidates to perform the predicted functions. Computational characterization of moonlighting functional centers is not diagnostic for catalysis but serves as a rapid screening method, and it highlights testable targets from a potentially large pool of candidates for the subsequent in vitro and in vivo experiments required to confirm the functionality of the predicted moonlighting centers. PMID:26106597
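The targeted motif search described above can be sketched as a pattern scan: a conserved motif written in PROSITE-like notation is translated into a regular expression and run over candidate sequences. The motif and sequences below are invented placeholders, not the catalytic-center motifs from the review.

```python
import re

# Toy targeted motif search (motif and sequences invented for illustration):
# the PROSITE-style pattern [RK]-x(2)-[DE]-x-G expressed as a regex.
motif = r"[RK].{2}[DE].G"

sequences = {
    "candidate_1": "MSTSAAEAGLLRQWDTGIV",   # contains RQWDTG -> hit
    "candidate_2": "MGLSAAAPLLVVTTTS",      # no basic residue -> no match
}

for name, seq in sequences.items():
    # Report 1-based positions of motif occurrences in each candidate.
    hits = [(m.start() + 1, m.group()) for m in re.finditer(motif, seq)]
    print(name, hits)
```

In practice such hits are only a shortlist; as the abstract stresses, homology modeling and wet-lab experiments are needed to confirm functionality.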
ERIC Educational Resources Information Center
Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.
2011-01-01
Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…
NASA Astrophysics Data System (ADS)
Ishii, Hiroyuki; Kobayashi, Nobuhiko; Hirose, Kenji
2017-01-01
We present a wave-packet dynamical approach to charge transport using maximally localized Wannier functions based on density functional theory including van der Waals interactions. We apply it to the transport properties of pentacene and rubrene single crystals and show the temperature-dependent natures from bandlike to thermally activated behaviors as a function of the magnitude of external static disorder. We compare the results with those obtained by the conventional band and hopping models and experiments.
Multifractal analysis of Chinese stock volatilities based on the partition function approach
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2008-08-01
We have performed a detailed multifractal analysis on the 1-min volatility of two indexes and 1139 stocks in the Chinese stock markets based on the partition function approach. The partition function χq(s) scales as a power law with respect to the box size s. The scaling exponents τ(q) form a nonlinear function of q. Statistical tests based on bootstrapping show that the extracted multifractal nature is significant at the 1% significance level. The individual securities can be well modeled by the p-model in turbulence with p=0.40±0.02. Based on the idea of ensemble averaging (including quenched and annealed average), we treat each stock exchange as a whole and confirm the existence of multifractal nature in the Chinese stock markets.
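The scaling relation above, χ_q(s) ∝ s^τ(q), can be verified end-to-end on a synthetic measure generated by the very p-model the abstract fits (p ≈ 0.40). The sketch below builds a binomial multiplicative cascade, computes the partition function over dyadic boxes, and extracts τ(q) as a log-log slope; the real analysis of course used 1-min volatilities rather than a synthetic cascade.

```python
import numpy as np

# Partition-function multifractal analysis on a synthetic binomial measure
# (the p-model with p = 0.4 that the abstract fits to individual stocks).
p, depth = 0.40, 12
mu = np.array([1.0])
for _ in range(depth):                        # multiplicative cascade: each cell
    mu = (mu[:, None] * np.array([p, 1 - p])).ravel()  # splits into (p, 1-p) halves

qs = np.array([-2.0, 0.0, 1.0, 2.0, 4.0])
sizes = 2 ** np.arange(1, 8)                  # dyadic box sizes (in samples)
tau = []
for q in qs:
    chi = []
    for s in sizes:
        boxes = mu.reshape(-1, s).sum(axis=1)  # box measures mu_i(s)
        chi.append(np.sum(boxes ** q))         # chi_q(s) = sum_i mu_i(s)^q
    # tau(q) is the slope of log chi_q(s) versus log s
    tau.append(np.polyfit(np.log(sizes / mu.size), np.log(chi), 1)[0])

tau = np.array(tau)
print(np.round(tau, 3))   # tau(1) = 0 for a normalized measure; tau(q) is nonlinear
```

For this cascade the exact exponents are τ(q) = -log2(p^q + (1-p)^q), so the nonlinearity of the recovered τ(q) — the multifractal signature the paper tests by bootstrapping — is directly checkable.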
Local Bathymetry Estimation Using Variational Inverse Modeling: A Nested Approach
NASA Astrophysics Data System (ADS)
Almeida, T. G.; Walker, D. T.; Farquharson, G.
2014-12-01
Estimation of subreach river bathymetry from remotely sensed surface velocity data is presented using variational inverse modeling applied to the 2D depth-averaged shallow-water equations (SWEs). A nested approach is adopted to focus on obtaining an accurate estimate of bathymetry over a small region of interest within a larger complex hydrodynamic system. This approach reduces computational cost significantly. We begin by constructing a minimization problem with a cost function defined by the error between observed and estimated surface velocities, and then apply the SWEs as a constraint on the velocity field. An adjoint SWE model is developed through the use of Lagrange multipliers, converting the unconstrained minimization problem into a constrained one. The adjoint model solution is used to calculate the gradient of the cost function with respect to bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best fit to the observational data. In this application of the algorithm, the 2D depth-averaged flow is computed within a nested framework using Delft3D-FLOW as the forward computational model. First, an outer simulation is generated using discharge rate and other measurements from USGS and NOAA, assuming a uniform bottom-friction coefficient. Then a nested, higher-resolution inner model is constructed using open boundary condition data interpolated from the outer model (see figure). Riemann boundary conditions with specified tangential velocities are utilized to ensure a near seamless transition between outer and inner model results. The initial-guess bathymetry matches the outer model bathymetry, and the iterative assimilation procedure is used to adjust the bathymetry only for the inner model. The observation data were collected during the ONR Rivet II field exercise for the mouth of the Columbia River near Hammond, OR. A dual-beam squinted along-track interferometric synthetic aperture radar…
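The descent loop at the heart of the method can be illustrated on a drastically simplified stand-in for the adjoint-SWE machinery (all numbers hypothetical): in a 1D channel with fixed discharge q, depth-averaged continuity gives u = q/h, so estimating the inverse depth makes the same velocity-misfit cost quadratic with an analytic gradient.

```python
import numpy as np

# Toy analogue of the paper's gradient-descent bathymetry inversion:
# continuity u = q/h in a 1D channel, with inverse depth b = 1/h as the
# control so that J(b) = sum (q*b - u_obs)^2 is quadratic.
q = 2.0                                       # discharge per unit width (m^2/s)
h_true = np.array([1.0, 2.5, 4.0, 1.5, 3.0])  # "unknown" depths (m)
u_obs = q / h_true                            # synthetic observed surface velocities

b = np.full_like(h_true, 0.5)                 # initial guess: 2 m depth everywhere
step = 0.1
for _ in range(200):
    grad = 2.0 * q * (q * b - u_obs)          # analytic gradient of J (adjoint's role)
    b -= step * grad                          # steepest-descent update

h_est = 1.0 / b
print(np.round(h_est, 3))
```

In the real problem the gradient is not analytic — it comes from one adjoint-model solve per iteration — but the outer structure (misfit, gradient, descent update) is the same.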
Executive function and food approach behavior in middle childhood
Groppe, Karoline; Elsner, Birgit
2014-01-01
Executive function (EF) has long been considered to be a unitary, domain-general cognitive ability. However, recent research suggests differentiating “hot” affective and “cool” cognitive aspects of EF. Yet, findings regarding this two-factor construct are still inconsistent. In particular, the development of this factor structure remains unclear and data on school-aged children are lacking. Furthermore, studies linking EF and overweight or obesity suggest that EF contributes to the regulation of eating behavior. So far, however, the links between EF and eating behavior have rarely been investigated in children and non-clinical populations. First, we examined whether EF can be divided into hot and cool factors or whether they actually correspond to a unitary construct in middle childhood. Second, we examined how hot and cool EF are associated with different eating styles that put children at risk of becoming overweight during development. Hot and cool EF were assessed experimentally in a non-clinical population of 1657 elementary-school children (aged 6–11 years). The “food approach” behavior was rated mainly via parent questionnaires. Findings indicate that hot EF is distinguishable from cool EF. However, only cool EF seems to represent a coherent functional entity, whereas hot EF does not seem to be a homogenous construct. This was true for a younger and an older subgroup of children. Furthermore, different EF components were correlated with eating styles, such as responsiveness to food, desire to drink, and restrained eating in girls but not in boys. This shows that lower levels of EF are not only seen in clinical populations of obese patients but are already associated with food approach styles in a normal population of elementary school-aged girls. Although the direction of effect still has to be clarified, results point to the possibility that EF constitutes a risk factor for eating styles contributing to the development of overweight in the long run.
Polymicrobial Multi-functional Approach for Enhancement of Crop Productivity.
Reddy, Chilekampalli A; Saravanan, Ramu S
2013-01-01
There is an increasing global need for enhancing the food production to meet the needs of the fast-growing human population. Traditional approach to increasing agricultural productivity through high inputs of chemical nitrogen and phosphate fertilizers and pesticides is not sustainable because of high costs and concerns about global warming, environmental pollution, and safety concerns. Therefore, the use of naturally occurring soil microbes for increasing productivity of food crops is an attractive eco-friendly, cost-effective, and sustainable alternative to the use of chemical fertilizers and pesticides. There is a vast body of published literature on microbial symbiotic and nonsymbiotic nitrogen fixation, multiple beneficial mechanisms used by plant growth-promoting rhizobacteria (PGPR), the nature and significance of mycorrhiza-plant symbiosis, and the growing technology on production of efficacious microbial inoculants. These areas are briefly reviewed here. The construction of an inoculant with a consortium of microbes with multiple beneficial functions such as N(2) fixation, biocontrol, phosphate solubilization, and other plant growth-promoting properties is a positive new development in this area in that a single inoculant can be used effectively for increasing the productivity of a broad spectrum of crops including legumes, cereals, vegetables, and grasses. Such a polymicrobial inoculant containing several microorganisms for each major function involved in promoting the plant growth and productivity gives it greater stability and wider applications for a range of major crops. Intensifying research in this area leading to further advances in our understanding of biochemical/molecular mechanisms involved in plant-microbe-soil interactions coupled with rapid advances in the genomics-proteomics of beneficial microbes should lead to the design and development of inoculants with greater efficacy for increasing the productivity of a wide range of crops.
A hybrid agent-based approach for modeling microbiological systems.
Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing
2008-11-21
Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10³ cells and 1.2×10⁶ molecules. The model produces cell migration patterns that are comparable to laboratory observations.
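The hybridization described above — discrete agents obeying rules, molecules as a continuous field obeying an equation — can be sketched in a few lines. The chemotaxis toy below uses invented parameters and a 1D grid, far smaller than the paper's assay, purely to show the two update styles coexisting in one loop.

```python
import numpy as np

# Minimal hybrid sketch in the spirit of the abstract (parameters invented):
# cells are discrete agents; the chemoattractant is a continuous field updated
# by a finite-difference diffusion step; agents bias their walk up-gradient
# and consume attractant where they sit.
rng = np.random.default_rng(1)
n_sites, n_agents, steps = 50, 30, 400
D, dt, consume = 0.2, 1.0, 0.01

c = np.zeros(n_sites)                     # attractant concentration (field part)
pos = rng.integers(0, 10, n_agents)       # agents start near the left edge
for _ in range(steps):
    c[-1] = 1.0                           # fixed source at the right boundary
    lap = np.roll(c, 1) + np.roll(c, -1) - 2 * c
    lap[0] = c[1] - c[0]; lap[-1] = 0.0   # no-flux left, clamped right
    c += dt * D * lap                     # continuous-field update (equation part)
    for i in range(n_agents):             # agent-based update (rule part)
        x = pos[i]
        left = c[x - 1] if x > 0 else -np.inf
        right = c[x + 1] if x < n_sites - 1 else -np.inf
        move = 1 if right >= left else -1
        if rng.random() < 0.8:            # mostly follow the local gradient
            pos[i] = np.clip(x + move, 0, n_sites - 1)
        else:                             # occasional random step
            pos[i] = np.clip(x + rng.choice([-1, 1]), 0, n_sites - 1)
        c[pos[i]] = max(c[pos[i]] - consume, 0.0)

print(round(pos.mean(), 1))               # agents drift toward the source
```

The design point is the one the abstract makes: the expensive entities (cells) stay individually resolved, while the numerous entities (molecules) collapse into a quantity, keeping the model tractable.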
Modelling the ecological niche from functional traits
Kearney, Michael; Simpson, Stephen J.; Raubenheimer, David; Helmuth, Brian
2010-01-01
The niche concept is central to ecology but is often depicted descriptively through observing associations between organisms and habitats. Here, we argue for the importance of mechanistically modelling niches based on functional traits of organisms and explore the possibilities for achieving this through the integration of three theoretical frameworks: biophysical ecology (BE), the geometric framework for nutrition (GF) and dynamic energy budget (DEB) models. These three frameworks are fundamentally based on the conservation laws of thermodynamics, describing energy and mass balance at the level of the individual and capturing the prodigious predictive power of the concepts of ‘homeostasis’ and ‘evolutionary fitness’. BE and the GF provide mechanistic multi-dimensional depictions of climatic and nutritional niches, respectively, providing a foundation for linking organismal traits (morphology, physiology, behaviour) with habitat characteristics. In turn, they provide driving inputs and cost functions for mass/energy allocation within the individual as determined by DEB models. We show how integration of the three frameworks permits calculation of activity constraints, vital rates (survival, development, growth, reproduction) and ultimately population growth rates and species distributions. When integrated with contemporary niche theory, functional trait niche models hold great promise for tackling major questions in ecology and evolutionary biology. PMID:20921046
Chen, Huaihou; Wang, Yuanjia; Paik, Myunghee Cho; Choi, H Alex
2013-10-01
Multilevel functional data are collected in many biomedical studies. For example, in a study of the effect of Nimodipine on patients with subarachnoid hemorrhage (SAH), patients underwent multiple 4-hour treatment cycles. Within each treatment cycle, subjects' vital signs were reported every 10 minutes. These data have a natural multilevel structure with treatment cycles nested within subjects and measurements nested within cycles. Most literature on nonparametric analysis of such multilevel functional data focuses on conditional approaches using functional mixed effects models. However, parameters obtained from the conditional models do not have direct interpretations as population average effects. When population effects are of interest, we may employ marginal regression models. In this work, we propose marginal approaches to fit multilevel functional data through penalized spline generalized estimating equations (penalized spline GEE). The procedure is effective for modeling multilevel correlated generalized outcomes as well as continuous outcomes without suffering from numerical difficulties. We provide a variance estimator robust to misspecification of correlation structure. We investigate the large sample properties of the penalized spline GEE estimator with multilevel continuous data and show that the asymptotics falls into two categories. In the small knots scenario, the estimated mean function is asymptotically efficient when the true correlation function is used and the asymptotic bias does not depend on the working correlation matrix. In the large knots scenario, both the asymptotic bias and variance depend on the working correlation. We propose a new method to select the smoothing parameter for penalized spline GEE based on an estimate of the asymptotic mean squared error (MSE). We conduct extensive simulation studies to examine property of the proposed estimator under different correlation structures and sensitivity of the variance estimation to the choice of the working correlation structure.
Mathematical Modelling Approach in Mathematics Education
ERIC Educational Resources Information Center
Arseven, Ayla
2015-01-01
The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "Modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…
Versatile approach for the fabrication of functional wrinkled polymer surfaces.
Palacios-Cuesta, Marta; Liras, Marta; del Campo, Adolfo; García, Olga; Rodríguez-Hernández, Juan
2014-11-11
A simple and versatile approach to obtaining patterned surfaces via wrinkle formation with variable dimensions and functionality is described. The method consists of the simultaneous heating and irradiation with UV light of a photosensitive monomer solution confined between two substrates with variable spacer thicknesses. Under these conditions, the system is photo-cross-linked, producing a rapid volume contraction while capillary forces attempt to maintain the contact between the monomer mixture and the cover. As a result of these two interacting forces, surface wrinkles were formed. Several parameters play a key role in the formation and final characteristics (amplitude and period) of the wrinkles generated, including the formulation of the photosensitive solution (e.g., the composition of the monomer mixture) and preparation conditions (e.g., temperature employed, irradiation time, and film thickness). Finally, in addition, the possibility of modifying the surface chemical composition of these wrinkled surfaces was investigated. For this purpose, either hydrophilic or hydrophobic comonomers were included in the photosensitive mixture. The resulting surface chemical composition could be finely tuned as was demonstrated by significant variations in the wettability of the structured surfaces, between 56° and 104°, when hydrophilic and hydrophobic monomers were incorporated, respectively.
A Geometric Approach to Decouple Robotino Motions and its Functional Controllability
NASA Astrophysics Data System (ADS)
Straßberger, Daniel; Mercorelli, Paolo; Sergiyenko, Oleg
2015-11-01
This paper analyses a functional control of the Robotino. The proposed control strategy considers a functional decoupling strategy realized using a geometric approach and the invertibility property of the DC drives with which the Robotino is equipped. For the given control structure, functional controllability is proven for motion trajectories of class C3, i.e. continuous functions whose third derivative is also continuous. Horizontal, vertical, and angular motions are considered, and decoupling between these motions is obtained. Control simulation results using real data from the Robotino are shown. The controller used to produce the presented results is a standard Linear Model Predictive Control (LMPC), although for the sake of brevity the standard algorithm is not shown.
A Functional Genomic Approach to Chlorinated Ethenes Bioremediation
NASA Astrophysics Data System (ADS)
Lee, P. K.; Brodie, E. L.; MacBeth, T. W.; Deeb, R. A.; Sorenson, K. S.; Andersen, G. L.; Alvarez-Cohen, L.
2007-12-01
With the recent advances in genomic sciences, a knowledge-based approach can now be taken to optimize the bioremediation of trichloroethene (TCE). During the bioremediation of a heterogeneous subsurface, it is vital to identify and quantify the functionally important microorganisms present, characterize the microbial community and measure their physiological activity. In our field experiments, quantitative PCR (qPCR) was coupled with reverse transcription (RT) to analyze both copy numbers and transcripts expressed by the 16S rRNA gene and three reductive dehalogenase (RDase) genes as biomarkers of Dehalococcoides spp. in the groundwater of a TCE-DNAPL site at Ft. Lewis (WA) that was serially subjected to biostimulation and bioaugmentation. The Dehalococcoides genus was targeted because its members are the only known organisms that can completely dechlorinate TCE to the innocuous product ethene. Biomarker quantification revealed an overall increase of more than three orders of magnitude in the total Dehalococcoides population, and quantification of the more labile and stringently regulated mRNAs confirmed that Dehalococcoides spp. were active. Parallel with our field experiments, laboratory studies were conducted to explore the physiology of Dehalococcoides isolates in order to develop relevant biomarkers that are indicative of the metabolic state of cells. Recently, we verified the function of the nitrogenase operon in Dehalococcoides sp. strain 195, and nitrogenase-encoding genes are ideal biomarker targets to assess cellular nitrogen requirement. To characterize the microbial community, we applied a high-density phylogenetic microarray (16S PhyloChip) that simultaneously monitors over 8,700 unique taxa to track the bacterial and archaeal populations through different phases of treatment. As a measure of species richness, 1,300 to 1,520 taxa were detected in groundwater samples extracted during different stages of treatment as well as in the bioaugmentation culture.
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
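The qualitative structure an FFM captures — failure modes, observation points, and failure-effect propagation paths between them — reduces diagnostically to reachability in a directed graph. The sketch below is a hypothetical toy system invented for illustration, not a NASA model or the paper's tooling.

```python
from collections import deque

# Hypothetical sketch of the qualitative structure an FFM represents:
# failure modes and observation points are nodes in a directed graph, and
# "which sensors see this failure?" is reachability along propagation edges.
edges = {                      # invented example system
    "valve_stuck":   ["low_flow"],
    "pump_degraded": ["low_flow", "high_current"],
    "low_flow":      ["low_pressure_sensor", "high_temp_sensor"],
    "high_current":  ["current_sensor"],
}

def affected_observations(failure_mode, observations):
    """Observation points reachable from a failure mode (breadth-first search)."""
    seen, queue = {failure_mode}, deque([failure_mode])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen & observations)

obs = {"low_pressure_sensor", "high_temp_sensor", "current_sensor"}
print(affected_observations("pump_degraded", obs))
```

Generic component models, as proposed in the paper, would amount to reusable subgraphs with standard interface nodes, so the same propagation signature is produced wherever a component is instantiated.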
A Wigner Function Approach to Coherence in a Talbot-Lau Interferometer
NASA Astrophysics Data System (ADS)
Imhof, Eric; Stickney, James; Squires, Matthew
2016-06-01
Using a thermal gas, we model the signal of a trapped interferometer. This interferometer uses two short laser pulses, separated by time T, which act as a phase grating for the matter waves. Near time 2T, there is an echo in the cloud's density due to the Talbot-Lau effect. Our model uses the Wigner function approach and includes a weak residual harmonic trap. The analysis shows that the residual potential limits the interferometer's visibility, shifts the echo time of the interferometer, and alters its time dependence. Loss of visibility can be mitigated by optimizing the initial trap frequency just before the interferometer cycle begins.
Approaches to ionospheric modelling, simulation and prediction
NASA Astrophysics Data System (ADS)
Schunk, R. W.; Sojka, J. J.
1992-08-01
The ionosphere is a complex, multispecies, anisotropic medium that exhibits a significant variation with time, space, season, solar cycle, and geomagnetic activity. In recent years, a wide range of models have been developed in an effort to describe ionospheric behavior. The modeling efforts include: (1) empirical models based on extensive worldwide data sets; (2) simple analytical models for a restricted number of ionospheric parameters; (3) comprehensive, 3D, time-dependent models that require supercomputers; (4) spherical harmonic models based on fits to output obtained from comprehensive numerical models; and (5) ionospheric models driven by real-time magnetospheric inputs. In an effort to achieve simplicity, some of the models have been restricted to certain altitude or latitude domains, while others have been restricted to certain ionospheric parameters, such as the F-region peak density, the auroral conductivity, and the plasma temperatures. The current status of the modeling efforts is reviewed.
A simplified approach to calculate atomic partition functions in plasmas
D'Ammando, Giuliano; Colonna, Gianpiero
2013-03-15
A simplified method to calculate the electronic partition functions and the corresponding thermodynamic properties of atomic species is presented and applied to C(I) up to C(VI) ions. The method consists in reducing the complex structure of an atom to three lumped levels. The ground level of the lumped model describes the ground term of the real atom, while the second lumped level represents the low lying states and the last one groups all the other atomic levels. It is also shown that for the purpose of thermodynamic function calculation, the energy and the statistical weight of the upper lumped level, describing high-lying excited atomic states, can be satisfactorily approximated by an analytic hydrogenlike formula. The results of the simplified method are in good agreement with those obtained by direct summation over a complete set (i.e., including all possible terms and configurations below a given cutoff energy) of atomic energy levels. The method can be generalized to include more lumped levels in order to improve the accuracy.
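The three-lumped-level reduction above is easy to state concretely: the partition function is a three-term Boltzmann sum, and thermodynamic functions follow from it. The weights and energies below are invented placeholders, not the actual C I data from the paper.

```python
import math

# Three-lumped-level electronic partition function, as in the abstract
# (illustrative g and E values, not real atomic data): ground term,
# lumped low-lying states, and one lumped high-lying level.
K_B = 8.617333262e-5           # Boltzmann constant, eV/K

levels = [                     # (statistical weight g, energy E in eV) - invented
    (9, 0.0),                  # ground term
    (5, 1.26),                 # lumped low-lying states
    (500, 9.5),                # lumped high-lying states (hydrogen-like estimate)
]

def partition_function(T):
    """Z(T) = sum_i g_i * exp(-E_i / (k_B * T))."""
    return sum(g * math.exp(-E / (K_B * T)) for g, E in levels)

def mean_energy(T):
    """Mean electronic excitation energy per atom (eV) from the lumped levels."""
    Z = partition_function(T)
    return sum(g * E * math.exp(-E / (K_B * T)) for g, E in levels) / Z

for T in (5_000.0, 15_000.0, 30_000.0):
    print(T, round(partition_function(T), 3), round(mean_energy(T), 3))
```

As in the paper, the quality of the reduction rests entirely on how the upper lumped level's effective g and E are chosen; here they are arbitrary, whereas the paper derives them from a hydrogen-like formula.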
Predicting plants - modeling traits as a function of environment
NASA Astrophysics Data System (ADS)
Franklin, Oskar
2016-04-01
A central problem in understanding and modeling vegetation dynamics is how to represent the variation in plant properties and function across different environments. Addressing this problem, there is a strong trend towards trait-based approaches, where vegetation properties are functions of the distributions of functional traits rather than of species. Recently there has been enormous progress in quantifying trait variability and its drivers and effects (Van Bodegom et al. 2012; Adler et al. 2014; Kunstler et al. 2015) based on wide-ranging datasets on a small number of easily measured traits, such as specific leaf area (SLA), wood density and maximum plant height. However, plant function depends on many other traits, and while the commonly measured trait data are valuable, they are not sufficient for driving predictive and mechanistic models of vegetation dynamics - especially under novel climate or management conditions. For this purpose we need a model to predict functional traits, also those not easily measured, and how they depend on the plants' environment. Here I present such a mechanistic model based on fitness concepts and focused on traits related to water and light limitation of trees, including: wood density, drought response, allocation to defense, and leaf traits. The model is able to predict observed patterns of variability in these traits in relation to growth and mortality, and their responses to a gradient of water limitation. The results demonstrate that it is possible to mechanistically predict plant traits as a function of the environment based on an eco-physiological model of plant fitness. References Adler, P.B., Salguero-Gómez, R., Compagnoni, A., Hsu, J.S., Ray-Mukherjee, J., Mbeau-Ache, C. et al. (2014). Functional traits explain variation in plant life-history strategies. Proc. Natl. Acad. Sci. U. S. A., 111, 740-745. Kunstler, G., Falster, D., Coomes, D.A., Hui, F., Kooyman, R.M., Laughlin, D.C. et al. (2015). Plant functional traits…
NASA Astrophysics Data System (ADS)
Cadier, Mathilde; Gorgues, Thomas; Sourisseau, Marc; Edwards, Christopher A.; Aumont, Olivier; Marié, Louis; Memery, Laurent
2017-01-01
Understanding the dynamic interplay between physical, biogeochemical and biological processes represents a key challenge in oceanography, particularly in shelf seas where complex hydrodynamics are likely to drive nutrient distribution and niche partitioning of phytoplankton communities. The Iroise Sea includes a tidal front called the 'Ushant Front' that undergoes a pronounced seasonal cycle, with a marked signal during the summer. These characteristics as well as relatively good observational sampling make it a region of choice to study processes impacting phytoplankton dynamics. This innovative modeling study employs a phytoplankton-diversity model, coupled to a regional circulation model, to explore mechanisms that alter the biogeography of phytoplankton in this highly dynamic environment. Phytoplankton assemblages are mainly influenced by the depth of the mixed layer on a seasonal time scale. Indeed, solar incident irradiance is a limiting resource for phototrophic growth, and small phytoplankton cells are advantaged over larger cells. This phenomenon is particularly relevant when vertical mixing is intense, such as during winter and early spring. Relaxation of wind-induced mixing in April causes an improvement of the irradiance experienced by cells across the whole study area. This leads, in late spring, to a competitive advantage of larger functional groups such as diatoms as long as the nutrient supply is sufficient. This dominance of large, fast-growing autotrophic cells is also maintained during summer in the productive tidally-mixed shelf waters. In the oligotrophic surface layer of the western part of the Iroise Sea, small cells coexist in a greater proportion with large, nutrient-limited cells. The productive Ushant tidal front region (1800 mg C·m⁻²·d⁻¹ between August and September) is also characterized by a high degree of coexistence between three functional groups (diatoms, micro/nano-flagellates and small eukaryotes/cyanobacteria). Consistent with…
Rival approaches to mathematical modelling in immunology
NASA Astrophysics Data System (ADS)
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
Pathway logic modeling of protein functional domains in signal transduction.
Talcott, C; Eker, S; Knapp, M; Lincoln, P; Laderoute, K
2004-01-01
Protein functional domains (PFDs) are consensus sequences within signaling molecules that recognize and assemble other signaling components into complexes. Here we describe the application of an approach called Pathway Logic to the symbolic modeling of signal transduction networks at the level of PFDs. These models are developed using Maude, a symbolic language founded on rewriting logic. Models can be queried (analyzed) using the execution, search and model-checking tools of Maude. We show how signal transduction processes can be modeled using Maude at very different levels of abstraction involving either an overall state of a protein or its PFDs and their interactions. The key insight for the latter is our algebraic representation of binding interactions as a graph.
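The rewriting-logic style of modeling can be mimicked in miniature: a system state is a set of facts, rules rewrite matching states, and a query asks whether a state satisfying some property is reachable — the analogue of Maude's `search` command. The sketch below is in Python rather than Maude, and the protein and site names are invented for illustration.

```python
from collections import deque

# Toy analogue of rewriting-logic search (Python stand-in for Maude):
# a state is a frozenset of facts; each rule rewrites a matching state.
rules = [
    # (facts required, facts removed, facts added) - invented signaling steps
    ({"ligand", "receptor_free"}, {"receptor_free"}, {"receptor_bound"}),
    ({"receptor_bound", "kinase_inactive"}, {"kinase_inactive"}, {"kinase_active"}),
    ({"kinase_active", "tf_cytosol"}, {"tf_cytosol"}, {"tf_nuclear"}),
]

def search(initial, goal_fact):
    """Breadth-first search over reachable states, like Maude's `search` query."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if goal_fact in state:
            return state
        for needs, removes, adds in rules:
            if needs <= state:                       # rule's left-hand side matches
                nxt = frozenset((state - removes) | adds)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return None                                      # goal unreachable

init = frozenset({"ligand", "receptor_free", "kinase_inactive", "tf_cytosol"})
print(sorted(search(init, "tf_nuclear")))
```

Maude adds what this sketch lacks: an algebraic term representation (including the paper's graph encoding of PFD binding), conditional rules, and model checking over the reachable state space.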
Modeling of functionally graded piezoelectric ultrasonic transducers.
Rubio, Wilfredo Montealegre; Buiochi, Flávio; Adamowski, Julio Cezar; Silva, Emílio Carlos Nelli
2009-05-01
The application of the functionally graded material (FGM) concept to piezoelectric transducers allows the design of composite transducers without interfaces, due to the continuous change of property values. Thus, large improvements can be achieved, such as reduced stress concentration and increased bonding strength and bandwidth. This work proposes to design and model FGM piezoelectric transducers and to compare their performance with non-FGM ones. Analytical and finite element (FE) models of FGM piezoelectric transducers radiating a plane pressure wave into a fluid medium are developed and their results are compared. The ANSYS software is used for the FE modeling. The analytical model is based on an FGM-equivalent acoustic transmission-line model, which is implemented using MATLAB software. Two cases are considered: (i) the transducer emits a pressure wave into water and is composed of a graded piezoceramic disk and backing and matching layers made of homogeneous materials; (ii) the transducer has no backing or matching layer; in this case, no external load is simulated. Time and frequency pressure responses are obtained through a transient analysis. The material properties are graded along the thickness direction. Linear and exponential gradation functions are implemented to illustrate the influence of gradation on the transducer pressure response, electrical impedance, and resonance frequencies.
Functional Security Model: Managers Engineers Working Together
NASA Astrophysics Data System (ADS)
Guillen, Edward Paul; Quintero, Rulfo
2008-05-01
Information security has a wide variety of solutions, including security policies, network architectures and technological applications. They are usually designed and implemented by security architects but, given their complexity, these solutions are difficult for company managers to understand, and it is the managers who ultimately fund the security project. The main goal of the functional security model is to achieve a solid, reliable and understandable security platform across the whole company, without setting aside the rigor of the recommendations and compliance with the law, in a single frame. This paper shows a general scheme of the model using important standards and tries to give an integrated solution.
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital based code.
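The numerical benefit of moving off the real axis can be seen in a small sketch (a generic single-pole Green's function, not the paper's average-atom implementation; the energy and width below are arbitrary):

```python
import numpy as np

# Toy illustration: for a single bound state at E0, the spectral density
# -Im G(E + i*eta)/pi with G(z) = 1/(z - E0) is a Lorentzian of
# half-width eta.  Evaluating just above the real axis replaces a
# delta-function spike by this smooth, easily integrated profile while
# preserving the spectral weight.
E0, eta = -0.5, 0.05                 # bound-state energy and broadening (arbitrary units)
E = np.linspace(-10.0, 10.0, 200001)
G = 1.0 / (E + 1j * eta - E0)
dos = -G.imag / np.pi                # broadened density of states

norm = np.trapz(dos, E)              # spectral weight is preserved (close to 1)
peak = E[np.argmax(dos)]             # peak remains at the bound-state energy
print(round(norm, 3), round(peak, 2))
```

Shrinking eta sharpens the peak without changing the integrated weight, which is why the broadening can be chosen purely for numerical convenience.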
Cheng, Longlong; Zhang, Guangju; Wan, Baikun; Hao, Linlin; Qi, Hongzhi; Ming, Dong
2009-01-01
Functional electrical stimulation (FES) has been widely used in the area of neural engineering. It utilizes electrical current to activate nerves innervating extremities affected by paralysis. An effective combination of a traditional PID controller and a neural network, capable of nonlinear expression and adaptive learning, supplies a more reliable approach to constructing an FES controller that helps paraplegic patients complete the actions they want. In this paper, an FES system tuned by a Radial Basis Function (RBF) Neural Network-based Proportional-Integral-Derivative (PID) model was designed to control the knee joint according to a desired trajectory through stimulation of the lower-limb muscles. Experimental results show that the FES system with the RBF Neural Network-based PID model achieves better performance when tracking the preset trajectory of the knee angle compared with the system tuned by a Ziegler-Nichols PID model.
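A minimal sketch of the control loop alone is given below, with fixed PID gains on a toy first-order plant standing in for the stimulated joint (the plant, gains and trajectory are invented, and the RBF-network gain adaptation that is the paper's contribution is omitted):

```python
import numpy as np

# Fixed-gain discrete PID driving a toy first-order plant.  Plant model,
# gains and target trajectory are illustrative assumptions, not the
# paper's identified knee-joint dynamics.
dt, T = 0.01, 5.0
kp, ki, kd = 5.0, 5.0, 0.05

def plant_step(x, u):
    # first-order response: tau * dx/dt = -x + u
    tau = 0.3
    return x + dt * (-x + u) / tau

target = lambda t: 30.0 * (1.0 - np.exp(-t))   # desired knee angle (degrees)

x, integ, prev_err = 0.0, 0.0, 0.0
errs = []
for k in range(int(T / dt)):
    err = target(k * dt) - x
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integ + kd * deriv     # PID control law
    x = plant_step(x, u)
    prev_err = err
    errs.append(err)

print(round(abs(errs[-1]), 3))                 # residual tracking error
```

The RBF-network scheme of the paper would adjust kp, ki and kd online from the tracking error instead of holding them fixed.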
Astrocytes, Synapses and Brain Function: A Computational Approach
NASA Astrophysics Data System (ADS)
Nadkarni, Suhita
2006-03-01
Modulation of synaptic reliability is one of the leading mechanisms involved in long-term potentiation (LTP) and long-term depression (LTD) and therefore has implications for information processing in the brain. A recently discovered mechanism for modulating synaptic reliability critically involves the recruitment of astrocytes, star-shaped cells that outnumber the neurons in most parts of the central nervous system. Astrocytes until recently were thought to be subordinate cells merely participating in supporting neuronal functions. New evidence, however, made available by advances in imaging technology, has changed the way we envision the role of these cells in synaptic transmission and as modulators of neuronal excitability. We put forward a novel mathematical framework based on the biophysics of the bidirectional neuron-astrocyte interactions that quantitatively accounts for two distinct experimental manifestations of the recruitment of astrocytes in synaptic transmission: a) the transformation of a low-fidelity synapse into a high-fidelity synapse and b) enhanced postsynaptic spontaneous currents when astrocytes are activated. Such a framework is not only useful for modeling neuronal dynamics in a realistic environment but also provides a conceptual basis for interpreting experiments. Based on this modeling framework, we explore the role of astrocytes in neuronal network behavior such as synchrony and correlations and compare with experimental data from cultured networks.
Vlah, Zvonimir; Seljak, Uroš; Baldauf, Tobias; McDonald, Patrick; Okumura, Teppei E-mail: seljak@physik.uzh.ch E-mail: teppei@ewha.ac.kr
2012-11-01
We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density-weighted velocity moment correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so-called finger-of-god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate the RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and the line of sight, focusing on the lowest order powers of μ and multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting the power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.
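The lowest-order (linear, Kaiser) limit of such an expansion can be checked numerically; the sketch below assumes P_s(k,μ) = (1+βμ²)² P(k) with no finger-of-god dispersion, which is only the leading piece of the distribution-function expansion, and recovers the analytic monopole and quadrupole factors:

```python
import numpy as np

# Linear-theory (Kaiser) angular dependence of the redshift-space power
# spectrum, with P(k) factored out.  This omits all the dispersion
# (FoG) terms the full expansion is concerned with.
beta = 0.5
mu = np.linspace(-1.0, 1.0, 100001)
kaiser = (1.0 + beta * mu**2) ** 2

def multipole(ell):
    # P_ell = (2*ell + 1)/2 * integral of P(mu) * L_ell(mu) over [-1, 1]
    legendre = np.polynomial.legendre.Legendre.basis(ell)(mu)
    return (2 * ell + 1) / 2.0 * np.trapz(kaiser * legendre, mu)

mono, quad = multipole(0), multipole(2)
# Analytic Kaiser factors: 1 + 2b/3 + b^2/5 and 4b/3 + 4b^2/7
print(round(mono, 4), round(quad, 4))
```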
Pathprinting: An integrative approach to understand the functional basis of disease.
Altschuler, Gabriel M; Hofmann, Oliver; Kalatskaya, Irina; Payne, Rebecca; Ho Sui, Shannan J; Saxena, Uma; Krivtsov, Andrei V; Armstrong, Scott A; Cai, Tianxi; Stein, Lincoln; Hide, Winston A
2013-01-01
New strategies to combat complex human disease require systems approaches to biology that integrate experiments from cell lines, primary tissues and model organisms. We have developed Pathprint, a functional approach that compares gene expression profiles in a set of pathways, networks and transcriptionally regulated targets. It can be applied universally to gene expression profiles across species. Integration of large-scale profiling methods and curation of the public repository overcomes platform, species and batch effects to yield a standard measure of functional distance between experiments. We show that pathprints combine mouse and human blood developmental lineage, and can be used to identify new prognostic indicators in acute myeloid leukemia. The code and resources are available at http://compbio.sph.harvard.edu/hidelab/pathprint.
Mathematical Models of Cardiac Pacemaking Function
NASA Astrophysics Data System (ADS)
Li, Pan; Lines, Glenn T.; Maleckar, Mary M.; Tveito, Aslak
2013-10-01
Over the past half century, there has been intense and fruitful interaction between experimental and computational investigations of cardiac function. This interaction has, for example, led to deep understanding of cardiac excitation-contraction coupling; how it works, as well as how it fails. However, many lines of inquiry remain unresolved, among them the initiation of each heartbeat. The sinoatrial node, a cluster of specialized pacemaking cells in the right atrium of the heart, spontaneously generates an electro-chemical wave that spreads through the atria and through the cardiac conduction system to the ventricles, initiating the contraction of cardiac muscle essential for pumping blood to the body. Despite the fundamental importance of this primary pacemaker, this process is still not fully understood, and ionic mechanisms underlying cardiac pacemaking function are currently under heated debate. Several mathematical models of sinoatrial node cell membrane electrophysiology have been constructed as based on different experimental data sets and hypotheses. As could be expected, these differing models offer diverse predictions about cardiac pacemaking activities. This paper aims to present the current state of debate over the origins of the pacemaking function of the sinoatrial node. Here, we will specifically review the state-of-the-art of cardiac pacemaker modeling, with a special emphasis on current discrepancies, limitations, and future challenges.
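As a generic illustration of the kind of spontaneous rhythmic firing that sinoatrial models must reproduce (using the textbook FitzHugh-Nagumo equations with standard parameter values, not one of the sinoatrial cell models under debate in the paper):

```python
import numpy as np

# FitzHugh-Nagumo excitable-cell model with a constant applied current
# chosen so the rest state is unstable and the cell fires periodically.
# Parameters are the classic textbook values, used purely as a sketch.
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, steps = 0.01, 40000
v, w = -1.0, 1.0
vs = np.empty(steps)
for k in range(steps):
    dv = v - v**3 / 3.0 - w + I        # fast membrane-potential variable
    dw = eps * (v + a - b * w)         # slow recovery variable
    v += dt * dv
    w += dt * dw
    vs[k] = v

tail = vs[steps // 2:]                 # discard the initial transient
crossings = np.sum((tail[:-1] < 0) & (tail[1:] >= 0))   # count spikes
print(int(crossings), round(tail.max(), 2), round(tail.min(), 2))
```

Real sinoatrial models replace these two abstract variables with many ionic currents, which is precisely where the competing models disagree.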
A featureless approach to 3D polyhedral building modeling from aerial images.
Hammoudi, Karim; Dornaika, Fadi
2011-01-01
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image rawbrightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
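The optimizer named in the abstract is the standard Differential Evolution algorithm; a textbook DE/rand/1/bin sketch is shown below, with a toy quadratic standing in for the paper's multi-image dissimilarity objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200):
    """Textbook DE/rand/1/bin minimizer (mutation, binomial crossover,
    greedy selection).  Not the paper's implementation."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])       # crossover
            fc = f(trial)
            if fc < cost[i]:                              # greedy selection
                pop[i], cost[i] = trial, fc
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Toy objective standing in for the image-dissimilarity-plus-gradient score
sphere = lambda x: float(np.sum((x - 1.5) ** 2))
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 4)
print(np.round(x_best, 2), round(f_best, 6))
```

DE needs only objective evaluations, no gradients, which is what makes it usable for raw-brightness objectives like the one in the paper.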
A Unified Approach to Model-Based Planning and Execution
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)
2000-01-01
Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information for the independent variables can be included in the inference procedure. However, few studies have discussed how to formulate informative priors for the independent variables or evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches to developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). The deviance information criterion (DIC), R-square values, and coefficients of variation for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracy. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. The effects of different types of informative priors on the model estimations and goodness-of-fit have been compared and summarized. Finally, based on the results, recommendations for future research topics and study applications have been made.
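The role of an informative prior is easiest to see in a stripped-down conjugate version of the Poisson-gamma model (the crash counts and prior values below are invented, and the covariates of a full safety performance function are omitted):

```python
import numpy as np

# Conjugate sketch: with crash counts y_i ~ Poisson(theta) and a
# Gamma(a, b) prior on theta (shape-rate parameterization), the
# posterior is Gamma(a + sum(y), b + n), so the posterior mean sits
# between the data MLE and the informative prior mean.
y = np.array([3, 1, 4, 2, 0, 2, 3, 1])   # hypothetical yearly crash counts
n, s = y.size, int(y.sum())

a0, b0 = 30.0, 10.0                       # informative prior: mean 3 crashes/yr
a_post, b_post = a0 + s, b0 + n

prior_mean = a0 / b0
mle = s / n
post_mean = a_post / b_post               # shrinks the MLE toward the prior
print(round(post_mean, 3), round(mle, 3), round(prior_mean, 3))
```

The hierarchical models in the paper embed this same mechanism inside a regression structure, which is why the choice of prior measurably affects fit and coefficient accuracy.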
Validation of density-functional versus density-functional+U approaches for oxide ultrathin films
NASA Astrophysics Data System (ADS)
Barcaro, Giovanni; Thomas, Iorwerth Owain; Fortunelli, Alessandro
2010-03-01
A comparison between available experimental information and the predictions of density-functional and density-functional+U approaches is presented for oxide ultrathin films grown on single-crystal metal surfaces. Prototypical examples of monolayer phases of an ionic oxide (ZnO), a late transition metal oxide (NiO), and an early transition metal oxide (TiO2) are considered. The aim is to validate the theoretical approaches, focusing on the prediction of structural features and the reproduction of scanning tunneling microscopy images, rationalized in terms of the local density of states of the systems. It is found that it is possible to reasonably estimate the optimal lattice constant of ultrathin supported films and that the inclusion of the Hubbard U term appreciably improves the accuracy of theoretical predictions, especially in the case of nonpolar ultrathin phases of a transition metal oxide. Moreover, the optimal value of U for the oxide layer at the interface with the metal support is found to differ from that appropriate for the bulk oxide, as a consequence of the intermixing of oxide and support electronic states and screening effects.
Longitudinal Functional Magnetic Resonance Imaging in Animal Models
Silva, Afonso C.; Liu, Junjie V.; Hirano, Yoshiyuki; Leoni, Renata F.; Merkle, Hellmut; Mackel, Julie B.; Zhang, Xian Feng; Nascimento, George C.; Stefanovic, Bojana
2016-01-01
Functional magnetic resonance imaging (fMRI) has had an essential role in furthering our understanding of brain physiology and function. fMRI techniques are nowadays widely applied in neuroscience research, as well as in translational and clinical studies. The use of animal models in fMRI studies has been fundamental in helping elucidate the mechanisms of cerebral blood flow regulation, and in the exploration of basic neuroscience questions, such as the mechanisms of perception, behavior, and cognition. Because animals are inherently noncompliant, most fMRI studies performed to date have required the use of anesthesia, which interferes with brain function and compromises interpretability and applicability of results to our understanding of human brain function. An alternative approach that eliminates the need for anesthesia involves training the animal to tolerate physical restraint during the data acquisition. In the present work we review these two different approaches to obtaining fMRI data from animal models, with a specific focus on the acquisition of longitudinal data from the same subjects. PMID:21279608
de Almeida, Patrícia Maria Duarte
2006-02-01
Considering the loss of function of body structures and systems after a spinal cord injury, with its respective activity limitations and restrictions on social participation, the goals of the rehabilitation process are to achieve the maximal functional independence and quality of life allowed by the clinical lesion. This requires a period of work with a rehabilitation team, including the physiotherapist, whose interventions depend on factors such as the degree of completeness or incompleteness of the lesion and the patient's clinical stage. The physiotherapy approach includes several procedures and techniques related either to a traditional model or to the recent perspective of neuronal regeneration. Following the traditional model, intervention in complete (A) and incomplete (B) lesions is based on compensatory methods of functional rehabilitation using the non-affected muscles. In incomplete (C) and (D) lesions, motor re-education below the lesion, using key points to facilitate normal and selective patterns of movement, is preferable. If, on the other hand, neuronal regeneration with corresponding functional improvement is possible, the goals of the physiotherapy approach are to maintain muscular trophism and improve the recruitment of motor units using intensive techniques. In both cases there is no scientific evidence to support the procedures; investigation is lacking and most of the research is methodologically poor.
The Pleiades mass function: Models versus observations
NASA Astrophysics Data System (ADS)
Moraux, E.; Kroupa, P.; Bouvier, J.
2004-10-01
Two stellar-dynamical models of binary-rich embedded proto-Orion-Nebula-type clusters that evolve to Pleiades-like clusters are studied with an emphasis on comparing the stellar mass function with observational constraints. By the age of the Pleiades (about 100 Myr) both models show a similar degree of mass segregation which also agrees with observational constraints. This thus indicates that the Pleiades is well relaxed and that it is suffering from severe amnesia. It is found that the initial mass function (IMF) must have been indistinguishable from the standard or Galactic-field IMF for stars with mass m ≲ 2 M⊙, provided the Pleiades precursor had a central density of about 10^4.8 stars/pc^3. A denser model with 10^5.8 stars/pc^3 also leads to reasonable agreement with observational constraints, but owing to the shorter relaxation time of the embedded cluster it evolves through energy equipartition to a mass-segregated condition just prior to residual-gas expulsion. This model consequently preferentially loses low-mass stars and brown dwarfs (BDs), but the effect is not very pronounced. The empirical data indicate that the Pleiades IMF may have been steeper than the Salpeter IMF for stars with m ⪆ 2 M⊙.
Predicting transfer performance: a comparison of competing function learning models.
McDaniel, Mark A; Dimperio, Eric; Griego, Jacqueline A; Busemeyer, Jerome R
2009-01-01
The population of linear experts (POLE) model suggests that function learning and transfer are mediated by activation of a set of prestored linear functions that together approximate the given function (Kalish, Lewandowsky, & Kruschke, 2004). In the extrapolation-association (EXAM) model, an exemplar-based architecture associates trained input values with their paired output values. Transfer incorporates a linear rule-based response mechanism (McDaniel & Busemeyer, 2005). Learners were trained on a functional relationship defined by 2 linear-function segments with mirror slopes. In Experiment 1, 1 segment was densely trained and 1 was sparsely trained; in Experiment 2, both segments were trained equally, but the 2 segments were widely separated. Transfer to new input values was tested. For each model, training performance for each individual participant was fit, and transfer predictions were generated. POLE generally better fit the training data than did EXAM, but EXAM was more accurate at predicting (and fitting) transfer behaviors. It was especially telling that in Experiment 2 the transfer pattern was more consistent with EXAM's but not POLE's predictions, even though the presentation of salient linear segments during training dovetailed with POLE's approach.
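EXAM's linear rule-based response mechanism can be caricatured in a few lines (a deliberate simplification of the fitted model: responses are extrapolated linearly through the two nearest trained exemplars, contrasted with a pure exemplar lookup that cannot extrapolate; the training function below is invented):

```python
# Trained exemplars from a single linear segment y = 2x (illustrative,
# not the mirror-slope stimuli used in the experiments).
train_x = [10, 20, 30, 40, 50]
train_y = [2 * x for x in train_x]

def exam_response(x):
    """EXAM-style transfer: linear rule through the two nearest exemplars."""
    x1, x2 = sorted(train_x, key=lambda t: abs(t - x))[:2]
    y1, y2 = train_y[train_x.index(x1)], train_y[train_x.index(x2)]
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (x - x1)

def exemplar_response(x):
    """Pure exemplar lookup: repeats the nearest trained output."""
    nearest = min(train_x, key=lambda t: abs(t - x))
    return train_y[train_x.index(nearest)]

# Beyond the trained range (x = 70) the linear rule extrapolates the
# trend, while the exemplar lookup just returns the boundary output.
print(exam_response(70), exemplar_response(70))
```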
Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.
Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward
2015-04-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
A new approach in regression analysis for modeling adsorption isotherms.
Marković, Dana D; Lekić, Branislava M; Rajaković-Ognjanović, Vladana N; Onjia, Antonije E; Rajaković, Ljubinka V
2014-01-01
Numerous regression approaches to isotherm parameter estimation appear in the literature. Real insight into the proper modeling pattern can be achieved only by testing methods on a very large number of cases. Experimentally, this cannot be done in a reasonable time, so the Monte Carlo simulation method was applied. The objective of this paper is to introduce and compare numerical approaches that involve different levels of knowledge about the noise structure of the analytical method used for initial and equilibrium concentration determination. Six levels of homoscedastic noise and five types of heteroscedastic noise precision models were considered. Performance of the methods was statistically evaluated based on median percentage error and mean absolute relative error in parameter estimates. The present study showed a clear distinction between two cases. When equilibrium experiments are performed only once, for the homoscedastic case the winning error function is ordinary least squares, while for the case of heteroscedastic noise the use of orthogonal distance regression or Marquardt's percent standard deviation is suggested. It was found that when experiments are repeated three times, the simple method of weighted least squares performed as well as the more complicated orthogonal distance regression method. PMID:24672394
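The ordinary-versus-weighted least squares distinction can be sketched on a linearized Freundlich isotherm, log q = log K + n log C (all numbers below are invented, and this is far narrower than the paper's Monte Carlo comparison of error functions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Freundlich data q = K * C^n with constant absolute noise on
# q, which becomes heteroscedastic after the log transform; weighted
# least squares then uses weights inversely proportional to the
# variance of log q.
K_true, n_true = 2.0, 0.6
C = np.linspace(1.0, 50.0, 25)
q = K_true * C ** n_true
q_obs = q + rng.normal(0.0, 0.3, C.size)

x, y = np.log(C), np.log(q_obs)
X = np.column_stack([np.ones_like(x), x])

def least_squares(X, y, w=None):
    """Solve the (weighted) normal equations X'WX b = X'Wy."""
    w = np.ones_like(y) if w is None else w
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

b_ols = least_squares(X, y)
b_wls = least_squares(X, y, w=q**2)   # Var(log q) ~ (0.3/q)^2, so weight ~ q^2
print(round(float(np.exp(b_ols[0])), 3), round(float(b_ols[1]), 3))
print(round(float(np.exp(b_wls[0])), 3), round(float(b_wls[1]), 3))
```

With homoscedastic noise in the fitted space the two estimators coincide, which is the paper's first case.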
An algebraic approach to the Hubbard model
NASA Astrophysics Data System (ADS)
de Leeuw, Marius; Regelskis, Vidas
2016-02-01
We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.
The TETRAD Approach to Model Respecification.
Ting, K F
1998-01-01
The TETRAD project revives the tetrad analysis developed almost a century ago. Vanishing tetrads are overidentifying restrictions implied by the structure of a model. As such, it is possible to examine a model empirically through these constraints. Scheines, Spirtes, Glymour, Meek, & Richardson (1998) advocate using vanishing tetrads as a tool for automatic model searches. Despite the search algorithm proving superior to those from LISREL and EQS in an earlier report, it is argued that TETRAD II, the search program, is still a data-mining procedure. It is important that substantive justifications be given before, not after, a model is selected. This is impossible with any type of automatic procedure for specification search. Researchers should take an active role in formulating alternative models rather than looking for a quick fix. Finally, the tetrad test developed by Bollen and Ting (1993) is discussed with its application for testing competing models or their components formulated in advance.
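A vanishing tetrad is easy to verify numerically for a single-factor model, where the implied covariances factor as sigma_ij = lambda_i * lambda_j for i != j (the loadings below are arbitrary illustrative values; this is an illustration of the constraint, not the TETRAD II program):

```python
import numpy as np

# Single-factor model x_i = lambda_i * f + e_i: off-diagonal implied
# covariances are products of loadings, so every tetrad difference
# sigma_ij*sigma_kl - sigma_ik*sigma_jl over distinct indices vanishes.
lam = np.array([0.9, 0.7, 0.8, 0.6])      # factor loadings (illustrative)
var_e = np.array([0.3, 0.5, 0.4, 0.6])    # unique error variances
Sigma = np.outer(lam, lam) + np.diag(var_e)   # model-implied covariance

def tetrad(S, i, j, k, l):
    return S[i, j] * S[k, l] - S[i, k] * S[j, l]

t1 = tetrad(Sigma, 0, 1, 2, 3)   # s12*s34 - s13*s24
t2 = tetrad(Sigma, 0, 3, 1, 2)   # s14*s23 - s12*s43
print(t1, t2)
```

A sample covariance matrix from data generated by a different structure would generally violate these constraints, which is what the tetrad test exploits.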
Serpentinization reaction pathways: implications for modeling approach
Janecky, D.R.
1986-01-01
Experimental seawater-peridotite reaction pathways to form serpentinites at 300°C and 500 bars can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.
Consumer preference models: fuzzy theory approach
NASA Astrophysics Data System (ADS)
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
Stumm, Gabriele; Russ, Andreas; Nehls, Michael
2002-01-01
The sequencing of the human genome has generated a drug discovery process that is based on sequence analysis and hypothesis-driven (inductive) prediction of gene function. This approach, which we term inductive genomics, is currently dominating the efforts of the pharmaceutical industry to identify new drug targets. According to recent studies, this sequence-driven discovery process is paradoxically increasing the average cost of drug development, thus falling short of the promise of the Human Genome Project to simplify the creation of much needed novel therapeutics. In the early stages of discovery, the flurry of new gene sequences makes it difficult to pick and prioritize the most promising product candidates for product development, as with existing technologies important decisions have to be based on circumstantial evidence that does not strongly predict therapeutic potential. This is because the physiological function of a potential target cannot be predicted by gene sequence analysis and in vitro technologies alone. In contrast, deductive genomics, or large-scale forward genetics, bridges the gap between sequence and function by providing a function-driven in vivo screen of a highly orthologous mammalian model genome for medically relevant physiological functions and drug targets. This approach allows drug discovery to move beyond the focus on sequence-driven identification of new members of classical drug-able protein families towards the biology-driven identification of innovative targets and biological pathways.
An inverse problem approach to modelling coastal effluent plumes
NASA Astrophysics Data System (ADS)
Lam, D. C. L.; Murthy, C. R.; Miners, K. C.
Formulated as an inverse problem, the diffusion parameters associated with length-scale-dependent eddy diffusivities can be viewed as the unknowns in the mass conservation equation for coastal zone transport problems. The values of the diffusion parameters can be optimized according to an error function incorporating observed concentration data. Examples are given for the Fickian, shear diffusion and inertial subrange diffusion models. Based on a new set of dye-plume data collected in the coastal zone off Bronte, Lake Ontario, it is shown that the predictions of turbulence closure models can be evaluated for different flow conditions. The choice of computational schemes for this diagnostic approach is based on tests with analytic solutions and observed data. It is found that the optimized shear diffusion model produced better agreement with observations for both high and low advective flows than, e.g., the unoptimized semi-empirical model, K_y = 0.075 σ_y^1.2, described by Murthy and Kenney.
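As an illustration of the inverse-problem step (a toy grid search, not the optimization scheme used in the paper), the parameters of a power-law diffusivity K = a·σ^b can be recovered from observed values:

```python
def fit_power_law(sigmas, k_obs, a_grid, b_grid):
    """Grid-search the (a, b) that minimize the squared error of K = a * sigma**b
    against observed diffusivities -- a toy stand-in for the optimization step."""
    best_err, best_ab = float("inf"), None
    for a in a_grid:
        for b in b_grid:
            err = sum((a * s ** b - k) ** 2 for s, k in zip(sigmas, k_obs))
            if err < best_err:
                best_err, best_ab = err, (a, b)
    return best_ab
```

In practice a gradient-based or nonlinear least-squares optimizer would replace the grid search, but the structure — a forward model evaluated inside an error function over observations — is the same.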
Using the Cultural Models Approach in Studying the Multicultural Experience.
ERIC Educational Resources Information Center
Koiva, Enn O., Ed.
The paper describes how social studies classroom teachers can use the cultural models approach to help high school students understand multicultural societies. The multicultural models approach is based on identification of values which are common to all cultures and on the interchangeability and transferability of these values (universal…
Students' Approaches to Learning a New Mathematical Model
ERIC Educational Resources Information Center
Flegg, Jennifer A.; Mallet, Daniel G.; Lupton, Mandy
2013-01-01
In this article, we report on the findings of an exploratory study into the experience of undergraduate students as they learn new mathematical models. Qualitative and quantitative data based around the students' approaches to learning new mathematical models were collected. The data revealed that students actively adopt three approaches to…
The Hourglass Approach: A Conceptual Model for Group Facilitators.
ERIC Educational Resources Information Center
Kriner, Lon S.; Goulet, Everett F.
1983-01-01
Presents a model to clarify the facilitator's role in working with groups. The Hourglass Approach model incorporates Carkhuff's empathetic levels of communication and Schultz's theory of personality. It is designed to be a systematic and comprehensive method usable with a variety of counseling approaches in all types of groups. (JAC)
Functionalized Anatomical Models for EM-Neuron Interaction Modeling
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2017-01-01
The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in-vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understand a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions. PMID:27224508
Functionalized anatomical models for EM-neuron Interaction modeling
NASA Astrophysics Data System (ADS)
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2016-06-01
The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understand a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.
A Mixed Approach for Modeling Blood Flow in Brain Microcirculation
NASA Astrophysics Data System (ADS)
Peyrounette, M.; Sylvie, L.; Davit, Y.; Quintard, M.
2014-12-01
We have previously demonstrated [1] that the vascular system of the healthy human brain cortex is a superposition of two structural components, each corresponding to a different spatial scale. At small scale, the vascular network has a capillary structure, which is homogeneous and space-filling over a cut-off length. At larger scale, veins and arteries conform to a quasi-fractal branched structure. This structural duality is consistent with the functional duality of the vasculature, i.e., distribution and exchange. From a modeling perspective, this can be viewed as the superposition of: (a) a continuum model describing slow transport in the small-scale capillary network, characterized by a representative elementary volume and effective properties; and (b) a discrete network approach [2] describing fast transport in the arterial and venous network, which cannot be homogenized because of its fractal nature. This problem is analogous to modeling problems encountered in geological media, e.g., in petroleum engineering, where fast conducting channels (wells or fractures) are embedded in a porous medium (reservoir rock). An efficient method to reduce the computational cost of fractures/continuum simulations is to use relatively large grid blocks for the continuum model. However, this also makes it difficult to accurately couple both structural components. In this work, we solve this issue by adapting the "well model" concept used in petroleum engineering [3] to brain-specific 3-D situations. We obtain a unique linear system of equations describing the discrete network, the continuum and the well model coupling. Results are presented for realistic geometries and compared with a non-homogenized small-scale network model of an idealized periodic capillary network of known permeability. [1] Lorthois & Cassot, J. Theor. Biol. 262, 614-633, 2010. [2] Lorthois et al., Neuroimage 54: 1031-1042, 2011. [3] Peaceman, SPE J. 18, 183-194, 1978.
Piecewise Linear Membership Function Generator-Divider Approach
NASA Technical Reports Server (NTRS)
Hart, Ron; Martinez, Gene; Yuan, Bo; Zrilic, Djuro; Ramirez, Jaime
1997-01-01
In this paper a simple, inexpensive, membership function circuit for fuzzy controllers is presented. The proposed circuit may be used to generate a general trapezoidal membership function. The slope and horizontal shift are fully programmable parameters.
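For readers who want to prototype the behavior in software before building hardware, a general trapezoidal membership function can be sketched as follows (the breakpoint names a, b, c, d are illustrative, and the sketch assumes a < b ≤ c < d):

```python
def trapezoid_mu(x, a, b, c, d):
    """Degree of membership for input x: rises linearly from a to b, is flat
    (1.0) from b to c, and falls linearly from c to d. The four breakpoints
    jointly set the two slopes and the horizontal shift of the trapezoid."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge
```

Setting b = c degenerates the trapezoid into a triangular membership function, another shape commonly used in fuzzy controllers.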
Gyrokinetic modeling: A multi-water-bag approach
Morel, P.; Gravier, E.; Besse, N.; Klein, R.; Ghizzo, A.; Bertrand, P.; Garbet, X.; Ghendrih, P.; Grandgirard, V.; Sarazin, Y.
2007-11-15
Predicting turbulent transport in nearly collisionless fusion plasmas requires one to solve kinetic (or, more precisely, gyrokinetic) equations. In spite of considerable progress, several pending issues remain; although more accurate, the kinetic calculation of turbulent transport is much more demanding in computer resources than fluid simulations. An alternative approach is based on a water-bag representation of the distribution function that is not an approximation but rather a special class of initial conditions, allowing one to reduce the full kinetic Vlasov equation into a set of hydrodynamic equations while keeping its kinetic character. The main result for the water-bag model is a lower cost in the parallel velocity direction since no differential operator associated with some approximate numerical scheme has to be carried out on this variable v_∥. Indeed, a small bag number is sufficient to correctly describe the ion temperature gradient instability.
Adaptive Modeling: An Approach for Incorporating Nonlinearity in Regression Analyses.
Knafl, George J; Barakat, Lamia P; Hanlon, Alexandra L; Hardie, Thomas; Knafl, Kathleen A; Li, Yimei; Deatrick, Janet A
2017-02-01
Although regression relationships commonly are treated as linear, this often is not the case. An adaptive approach is described for identifying nonlinear relationships based on power transforms of predictor (or independent) variables and for assessing whether or not relationships are distinctly nonlinear. It is also possible to model adaptively both means and variances of continuous outcome (or dependent) variables and to adaptively power transform positive-valued continuous outcomes, along with their predictors. Example analyses are provided of data from parents in a nursing study on emotional-health-related quality of life for childhood brain tumor survivors as a function of the effort to manage the survivors' condition. These analyses demonstrate that relationships, including moderation relationships, can be distinctly nonlinear, that conclusions about means can be affected by accounting for non-constant variances, and that outcome transformation along with predictor transformation can provide distinct improvements and can resolve skewness problems.
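A minimal sketch of the power-transform idea (an illustration, not the authors' adaptive algorithm): try a small grid of exponents for a predictor and keep the one that minimizes the residual sum of squares of a simple linear fit.

```python
def best_power(xs, ys, powers):
    """Return the predictor power transform p (from `powers`) that gives the
    smallest residual sum of squares when y is regressed linearly on x**p."""
    def sse(ts):
        n = len(ts)
        mx, my = sum(ts) / n, sum(ys) / n
        sxx = sum((t - mx) ** 2 for t in ts)
        sxy = sum((t - mx) * (y - my) for t, y in zip(ts, ys))
        b = sxy / sxx                 # slope of the least-squares line
        a = my - b * mx               # intercept
        return sum((y - (a + b * t)) ** 2 for t, y in zip(ts, ys))
    return min(powers, key=lambda p: sse([x ** p for x in xs]))
```

The adaptive method in the paper goes further — transforming outcomes as well as predictors and modeling variances — but the selection principle is the same: compare fits across candidate transforms.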
COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES
River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...
Quantum Supersymmetric Models in the Causal Approach
NASA Astrophysics Data System (ADS)
Grigore, Dan-Radu
2007-04-01
We consider the massless supersymmetric vector multiplet in a purely quantum framework. First order gauge invariance determines uniquely the interaction Lagrangian as in the case of Yang-Mills models. Going to the second order of perturbation theory produces an anomaly which cannot be eliminated. We make the analysis of the model working only with the component fields.
Monte Carlo path sampling approach to modeling aeolian sediment transport
NASA Astrophysics Data System (ADS)
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains modulates its own growth and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power whereas cellular automata-based approaches are computationally efficient
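The threshold nonlinearity described above can be illustrated with a Lettau-type flux law; the coefficient and threshold values below are placeholders for illustration, not results from this study:

```python
def saltation_flux(u_star, u_star_t=0.3, c=4.2, rho=1.225, g=9.81):
    """Illustrative saltation mass flux: zero while the shear velocity u_star
    stays below the threshold u_star_t, then growing roughly as u_star**3
    beyond it -- the nonlinear dependence on wind forcing described above."""
    if u_star <= u_star_t:
        return 0.0
    return c * (rho / g) * u_star ** 2 * (u_star - u_star_t)
```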
The GPRIME approach to finite element modeling
NASA Technical Reports Server (NTRS)
Wallace, D. R.; Mckee, J. H.; Hurwitz, M. M.
1983-01-01
GPRIME, an interactive modeling system, runs on the CDC 6000 computers and the DEC VAX 11/780 minicomputer. This system includes three components: (1) GPRIME, a user friendly geometric language and a processor to translate that language into geometric entities; (2) GGEN, an interactive data generator for 2-D models; and (3) SOLIDGEN, a 3-D solid modeling program. Each component has a user interface built on an extensive command set. All of these programs make use of a comprehensive B-spline mathematics subroutine library, which can be used for a wide variety of interpolation problems and other geometric calculations. Many other user aids, such as automatic saving of the geometric and finite element databases and hidden line removal, are available. This interactive finite element modeling capability can produce a complete finite element model and write an output file of grid and element data.
Experimental Approaches for Defining Functional Roles of Microbes in the Human Gut
Dantas, Gautam; Sommer, Morten O.A.; Degnan, Patrick H.; Goodman, Andrew L.
2016-01-01
The complex and intimate relationship between humans and their gut microbial communities is becoming less obscure, due in part to large-scale gut microbial genome-sequencing projects and culture-independent surveys of the composition and gene content of these communities. These studies build upon, and are complemented by, experimental efforts to define underlying mechanisms of host-microbe interactions in simplified model systems. This review highlights the intersection of these approaches. Experimental studies now leverage the advances in high-throughput DNA sequencing that have driven the explosion of microbial genome and community profiling projects, and the loss-of-function and gain-of-function strategies long employed in model organisms are now being extended to microbial genes, species, and communities from the human gut. These developments promise to deepen our understanding of human gut host–microbiota relationships and are readily applicable to other host-associated and free-living microbial communities. PMID:24024637
Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches
Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward
2015-01-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log K_M values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
Yu, Jue; Zhuang, Jian; Yu, Dehong
2015-01-01
This paper concerns a state-feedback integral control scheme based on a Lyapunov function approach for a rotary direct drive servo valve (RDDV) while considering parameter uncertainties. Modeling of the RDDV reveals that its mechanical performance is strongly influenced by friction torques and flow torques; however, these torques are uncertain and mutable due to the nature of fluid flow. To eliminate load resistance and to achieve satisfactory position responses, this paper develops a state feedback control that integrates an integral action and a Lyapunov function. The integral action is introduced to address the nonzero steady-state error; in particular, the Lyapunov function is employed to improve control robustness by adjusting the varying parameters within their value ranges. This new controller also has the advantages of simple structure and ease of implementation. Simulation and experimental results demonstrate that the proposed controller can achieve higher control accuracy and stronger robustness.
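The value of the integral action can be seen on a toy first-order plant (the plant constants, gains, and load below are arbitrary illustrations, not the RDDV model):

```python
def settle(setpoint, a=1.0, b=1.0, kp=5.0, ki=10.0, load=-0.5,
           dt=0.001, steps=20000):
    """Euler-simulate x' = -a*x + b*u + load under u = kp*e + ki*integral(e).
    The integral state absorbs the constant load disturbance, so the
    steady-state error goes to zero -- proportional feedback alone cannot
    reject a constant load exactly."""
    x, z = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        z += e * dt              # integral of the tracking error
        u = kp * e + ki * z      # state feedback + integral action
        x += (-a * x + b * u + load) * dt
    return x
```

The paper's contribution is the Lyapunov-based adjustment of the uncertain parameters on top of this basic structure; the sketch only shows why the integral term removes the steady-state offset.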
A simple approach to modeling ductile failure.
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
NASA Astrophysics Data System (ADS)
Katanin, A. A.
2015-06-01
We consider formulations of the functional renormalization-group (fRG) flow for correlated electronic systems with the dynamical mean-field theory as a starting point. We classify the corresponding renormalization-group schemes into those neglecting one-particle irreducible six-point vertices (with respect to the local Green's functions) and neglecting one-particle reducible six-point vertices. The former class is represented by the recently introduced DMF2RG approach [31], but also by the scale-dependent generalization of the one-particle irreducible representation (with respect to local Green's functions, 1PI-LGF) of the generating functional [20]. The second class is represented by the fRG flow within the dual fermion approach [16, 32]. We compare formulations of the fRG approach in each of these cases and suggest their further application to study 2D systems within the Hubbard model.
Functional GI disorders: from animal models to drug development
Mayer, E A; Bradesi, S; Chang, L; Spiegel, B M R; Bueller, J A; Naliboff, B D
2014-01-01
Despite considerable efforts by academic researchers and by the pharmaceutical industry, the development of novel pharmacological treatments for irritable bowel syndrome (IBS) and other functional gastrointestinal (GI) disorders has been slow and disappointing. The traditional approach to identifying and evaluating novel drugs for these symptom-based syndromes has relied on a fairly standard algorithm using animal models, experimental medicine models and clinical trials. In the current article, the empirical basis for this process is reviewed, focusing on the utility of the assessment of visceral hypersensitivity and GI transit, in both animals and humans, as well as the predictive validity of preclinical and clinical models of IBS for identifying successful treatments for IBS symptoms and IBS-related quality of life impairment. A review of published evidence suggests that abdominal pain, defecation-related symptoms (urgency, straining) and psychological factors all contribute to overall symptom severity and to health-related quality of life. Correlations between readouts obtained in preclinical and clinical models and respective symptoms are small, and the ability to predict drug effectiveness for specific as well as for global IBS symptoms is limited. One possible drug development algorithm is proposed which focuses on pharmacological imaging approaches in both preclinical and clinical models, with decreased emphasis on evaluating compounds in symptom-related animal models, and more rapid screening of promising candidate compounds in man. PMID:17965064
Hubbard Model Approach to X-ray Spectroscopy
NASA Astrophysics Data System (ADS)
Ahmed, Towfiq
We have implemented a Hubbard model based first-principles approach for real-space calculations of x-ray spectroscopy, which allows one to study excited state electronic structure of correlated systems. Theoretical understanding of many electronic features in d and f electron systems remains beyond the scope of conventional density functional theory (DFT). In this work our main effort is to go beyond the local density approximation (LDA) by incorporating the Hubbard model within the real-space multiple-scattering Green's function (RSGF) formalism. Historically, the first theoretical description of correlated systems was published by Sir Nevill Mott and others in 1937. They realized that the insulating gap and antiferromagnetism in the transition metal oxides are mainly caused by the strong on-site Coulomb interaction of the localized unfilled 3d orbitals. Even with the recent progress of first principles methods (e.g. DFT) and model Hamiltonian approaches (e.g., Hubbard-Anderson model), the electronic description of many of these systems remains a non-trivial combination of both. X-ray absorption near edge spectra (XANES) and x-ray emission spectra (XES) are very powerful spectroscopic probes for many electronic features near Fermi energy (EF), which are caused by the on-site Coulomb interaction of localized electrons. In this work we focus on three different cases of many-body effects due to the interaction of localized d electrons. Here, for the first time, we have applied the Hubbard model in the real-space multiple scattering (RSGF) formalism for the calculation of x-ray spectra of Mott insulators (e.g., NiO and MnO). Secondly, we have implemented in our RSGF approach a doping dependent self-energy that was constructed from a single-band Hubbard model for the overdoped high-Tc cuprate La2-xSrxCuO4. Finally our RSGF calculation of XANES is calculated with the spectral function from Lee and Hedin's charge transfer satellite model. For all these cases our
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach
NASA Astrophysics Data System (ADS)
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-08-01
This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-01-01
This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges. PMID:27534708
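A toy version of the spanning-forest idea from the two records above (a greedy maximum-correlation spanning tree followed by pruning of weak edges; an illustration, not the authors' exact implementation):

```python
def spanning_forest(corr, names, threshold):
    """Build a Prim-style maximum-correlation spanning tree over a correlation
    matrix, then drop edges weaker than `threshold`. What remains is a forest
    whose trees are modules held together by strong internal correlations."""
    n = len(names)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # strongest edge connecting the current tree to a new node
        c, i, j = max((corr[i][j], i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((c, i, j))
        in_tree.add(j)
    return [(names[i], names[j], c) for c, i, j in edges if c >= threshold]
```

Because only n-1 edges are ever considered in the tree, this is far cheaper than full community detection, which is the speed advantage the abstract points to.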
Eldawlatly, Seif; Jin, Rong; Oweiss, Karim G
2009-02-01
Identifying functional connectivity between neuronal elements is an essential first step toward understanding how the brain orchestrates information processing at the single-cell and population levels to carry out biological computations. This letter suggests a new approach to identify functional connectivity between neuronal elements from their simultaneously recorded spike trains. In particular, we identify clusters of neurons that exhibit functional interdependency over variable spatial and temporal patterns of interaction. We represent neurons as objects in a graph and connect them using arbitrarily defined similarity measures calculated across multiple timescales. We then use a probabilistic spectral clustering algorithm to cluster the neurons in the graph by solving a minimum graph cut optimization problem. Using point process theory to model population activity, we demonstrate the robustness of the approach in tracking a broad spectrum of neuronal interaction, from synchrony to rate co-modulation, by systematically varying the length of the firing history interval and the strength of the connecting synapses that govern the discharge pattern of each neuron. We also demonstrate how activity-dependent plasticity can be tracked and quantified in multiple network topologies built to mimic distinct behavioral contexts. We compare the performance to classical approaches to illustrate the substantial gain in performance.
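The multiple-timescale similarity idea can be sketched by averaging binned spike-count correlations over several bin widths (an illustration; the paper's similarity measures and spectral clustering step are more general):

```python
def multiscale_similarity(spikes_a, spikes_b, t_max, bin_widths):
    """Average spike-count correlation across several bin widths, so that both
    fast coupling (synchrony, small bins) and slow coupling (rate
    co-modulation, large bins) contribute to the similarity score."""
    def corr(bw):
        n = int(t_max / bw)
        ca, cb = [0] * n, [0] * n
        for t in spikes_a:
            if t < t_max:
                ca[int(t / bw)] += 1
        for t in spikes_b:
            if t < t_max:
                cb[int(t / bw)] += 1
        ma, mb = sum(ca) / n, sum(cb) / n
        cov = sum((a - ma) * (b - mb) for a, b in zip(ca, cb))
        va = sum((a - ma) ** 2 for a in ca)
        vb = sum((b - mb) ** 2 for b in cb)
        return cov / (va * vb) ** 0.5 if va > 0 and vb > 0 else 0.0
    return sum(corr(bw) for bw in bin_widths) / len(bin_widths)
```

Pairwise scores like this would populate the weighted graph on which the clustering is performed.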
A novel bone scaffold design approach based on shape function and all-hexahedral mesh refinement.
Cai, Shengyong; Xi, Juntong; Chua, Chee Kai
2012-01-01
Tissue engineering is the application of interdisciplinary knowledge in the building and repairing of tissues. Generally, an engineered tissue is a combination of living cells and a support structure called a scaffold. The scaffold provides support for bone-producing cells and can be used to heal or replace a defective bone. In this chapter, a novel bone scaffold design approach based on shape function and an all-hexahedral mesh refinement method is presented. Based on the shape function in the finite element method, an all-hexahedral mesh is used to design a porous bone scaffold. First, the individual pore based on the subdivided individual element is modeled; then, the Boolean union among the pores is used to generate the whole pore model of the tissue-engineered bone scaffold; finally, the bone scaffold, which contains various irregular pores, can be modeled by the Boolean difference between the solid model and the whole pore model. SEM images show that pore sizes in native bone are not randomly distributed; rather, there are gradients in pore size. Therefore, a control approach for pore size distribution in the bone scaffold based on hexahedral mesh refinement is also proposed in this chapter. A well-defined pore size distribution can be achieved based on the fact that a hexahedral element size distribution can be obtained through all-hexahedral mesh refinement, and the pore morphology and size are under the control of the hexahedral element. The designed bone scaffold can be converted to a universal 3D file format (such as STL or STEP) which could be used for rapid prototyping (RP). Finally, 3D printing (Spectrum Z510), a type of RP system, is adopted to fabricate these bone scaffolds. The successfully fabricated scaffolds validate the novel computer-aided design approach in this research.
An improved approach for tank purge modeling
NASA Astrophysics Data System (ADS)
Roth, Jacob R.; Chintalapati, Sunil; Gutierrez, Hector M.; Kirk, Daniel R.
2013-05-01
Many launch support processes use helium gas to purge rocket propellant tanks and fill lines to rid them of hazardous contaminants. As an example, the purge of the Space Shuttle's External Tank used approximately 1,100 kg of helium. With the rising cost of helium, initiatives are underway to examine methods to reduce helium consumption. Current helium purge processes have not been optimized using physics-based models, but rather use historical 'rules of thumb'. To develop a more accurate and useful model of the tank purge process, computational fluid dynamics simulations of several tank configurations were completed and used as the basis for the development of an algebraic model of the purge process. The computationally efficient algebraic model of the purge process compares well with a detailed transient, three-dimensional computational fluid dynamics (CFD) simulation as well as with experimental data from two external tank purges.
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
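The Chapman-Richards formulation can be illustrated with a fit to synthetic stem-analysis data. The parameter values and noise level are invented, and a crude grid search stands in for a proper optimizer:

```python
import numpy as np

def chapman_richards(t, a, b, c):
    """Chapman-Richards growth curve: H(t) = a * (1 - exp(-b*t))**c."""
    return a * (1.0 - np.exp(-b * t)) ** c

# Synthetic height/age pairs (hypothetical parameters, not the paper's data).
rng = np.random.default_rng(1)
age = np.arange(10, 160, 10, dtype=float)
height = chapman_richards(age, 30.0, 0.02, 1.3) + rng.normal(0, 0.3, age.size)

# Least-squares fit by grid search over (a, b, c).
grid = [(a, b, c)
        for a in np.linspace(25, 35, 41)
        for b in np.linspace(0.01, 0.04, 31)
        for c in np.linspace(1.0, 1.6, 25)]
sse = [np.sum((height - chapman_richards(age, *p)) ** 2) for p in grid]
a_hat, b_hat, c_hat = grid[int(np.argmin(sse))]
```

The indicator-variable, mixed-effects, and GADA parameterizations discussed above differ in how `a`, `b`, `c` are allowed to vary between trees, not in the base curve itself.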
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. In rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
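The flavor of a probability-based parameter update can be sketched on a hypothetical one-parameter problem: a grid posterior for a cantilever bending stiffness inferred from static deflections. All numbers are invented and this is far simpler than the five-parameter wing update described above:

```python
import numpy as np

# Hypothetical example: update bending stiffness EI of a cantilever from
# noisy tip-deflection measurements, d = F * L**3 / (3 * EI).
F, L = 500.0, 2.0                     # load [N] and span [m] (invented)
true_EI = 1.2e4                       # "unknown" truth [N m^2]
rng = np.random.default_rng(0)
meas = F * L**3 / (3 * true_EI) + rng.normal(0, 0.005, size=6)  # 6 sensors

EI_grid = np.linspace(0.5e4, 2.0e4, 2001)      # uniform prior support
pred = F * L**3 / (3 * EI_grid)                # model prediction per EI value
loglik = -0.5 * ((meas[:, None] - pred[None, :]) / 0.005) ** 2
logpost = loglik.sum(axis=0)
post = np.exp(logpost - logpost.max())
post /= post.sum()                             # normalized grid posterior
EI_map = EI_grid[np.argmax(post)]              # maximum a posteriori update
```

The posterior width, not just the point update, is what such tools add over deterministic heuristic tuning.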
"Dispersion modeling approaches for near road | Science ...
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion. This paper presents comparisons of the barrier algorithms implemented in two different dispersion models, US EPA's R-LINE (a research dispersion modelling tool under development by the US EPA's Office of Research and Development) and CERC's ADMS model (ADMS-Urban), against the two datasets. In R-LINE the physical features reveal
The Notional-Functional Approach: Teaching the Real Language in Its Natural Context.
ERIC Educational Resources Information Center
Laine, Elaine
This study of the notional-functional approach to second language teaching reviews the history and theoretical background of the method, current issues, and implementation of a notional-functional syllabus. Chapter 1 discusses the history and theory of the approach and the organization and advantages of the notional-functional syllabus. Chapter 2…
An elementary and real approach to values of the Riemann zeta function
NASA Astrophysics Data System (ADS)
Bagdasaryan, A. G.
2010-02-01
An elementary approach for computing the values at negative integers of the Riemann zeta function is presented. The approach is based on a new method for ordering the integers. We show that the values of the Riemann zeta function can be computed, without using the theory of analytic continuation and any knowledge of functions of complex variable.
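For context, the values the paper's elementary method recovers agree with the classical Bernoulli-number identity zeta(-n) = (-1)^n * B_{n+1} / (n+1). The sketch below computes these via the standard recurrence; it is not the authors' integer-ordering construction:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n (convention B_1 = -1/2), from the
    recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / Fraction(m + 1))
    return B

def zeta_neg(n):
    """zeta(-n) for integer n >= 0 via zeta(-n) = (-1)**n * B_{n+1} / (n+1)."""
    B = bernoulli(n + 1)
    return Fraction((-1) ** n) * B[n + 1] / (n + 1)
```

For example, `zeta_neg(1)` gives -1/12 and `zeta_neg(2k)` vanishes for all positive integers k (the trivial zeros), exactly as analytic continuation predicts.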
An Odds Ratio Approach for Detecting DDF under the Nested Logit Modeling Framework
ERIC Educational Resources Information Center
Terzi, Ragip; Suh, Youngsuk
2015-01-01
An odds ratio approach (ORA) under the framework of a nested logit model was proposed for evaluating differential distractor functioning (DDF) in multiple-choice items and was compared with an existing ORA developed under the nominal response model. The performances of the two ORAs for detecting DDF were investigated through an extensive…
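The 2x2 odds-ratio contrast that an ORA builds on can be sketched as follows. The continuity correction and the example table are illustrative assumptions, not the article's DDF statistic:

```python
import math

def log_odds_ratio_test(table):
    """(log OR, SE, z) for a 2x2 table [[a, b], [c, d]] comparing distractor
    selection between reference and focal groups, with a 0.5 continuity
    correction to guard against empty cells."""
    (a, b), (c, d) = table
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    log_or = math.log(a * d / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se, log_or / se

# Hypothetical counts: (chose distractor, chose other) per group.
log_or, se, z = log_odds_ratio_test([[40, 60], [20, 80]])
```

A |z| well above 2 would flag the distractor for differential functioning under this simple contrast.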
Runoff-rainfall (sic!) modelling: Comparing two different approaches
NASA Astrophysics Data System (ADS)
Herrnegger, Mathew; Schulz, Karsten
2015-04-01
rainfall estimates from the two models. Here, time series from a station observation in the proximity of the catchment and the independent INCA rainfall analysis of Austrian Central Institute for Meteorology and Geodynamics (ZAMG, Haiden et al., 2011) are used. References: Adamovic, M., Braud, I., Branger, F., and Kirchner, J. W. (2014). Does the simple dynamical systems approach provide useful information about catchment hydrological functioning in a Mediterranean context? Application to the Ardèche catchment (France), Hydrol. Earth Syst. Sci. Discuss., 11, 10725-10786. Haiden, T., Kann, A., Wittman, C., Pistotnik, G., Bica, B., and Gruber, C. (2011). The Integrated Nowcasting through Comprehensive Analysis (INCA) system and its validation over the Eastern Alpine region. Wea. Forecasting 26, 166-183, doi: 10.1175/2010WAF2222451.1. Herrnegger, M., Nachtnebel, H.P., and Schulz, K. (2014). From runoff to rainfall: inverse rainfall-runoff modelling in a high temporal resolution, Hydrol. Earth Syst. Sci. Discuss., 11, 13259-13309. Kirchner, J. W. (2009). Catchments as simple dynamical systems: catchment characterization, rainfall-runoff modeling, and doing hydrology backward. Water Resour .Res., 45, W02429. Krier, R., Matgen, P., Goergen, K., Pfister, L., Hoffmann, L., Kirchner, J. W., Uhlenbrook, S., and Savenije, H.H.G. (2012). Inferring catchment precipitation by doing hydrology backward: A test in 24 small and mesoscale catchments in Luxembourg, Water Resour. Res., 48, W10525.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
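A 1-D toy version of the TV-regularized MAP estimator conveys the mechanism. The step size, regularization weight, and smoothing parameter are invented, and plain gradient descent on a smoothed TV penalty stands in for the paper's minimization scheme:

```python
import numpy as np

def tv_map_denoise(y, lam=0.5, eps=1e-3, lr=0.01, iters=5000):
    """MAP estimate under a Gaussian likelihood and a smoothed Total
    Variation prior: minimize 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps)
    by gradient descent. A 1-D stand-in for the distributed-parameter case."""
    x = y.copy()
    for _ in range(iters):
        dx = np.diff(x)
        w = dx / np.sqrt(dx ** 2 + eps)              # d(TV)/d(dx)
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        x -= lr * ((x - y) + lam * grad_tv)
    return x

# Piecewise-constant truth, mimicking an intrusion in a background medium.
rng = np.random.default_rng(2)
truth = np.concatenate([np.full(50, 0.0), np.full(50, 1.0)])
y = truth + rng.normal(0, 0.2, 100)
x_hat = tv_map_denoise(y)
```

The TV prior preserves the sharp jump while suppressing noise, which is exactly why it suits piecewise-continuous conductivity fields.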
A new algebraic transition model based on stress length function
NASA Astrophysics Data System (ADS)
Xiao, Meng-Juan; She, Zhen-Su
2016-11-01
Transition, one of the two greatest challenges in turbulence research, is of critical importance for engineering applications. For decades, fundamental research has been unable to capture the quantitative details of real transition processes, while the numerous empirical parameters in engineering transition models provide no unified description of transition under varying physical conditions. Recently, we proposed a symmetry-based approach to canonical wall turbulence built on a stress length function, which is here extended to describe transition via a new algebraic transition model. With a multi-layer analytic form of the stress length function in both the streamwise and wall-normal directions, the new model yields an accurate description of the mean field and friction coefficient, in agreement with both experimental and DNS results at different inlet conditions. Different types of transition process, such as transition under varying incoming turbulence intensities or transition triggered by blowing and suction disturbances, are described by only two or three model parameters, each with its own specific physical interpretation. The model thus enables one to extract physical information from both experimental and DNS data to reproduce the transition process, and may be the prelude to a new class of generalized transition models for engineering applications.
A chain reaction approach to modelling gene pathways.
Cheng, Gary C; Chen, Dung-Tsa; Chen, James J; Soong, Seng-Jaw; Lamartiniere, Coral; Barnes, Stephen
2012-08-01
nutrient-containing diets regulate gene expression in the estrogen synthesis pathway during puberty; (II) global tests to assess an overall association of this particular pathway with time factor by utilizing generalized linear models to analyze microarray data; and (III) a chain reaction model to simulate the pathway. This is a novel application because we are able to translate the gene pathway into the chemical reactions in which each reaction channel describes gene-gene relationship in the pathway. In the chain reaction model, the implicit scheme is employed to efficiently solve the differential equations. Data analysis results show the proposed model is capable of predicting gene expression changes and demonstrating the effect of nutrient-containing diets on gene expression changes in the pathway. One of the objectives of this study is to explore and develop a numerical approach for simulating the gene expression change so that it can be applied and calibrated when the data of more time slices are available, and thus can be used to interpolate the expression change at a desired time point without conducting expensive experiments for a large amount of time points. Hence, we are not claiming this is either essential or the most efficient way for simulating this problem, rather a mathematical/numerical approach that can model the expression change of a large set of genes of a complex pathway. In addition, we understand the limitation of this experiment and realize that it is still far from being a complete model of predicting nutrient-gene interactions. The reason is that in the present model, the reaction rates were estimated based on available data at two time points; hence, the gene expression change is dependent upon the reaction rates and a linear function of the gene expressions. 
More data sets containing gene expression at various time slices are needed in order to improve the present model so that a non-linear variation of gene expression changes at different time
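The implicit scheme for such a chain-reaction ODE system can be sketched on a hypothetical three-gene linear pathway. The interaction matrix below is invented, not estimated from the article's microarray data:

```python
import numpy as np

# Hypothetical 3-gene linear pathway dx/dt = A x, where each off-diagonal
# entry is one reaction channel (a gene-gene interaction rate).
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.8, -0.5,  0.0],
              [ 0.0,  0.4, -0.2]])
x = np.array([1.0, 0.0, 0.0])        # initial expression levels (invented)

# Implicit (backward Euler) step: (I - dt*A) x_{n+1} = x_n.
dt, steps = 0.1, 50
M = np.eye(3) - dt * A
for _ in range(steps):
    x = np.linalg.solve(M, x)
```

Backward Euler remains stable for stiff rate constants, which is the practical reason an implicit scheme is preferred for reaction networks.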
Frequency response function-based model updating using Kriging model
NASA Astrophysics Data System (ADS)
Wang, J. T.; Wang, C. J.; Zhao, J. P.
2017-03-01
An acceleration frequency response function (FRF) based model updating method is presented in this paper, which introduces a Kriging model as a metamodel into the optimization process instead of iterating the finite element analysis directly. The Kriging model serves as a fast-running surrogate that reduces solving time and facilitates the application of intelligent algorithms in model updating. The training samples for the Kriging model are generated by design of experiment (DOE), whose responses correspond to the differences between the experimental acceleration FRFs and their counterparts from the finite element model (FEM) at selected frequency points. The boundary condition is taken into account, and a two-step DOE method is proposed to reduce the number of training samples: the first step selects the design variables from the boundary condition, and the selected variables are passed to the second step for generating the training samples. The optimized design variables are taken as the updated values used to calibrate the FEM, so that the analytical FRFs tend to coincide with the experimental FRFs. The proposed method is performed successfully on a composite honeycomb sandwich beam; after model updating, the analytical acceleration FRFs match the experimental data significantly better, especially when the damping ratios are adjusted.
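A minimal Kriging surrogate with a Gaussian correlation model conveys the idea. The cheap quadratic objective stands in for the FRF-discrepancy response, and the length-scale and jitter are illustrative assumptions:

```python
import numpy as np

def kriging_fit(X, y, length=0.2, noise=1e-6):
    """Simple-Kriging fit (zero mean, Gaussian correlation, small jitter)."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / length) ** 2)
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return X, alpha, length

def kriging_predict(model, Xnew):
    X, alpha, length = model
    k = np.exp(-0.5 * ((Xnew[:, None] - X[None, :]) / length) ** 2)
    return k @ alpha

# DOE samples of a stand-in "FEM vs. test FRF discrepancy" objective.
X = np.linspace(0.0, 1.0, 11)
y = (X - 0.3) ** 2                       # true optimum at 0.3 (invented)
model = kriging_fit(X, y)

# Optimize on the surrogate instead of re-running the expensive FEM.
x_dense = np.linspace(0.0, 1.0, 201)
x_best = x_dense[np.argmin(kriging_predict(model, x_dense))]
```

Each surrogate evaluation is a small matrix-vector product, so intelligent search algorithms can afford thousands of evaluations that a full FE analysis could not.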
Mixture modeling approach to flow cytometry data.
Boedigheimer, Michael J; Ferbas, John
2008-05-01
Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
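A gate-free mixture analysis can be sketched with 1-D expectation-maximization on a synthetic fluorescence channel. This is a generic Gaussian mixture model, not the authors' exact multivariate procedure:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """EM for a 1-D Gaussian mixture: an operator-free alternative to
    manually gating a fluorescence channel into k populations."""
    mu = np.linspace(x.min(), x.max(), k)    # spread initial means over range
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each cell
        d = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd

# Two synthetic cell populations (e.g. B cells vs. the rest), invented values.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
w, mu, sd = em_gmm_1d(x)
```

The responsibilities `r` replace the hard in/out decision of a gate, removing the analyst-to-analyst variability noted above.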
Modular protein domains: an engineering approach toward functional biomaterials.
Lin, Charng-Yu; Liu, Julie C
2016-08-01
Protein domains and peptide sequences are a powerful tool for conferring specific functions to engineered biomaterials. Protein sequences with a wide variety of functionalities, including structure, bioactivity, protein-protein interactions, and stimuli responsiveness, have been identified, and advances in molecular biology continue to pinpoint new sequences. Protein domains can be combined to make recombinant proteins with multiple functionalities. The high fidelity of the protein translation machinery results in exquisite control over the sequence of recombinant proteins and the resulting properties of protein-based materials. In this review, we discuss protein domains and peptide sequences in the context of functional protein-based materials, composite materials, and their biological applications.
Model predictive control: A new approach
NASA Astrophysics Data System (ADS)
Nagy, Endre
2017-01-01
New methods are proposed in this paper for the solution of the model predictive control problem. Nonlinear state-space design techniques are also treated. For nonlinear state prediction (state evolution computation), a new predictor given by an operator is introduced and tested. The model predictive control problem may be solved by applying the principle of "direct stochastic optimum tracking" with a simple algorithm, which can be derived from a previously developed optimization procedure. The final result is obtained through iterations. Two examples show the applicability and advantages of the method.
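For orientation, the conventional receding-horizon baseline that such new approaches compete with can be sketched with a scalar finite-horizon LQR. The plant numbers are invented, and this is not the paper's "direct stochastic optimum tracking" algorithm:

```python
# Receding-horizon control of a scalar plant x+ = a*x + b*u, minimizing
# sum(q*x^2 + r*u^2) over a short horizon via a backward Riccati recursion.
a, b, q, r = 1.2, 1.0, 1.0, 0.1          # unstable open loop (|a| > 1)

def mpc_gain(horizon=5):
    p, k = q, 0.0
    for _ in range(horizon):             # backward pass over the horizon
        k = a * b * p / (r + b * b * p)  # optimal feedback gain
        p = q + a * a * p - a * b * p * k  # cost-to-go update
    return k

x, k = 5.0, mpc_gain()
for _ in range(30):                      # apply first move, then re-plan
    x = a * x - b * k * x
```

Because the plant is linear and the horizon cost quadratic, re-planning each step reduces to reapplying the same gain; nonlinear prediction, as in the paper, is what makes the general problem hard.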
Aircraft engine mathematical model - linear system approach
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ
2016-06-01
This paper examines a simplified mathematical model of the aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine were represented by a linear, time-variant model, near a nominal operating point within a finite time interval. The linearized equations were expressed in matrix form, suitable for incorporation in the MAPLE solver. The behavior of the engine was described in terms of the variation of the rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitudes and Mach numbers.
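In the simplest scalar case, the linearized speed response to a throttle deflection reduces to first-order spool dynamics. The time constant and gain below are invented illustrative values, not the paper's engine data:

```python
# First-order linearized spool dynamics near an operating point:
# tau * dN/dt = -N + K * u, with N the speed deviation and u the
# throttle deflection (both relative to the nominal point).
tau, K, dt = 0.8, 100.0, 0.01     # time constant [s], gain, Euler step [s]
N, u = 0.0, 1.0                   # step throttle input at t = 0

for _ in range(1000):             # integrate 10 s of response
    N += dt * (-N + K * u) / tau
```

The speed deviation approaches the steady-state value `K * u` exponentially; a full engine model stacks several such coupled first-order states into the matrix form described above.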
Modeling the three-point correlation function
Marin, Felipe; Wechsler, Risa; Frieman, Joshua A.; Nichol, Robert; /Portsmouth U., ICG
2007-04-01
We present new theoretical predictions for the galaxy three-point correlation function (3PCF) using high-resolution dissipationless cosmological simulations of a flat {Lambda}CDM Universe which resolve galaxy-size halos and subhalos. We create realistic mock galaxy catalogs by assigning luminosities and colors to dark matter halos and subhalos, and we measure the reduced 3PCF as a function of luminosity and color in both real and redshift space. As galaxy luminosity and color are varied, we find small differences in the amplitude and shape dependence of the reduced 3PCF, at a level qualitatively consistent with recent measurements from the SDSS and 2dFGRS. We confirm that discrepancies between previous 3PCF measurements can be explained in part by differences in binning choices. We explore the degree to which a simple local bias model can fit the simulated 3PCF. The agreement between the model predictions and galaxy 3PCF measurements lends further credence to the straightforward association of galaxies with CDM halos and subhalos.
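The local bias model explored above is, at leading order, the standard quadratic-bias transformation of the reduced 3PCF. A sketch, with placeholder bias values:

```python
def q_reduced_biased(q_matter, b1, b2):
    """Leading-order local quadratic bias prediction for the reduced 3PCF:
    Q_g = Q_m / b1 + b2 / b1**2, with linear bias b1 and quadratic bias b2."""
    return q_matter / b1 + b2 / b1 ** 2

# Hypothetical values: matter Q_m = 0.8, a b1 = 1.5, b2 = -0.2 galaxy sample.
q_gal = q_reduced_biased(0.8, 1.5, -0.2)
```

Fitting `b1` and `b2` to a measured `Q_g` is what lets the 3PCF break the bias/growth degeneracy left by the two-point function alone.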
Transfer function modeling of damping mechanisms in viscoelastic plates
NASA Technical Reports Server (NTRS)
Slater, J. C.; Inman, D. J.
1991-01-01
This work formulates a method for modeling material damping characteristics in plates. The Sophie Germain equation of classical plate theory is modified to incorporate hysteresis effects represented by complex stiffness, using the transfer function approach proposed by Golla and Hughes (1985); however, the procedure is not limited to this representation. The governing characteristic equation is decoupled through separation of variables, yielding a solution similar to that of undamped classical plate theory and allowing solution of the steady-state as well as the transient response problem.
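The complex-stiffness (hysteretic) representation can be illustrated on a single mode. The mass, stiffness, and loss factor below are placeholder values, not plate parameters from the paper:

```python
import numpy as np

def hysteretic_frf(w, m=1.0, k=1.0, eta=0.05):
    """Receptance of a single mode with complex (hysteretic) stiffness
    k*(1 + i*eta): H(w) = 1 / (k*(1 + i*eta) - m*w**2)."""
    return 1.0 / (k * (1 + 1j * eta) - m * w ** 2)

w = np.linspace(0.0, 2.0, 2001)
H = hysteretic_frf(w)
peak_w = w[np.argmax(np.abs(H))]   # resonance of the damped mode
```

Unlike viscous damping, the loss factor `eta` here is frequency-independent, which is exactly the hysteretic behavior the complex stiffness encodes; the resonance amplitude is `1/(k*eta)`.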
Spectral properties of a double-quantum-dot structure: A causal Green's function approach
NASA Astrophysics Data System (ADS)
You, J. Q.; Zheng, Hou-Zhi
1999-09-01
Spectral properties of a double quantum dot (QD) structure are studied by a causal Green's function (GF) approach. The double QD system is modeled by an Anderson-type Hamiltonian in which both the intra- and interdot Coulomb interactions are taken into account. The GF's are derived by an equation-of-motion method and the real-space renormalization-group technique. The numerical results show that the average occupation number of electrons in the QD exhibits staircase features and the local density of states depends appreciably on the electron occupation of the dot.
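As a sanity check on such GF calculations, in the non-interacting limit the dot Green's function reduces to a single broadened level. The level energy and broadening below are placeholders:

```python
import numpy as np

def dot_ldos(w, eps=0.0, gamma=0.1):
    """Local density of states of a single dot level at energy eps with
    lead-induced broadening gamma: A(w) = -Im[G(w)] / pi, where the
    retarded GF is G(w) = 1 / (w - eps + i*gamma)."""
    G = 1.0 / (w - eps + 1j * gamma)
    return -G.imag / np.pi

w = np.linspace(-10.0, 10.0, 20001)
ldos = dot_ldos(w)
norm = ldos.sum() * (w[1] - w[0])   # should integrate to ~1 (one level)
```

Intra- and interdot Coulomb terms split and shift such Lorentzian peaks, producing the staircase occupation features reported above.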
Peng, Changhui; Guiot, Joel; Wu, Haibin; Jiang, Hong; Luo, Yiqi
2011-05-01
It is increasingly being recognized that global ecological research requires novel methods and strategies with which to combine process-based ecological models and data in cohesive, systematic ways. Model-data fusion (MDF) is an emerging area of research in ecology and palaeoecology. It provides a new quantitative approach that offers a high level of empirical constraint over model predictions based on observations using inverse modelling and data assimilation (DA) techniques. Increasing demands to integrate model and data methods over the past decade have led to MDF utilization in palaeoecology, ecology and earth system sciences. This paper reviews key features and principles of MDF and highlights different approaches with regards to DA. After providing a critical evaluation of the numerous benefits of MDF and its current applications in palaeoecology (i.e., palaeoclimatic reconstruction, palaeovegetation and palaeocarbon storage) and ecology (i.e. parameter and uncertainty estimation, model error identification, remote sensing and ecological forecasting), the paper discusses method limitations, current challenges and future research directions. In the ongoing data-rich era of today's world, MDF could become an important diagnostic and prognostic tool with which to improve our understanding of ecological processes while testing ecological theory and hypotheses and forecasting changes in ecosystem structure, function and services.
NASA Astrophysics Data System (ADS)
Dries, M.; Trager, S. C.; Koopmans, L. V. E.
2016-11-01
Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach, we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias in the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well suited for this.
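The Markov chain Monte Carlo machinery can be sketched on a one-parameter mock problem (Gaussian data with a flat prior). This stands in for, and is far simpler than, the hierarchical SPS inference described above:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(2.0, 1.0, 200)     # mock observable; "true" parameter = 2.0

def log_post(theta):
    """Log posterior: flat prior, unit-variance Gaussian likelihood."""
    return -0.5 * np.sum((data - theta) ** 2)

# Random-walk Metropolis sampler.
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.1 * rng.normal()            # symmetric proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                             # accept
    chain.append(theta)

post_mean = np.mean(chain[5000:])                # discard burn-in
```

In the full problem, `theta` becomes the IMF-prior parameters plus age and metallicity, and the likelihood compares model spectra to the observed integrated light.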
INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...
Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
"Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...
Instructional Model for FLES: A Conversational Approach
ERIC Educational Resources Information Center
Clivaz, Denise; Roberts, Elizabeth
2010-01-01
In their tenure as French teachers at the Avery Coonley School in Downers Grove, Illinois, the authors have continually sought an effective instructional model for their K-8 students. Their school is blessed to have a stable and historic French program for young learners, and each of their students is required to take French from kindergarten…
Medical image denoising using one-dimensional singularity function model.
Luo, Jianhua; Zhu, Yuemin; Hiba, Bassem
2010-03-01
A novel denoising approach is proposed that is based on a spectral data substitution mechanism through using a mathematical model of one-dimensional singularity function analysis (1-D SFA). The method consists in dividing the complete spectral domain of the noisy signal into two subsets: the preserved set where the spectral data are kept unchanged, and the substitution set where the original spectral data having lower signal-to-noise ratio (SNR) are replaced by those reconstructed using the 1-D SFA model. The preserved set containing original spectral data is determined according to the SNR of the spectrum. The singular points and singularity degrees in the 1-D SFA model are obtained through calculating finite difference of the noisy signal. The theoretical formulation and experimental results demonstrated that the proposed method allows more efficient denoising while introducing less distortion, and presents significant improvement over conventional denoising methods.
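The preserved-set/substitution-set mechanism can be sketched generically. Here a circularly smoothed copy of the signal stands in for the 1-D SFA model reconstruction, and the cutoff and window width are invented:

```python
import numpy as np

def spectral_substitution(y, keep=8, width=15):
    """Split the spectrum into a preserved set (high-SNR bins, kept
    unchanged) and a substitution set (low-SNR bins, replaced by the
    spectrum of a smooth model signal). A circularly smoothed copy of y
    stands in for the 1-D SFA model reconstruction."""
    n = len(y)
    win = np.zeros(n)
    win[:15] = np.hanning(width)
    win /= win.sum()
    smooth = np.fft.ifft(np.fft.fft(y) * np.fft.fft(win)).real  # model stand-in
    Y, M = np.fft.fft(y), np.fft.fft(smooth)
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))      # integer frequency index
    return np.fft.ifft(np.where(k <= keep, Y, M)).real

rng = np.random.default_rng(4)
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
clean = np.sin(t) + 0.5 * np.cos(2 * t)
noisy = clean + rng.normal(0, 0.2, t.size)
den = spectral_substitution(noisy)
```

Because the preserved low-frequency bins pass through untouched, the signal structure is not distorted; only the low-SNR bins are rebuilt from the model, which is the distortion advantage the abstract claims for the SFA approach.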
A Bilingual Production Model: Levelt's "Speaking" Model Approach.
ERIC Educational Resources Information Center
De Bot, Kees
1992-01-01
A description is given of a model of the bilingual speaker. The model is based on Levelt's (1989) "speaking model," which sketches a framework in which a number of highly autonomous information processing components are postulated. (56 references) (JL)
A Qualitative Approach to Sketch the Graph of a Function.
ERIC Educational Resources Information Center
Alson, Pedro
1992-01-01
Presents a qualitative and global method of graphing functions that involves transformations of the graph of a known function in the Cartesian coordinate system, referred to as graphic operators. Explains how the method has been taught to students and offers some comments about the results obtained. (MDH)
Approaching Functions: Cabri Tools as Instruments of Semiotic Mediation
ERIC Educational Resources Information Center
Falcade, Rossana; Laborde, Colette; Mariotti, Maria Alessandra
2007-01-01
Assuming that dynamic features of Dynamic Geometry Software may provide a basic representation of both variation and functional dependency, and taking the Vygotskian perspective of semiotic mediation, a teaching experiment was designed with the aim of introducing students to the idea of function. This paper focuses on the use of the Trace tool and…
Using Loss Functions for DIF Detection: An Empirical Bayes Approach.
ERIC Educational Resources Information Center
Zwick, Rebecca; Thayer, Dorothy; Lewis, Charles
2000-01-01
Studied a method for flagging differential item functioning (DIF) based on loss functions. Builds on earlier research that led to the development of an empirical Bayes enhancement to the Mantel-Haenszel DIF analysis. Tested the method through simulation and found its performance better than some commonly used DIF classification systems. (SLD)
Development on electromagnetic impedance function modeling and its estimation
Sutarno, D.
2015-09-30
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration is forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process, and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, and also some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts focused on handling the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) that are frequently superimposed on normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, while the full-solution modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including near-, transition
Integration models: multicultural and liberal approaches confronted
NASA Astrophysics Data System (ADS)
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule, and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to the individual rights growing out of the liberal tradition and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
Recent approaches in physical modification of protein functionality.
Mirmoghtadaie, Leila; Shojaee Aliabadi, Saeedeh; Hosseini, Seyede Marzieh
2016-05-15
Today, there is a growing demand for novel technologies, such as high hydrostatic pressure, irradiation, ultrasound, filtration, supercritical carbon dioxide, plasma technology, and electrical methods, which are not based on chemicals or heat treatment, for modifying ingredient functionality and extending product shelf life. Proteins are essential components in many food processes and provide various functions in food quality and stability. They can create interfacial films that stabilize emulsions and foams, as well as interact to form networks that play key roles in gel and edible film production. These properties are referred to as 'protein functionality' and can be modified by different processing. The common protein modification methods (chemical, enzymatic and physical) have strong effects on the structure and functionality of food proteins. Furthermore, novel technologies can modify protein structure and functional properties, as reviewed in this study.
A Gaussian graphical model approach to climate networks
Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus
2014-06-15
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data, the nodes are usually defined on a spatial grid and the edges are derived from a bivariate dependency measure, such as the Pearson correlation coefficient or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations, denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data while disregarding any physical processes may involve too strong simplifications to describe the dynamics of the climate system appropriately.
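A minimal sketch of the GGM step described above, assuming Gaussian data: partial correlations can be read off the inverse covariance (precision) matrix, and edges are drawn only where they are non-negligible. The variable names and the three-variable toy field are illustrative, not from the paper.

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the inverse covariance (precision) matrix.

    In a Gaussian graphical model, a zero partial correlation between two
    variables means there is no direct edge between the corresponding nodes.
    data: (n_samples, n_vars) array.
    """
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)  # standard sign convention
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy example: x2 is driven by both x0 and x1, while x0 and x1 are independent.
rng = np.random.default_rng(1)
x0 = rng.normal(size=5000)
x1 = rng.normal(size=5000)
x2 = x0 + x1 + 0.1 * rng.normal(size=5000)
data = np.column_stack([x0, x1, x2])
pc = partial_correlations(data)
```

Note the "explaining-away" effect: x0 and x1 are marginally uncorrelated, yet conditioning on their common effect x2 induces a strong negative partial correlation between them, which is exactly the kind of distinction between direct and indirect dependency the abstract emphasizes.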
Methodological approaches for nanotoxicology using cnidarian models.
Ambrosone, Alfredo; Tortiglione, Claudia
2013-03-01
The remarkable amenability of aquatic invertebrates to laboratory manipulation has already made a few species belonging to the phylum Cnidaria attractive systems for exploring animal development. The proliferation of molecular and genomic tools, including the whole genome sequences of the freshwater polyp Hydra vulgaris and the starlet sea anemone Nematostella vectensis, further enhances the promise of these species for investigating the evolution of key aspects of developmental biology. In addition, the ease with which cnidarian populations can be investigated within their natural ecological context suggests that these models may be profitably expanded to address important questions in ecology and toxicology. In this review, we explore the traits that make Hydra and Nematostella exceptionally attractive model organisms in the context of nanotoxicology, and highlight a number of methods and developments likely to further increase that utility in the near future.
Intelligence Fusion Modeling. A Proposed Approach.
1983-09-16
based techniques developed by artificial intelligence researchers. This paper describes the application of these techniques in the modeling of an… intelligence requirements, although the methods presented are applicable. We treat PIR/IR as given. Movement… items from the PIR/IR/HVT decomposition are received from the CMDS. Formatted tactical intelligence reports are received from sensors of like types
A Prediction Model of the Capillary Pressure J-Function
Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.
2016-01-01
The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the water saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model; the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
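As a hedged illustration of fitting a power-function model of the form J(Sw) = a·Sw^b (the functional form named above; the coefficients and data below are synthetic, not the paper's), a log-log linear least-squares fit suffices:

```python
import numpy as np

def fit_power_law(sw, j):
    """Fit J(Sw) = a * Sw**b by linear least squares in log-log space.

    log J = log a + b * log Sw, so a straight-line fit of log J
    against log Sw recovers both parameters.
    """
    slope, intercept = np.polyfit(np.log(sw), np.log(j), 1)
    return np.exp(intercept), slope  # (a, b)

# Synthetic J-function data following a power law with small noise
# (illustrative values only, not measured capillary pressure data).
rng = np.random.default_rng(0)
sw = np.linspace(0.2, 0.9, 30)          # water saturation
j_obs = 0.5 * sw**-1.8 * np.exp(0.01 * rng.normal(size=sw.size))
a, b = fit_power_law(sw, j_obs)
```

Once (a, b) are estimated, J (and hence capillary pressure) can be evaluated at any saturation, which is what makes the power form convenient for relative permeability calculations.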
Different experimental approaches in modelling cataractogenesis
Kyselova, Zuzana
2010-01-01
Cataract, the opacification of the eye lens, is the leading cause of blindness worldwide. At present, the only remedy is surgical removal of the cataractous lens and substitution with a lens made of synthetic polymers. However, besides the significant costs of the operation and possible complications, an artificial lens simply does not have the overall optical qualities of a normal one. Cataract thus remains a significant public health problem, and biochemical solutions or pharmacological interventions that will maintain the transparency of the lens are much needed. Naturally, there is a persistent demand for suitable biological models. The ocular lens would appear to be an ideal organ for maintaining in culture because it lacks blood vessels and nerves. The lens in vivo obtains its nutrients and eliminates waste products via diffusion with the surrounding fluids. Lens opacification observed in vivo can be mimicked in vitro by the addition of the cataractogenic agent sodium selenite (Na2SeO3) to the culture medium. Moreover, since an overdose of sodium selenite also induces cataract in young rats, it has become an extremely rapid and convenient in vivo model of nuclear cataract. The main focus of this review is on selenium (Se) and its salt sodium selenite, their toxicological characteristics, and their safety data relevant to modelling cataractogenesis under either in vivo or in vitro conditions. Studies revealing the mechanisms of selenite-induced lens opacification are highlighted, and representative results from screening for potential anti-cataract agents are listed. PMID:21217865
Kizilkaya, Kadir; Tempelman, Robert J
2005-01-01
We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which link functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data were generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567
Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches
ERIC Educational Resources Information Center
Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem
2014-01-01
Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…
A modular approach for item response theory modeling with the R package flirt.
Jeon, Minjeong; Rijmen, Frank
2016-06-01
The new R package flirt is introduced for flexible item response theory (IRT) modeling of psychological, educational, and behavior assessment data. flirt integrates a generalized linear and nonlinear mixed modeling framework with graphical model theory. The graphical model framework allows for efficient maximum likelihood estimation. The key feature of flirt is its modular approach to facilitate convenient and flexible model specifications. Researchers can construct customized IRT models by simply selecting various modeling modules, such as parametric forms, number of dimensions, item and person covariates, person groups, link functions, etc. In this paper, we describe major features of flirt and provide examples to illustrate how flirt works in practice.
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High-level and more detailed functional descriptions are derived for each failed component based
Linking geophysics and soil function modelling - two examples
NASA Astrophysics Data System (ADS)
Krüger, J.; Franko, U.; Werban, U.; Dietrich, P.; Behrens, T.; Schmidt, K.; Fank, J.; Kroulik, M.
2011-12-01
potential hot spots where local adaptations of agricultural management would be required to improve soil functions. Example B realizes soil function modelling with an adapted model parameterization based on ground penetrating radar (GPR) data. This work shows an approach to handling the heterogeneity of soil properties with geophysical data used for modelling. The field site in Austria is characterised by highly heterogeneous soil with fluvioglacial gravel sediments. The variation in the thickness of the topsoil above a sandy, gravelly subsoil strongly influences the soil water balance. GPR detected the exact depth of the horizon between topsoil and subsoil. The extension of the input data improves the model performance of CANDY PLUS for plant biomass production. Both examples demonstrate how geophysics provides additional data for agroecosystem modelling, helping to identify alternative options for agricultural management decisions.
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Liao, James C.
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
Coupling approaches used in atmospheric entry models
NASA Astrophysics Data System (ADS)
Gritsevich, M. I.
2012-09-01
While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and have played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the asteroid and comet hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with recovered meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, and what are their origins, physical properties, and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in an atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How can both fragmentation and ablation effects be coupled in the model, taking the real size distribution of fragments into account? How can the recovery of recently fallen meteorites be specified and sped up, without letting weathering affect the samples too much? How big is the ratio of the pre-atmospheric projectile to the terminal body in terms of mass/volume? Which exact parameters besides initial mass define this ratio? More generally, how does an entering object affect the Earth's atmosphere and (if applicable) the Earth's surface? How can these impact consequences be predicted based on atmospheric trajectory data? How to describe atmospheric entry
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
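As a concrete point of reference (a standard textbook result, not necessarily the exact functional derived in the paper), for overdamped Langevin dynamics with drift f(x) and Gaussian white noise of strength D, the path probability distribution functional takes the Onsager-Machlup form:

```latex
P[x(\cdot)] \;\propto\; \exp\!\left( -\frac{1}{4D} \int_0^T \bigl( \dot{x}(t) - f\bigl(x(t)\bigr) \bigr)^2 \, \mathrm{d}t \right)
```

The probability of finding paths inside a tube around a given reference path, as discussed in the abstract, then follows by integrating this functional over all paths confined to the tube.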
Jackiw-Pi model: A superfield approach
NASA Astrophysics Data System (ADS)
Gupta, Saurabh
2014-12-01
We derive the off-shell nilpotent and absolutely anticommuting Becchi-Rouet-Stora-Tyutin (BRST) as well as anti-BRST transformations s_(a)b corresponding to the Yang-Mills gauge transformations of the 3D Jackiw-Pi model by exploiting the "augmented" superfield formalism. We also show that the Curci-Ferrari restriction, which is a hallmark of non-Abelian 1-form gauge theories, emerges naturally within this formalism and plays an instrumental role in providing the proof of the absolute anticommutativity of s_(a)b.
Bresson, D; Madadaki, C; Poisson, I; Habas, C; Mandonnet, E
2013-01-01
It is commonly believed that sulci offer a natural path to reach deep-seated lesions. However, it has also been argued that this approach carries a risk of damaging the vessels during the opening of the sulcus. We were therefore prompted to test the possibility of finding a transcortical path identified as non-functional by intraoperative brain mapping. A successful resection is presented of a left posterior isthmus clear cell ependymoma through a corridor selected on the basis of functional mapping in an awake patient. MRI performed at 12 months showed no tumour recurrence. Pre- and postoperative extensive testing confirmed an improvement of the patient's cognitive functions. Therefore, we were able to demonstrate the feasibility of a functionally tailored transcortical approach as an alternative to the transsulcal approach for deep-seated lesions. This concept should be validated in a larger patient series.
Functional integral approach to the kinetic theory of inhomogeneous systems
NASA Astrophysics Data System (ADS)
Fouvry, Jean-Baptiste; Chavanis, Pierre-Henri; Pichon, Christophe
2016-10-01
We present a derivation of the kinetic equation describing the secular evolution of spatially inhomogeneous systems with long-range interactions, the so-called inhomogeneous Landau equation, by relying on a functional integral formalism. We start from the BBGKY hierarchy derived from the Liouville equation. At order 1/N, where N is the number of particles, the evolution of the system is characterised by its 1-body distribution function and its 2-body correlation function. Introducing associated auxiliary fields, the evolution of these quantities may be rewritten as a traditional functional integral. By functionally integrating over the 2-body autocorrelation, one obtains a new constraint connecting the 1-body distribution function and the auxiliary fields. When inverted, this constraint allows us to obtain the closed non-linear kinetic equation satisfied by the 1-body distribution function. This derivation provides an alternative to previous methods, based either on the direct resolution of the truncated BBGKY hierarchy or on the Klimontovich equation. It may turn out to be fruitful for deriving more accurate kinetic equations, e.g., accounting for collective effects or higher-order correlation terms.
Logan, Deirdre E.; Carpino, Elizabeth A.; Chiang, Gloria; Condon, Marianne; Firn, Emily; Gaughan, Veronica J.; Hogan, Melinda, P.T.; Leslie, David S.; Olson, Katie, P.T.; Sager, Susan; Sethna, Navil; Simons, Laura E.; Zurakowski, David; Berde, Charles B.
2013-01-01
Objectives To examine clinical outcomes of an interdisciplinary day hospital treatment program (comprised of physical, occupational, and cognitive-behavioral therapies with medical and nursing services) for pediatric complex regional pain syndrome (CRPS). Methods The study is a longitudinal case series of consecutive patients treated in a day hospital pediatric pain rehabilitation program. Participants were 56 children and adolescents ages 8–18 years (median = 14 years) with CRPS spectrum conditions who failed to progress sufficiently with previous outpatient and/or inpatient treatment. Patients participated in daily physical therapy, occupational therapy, and psychological treatment and received nursing and medical care as necessary. The model places equal emphasis on physical and cognitive-behavioral approaches to pain management. Median duration of stay was 3 weeks. Outcome measures included assessments of physical, occupational, and psychological functioning at program admission, discharge, and at post-treatment follow-up at a median of 10 months post-discharge. Scores at discharge and follow-up were compared with measures on admission by Wilcoxon tests, paired t tests, or ANOVA as appropriate, with corrections for multiple comparisons. Results Outcomes demonstrate clinically and statistically significant improvements from admission to discharge in pain intensity (p<0.001), functional disability (p<0.001), subjective report of limb function (p<0.001), timed running (p<0.001), occupational performance (p<0.001), medication use (p<0.01), use of assistive devices (p<0.001), and emotional functioning (anxiety, p<0.001; depression, p<0.01). Functional gains were maintained or further improved at follow-up. Discussion A day-hospital interdisciplinary rehabilitation approach appears effective in reducing disability and improving physical and emotional functioning and occupational performance among children and adolescents with complex regional pain syndromes that
Piertney, Stuart B; Webster, Lucy M I
2010-04-01
Over the past two decades the fields of molecular ecology and population genetics have been dominated by the use of putatively neutral DNA markers, primarily to resolve spatio-temporal patterns of genetic variation to inform our understanding of population structure, gene flow and pedigree. Recent emphasis in comparative functional genomics, however, has fuelled a resurgence of interest in functionally important genetic variation that underpins phenotypic traits of adaptive or ecological significance. It may prove a major challenge to transfer genomics information from classical model species to examine functional diversity in non-model species in natural populations, but already multiple gene-targeted candidate loci with major effect on phenotype and fitness have been identified. Here we briefly describe some of the research strategies used for isolating and characterising functional genetic diversity at candidate gene-targeted loci, and illustrate the efficacy of some of these approaches using our own studies on red grouse (Lagopus lagopus scoticus). We then review how candidate gene markers have been used to: (1) quantify genetic diversity among populations to identify those depauperate in genetic diversity and requiring specific management action; (2) identify the strength and mode of selection operating on individuals within natural populations; and (3) understand direct mechanistic links between allelic variation at single genes and variance in individual fitness.
Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.
Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang
2014-01-01
Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of the ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction; therefore, there may be room for cooperation among the ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the membership functions of the ECs are viewed as the players in formulating the bargaining function. The solution of the proposed model is Pareto-optimal. An illustrative example is presented to demonstrate the application and performance of the proposed approach.
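A minimal sketch of a product-form (Nash-style) bargaining function over fuzzy membership functions of two conflicting ECs. The triangular memberships, the single decision variable, and the grid search are illustrative assumptions; the paper's bargaining formulation and solution method may differ.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

# One normalized design variable x; two conflicting ECs whose satisfaction
# peaks at different target levels (hypothetical values).
x = np.linspace(0.0, 1.0, 1001)
mu1 = triangular(x, 0.0, 0.7, 1.0)   # EC1 is most satisfied near x = 0.7
mu2 = triangular(x, 0.0, 0.3, 1.0)   # EC2 is most satisfied near x = 0.3

# Nash-style bargaining function: product of the players' memberships.
bargain = mu1 * mu2
x_star = x[np.argmax(bargain)]       # compromise target level
```

Because neither EC can be fully satisfied without hurting the other, the product-form objective selects a compromise between the two peaks (here the midpoint, by symmetry), illustrating the "room for cooperation" among ECs mentioned above.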
NASA Astrophysics Data System (ADS)
Das, Priyanka; Ahmad, Zeeshan; Singh, P. N.; Prasad, Ashutosh
2011-11-01
The present work makes use of experimental data for the real part of the microwave complex permittivity of spring oats (Avena sativa L.) at 2.45 GHz and 24 °C as a function of moisture content, as extracted from the literature. These permittivity data were individually converted to those for solid materials using seven independent mixture equations for the effective permittivity of random media. Moisture-dependent quadratic models for the complex permittivity of spring oats, as developed by the present group, were used to evaluate the dielectric loss factor of spring oats kernels. Using these data, seven density-independent permittivity functions were evaluated and plotted as a function of the moisture content of the samples. Second- and third-order polynomial regression equations were used for curve fitting with these data, and their performances are reported. Coefficients of determination (r²) approaching unity (~0.95-0.9999) and very small standard deviations (SD ~0.001-8.87) show good acceptability for these models. The regularity of these variations reveals the usefulness of the density-independent permittivity functions as indicators/calibrators of the moisture content of spring oats kernels. Given that the moisture content of grains and seeds is an important factor determining quality and affecting their storage, transportation, and milling, the work has potential for practical application.
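A small sketch of the curve-fitting step described above: a second-order polynomial fit of a permittivity function against moisture content, with the coefficient of determination r² computed explicitly. The data are synthetic stand-ins; the actual regression coefficients would come from the measured permittivity functions.

```python
import numpy as np

def poly_fit_r2(x, y, degree):
    """Least-squares polynomial fit plus coefficient of determination r^2."""
    coeffs = np.polyfit(x, y, degree)          # highest-degree coefficient first
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# Synthetic moisture vs. permittivity-function data (illustrative values only).
moisture = np.linspace(8.0, 24.0, 20)          # percent, hypothetical range
rng = np.random.default_rng(2)
f_perm = (0.02 * moisture**2 + 0.1 * moisture + 1.0
          + 0.05 * rng.normal(size=moisture.size))

coeffs2, r2_quad = poly_fit_r2(moisture, f_perm, 2)
```

An r² near unity, as reported in the abstract, indicates that the quadratic calibration curve explains nearly all of the variation of the permittivity function with moisture content.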
ERIC Educational Resources Information Center
Samejima, Fumiko
The simple sum procedure of the conditional PDF approach (plausibility of distractor function), combined with the normal approach method, was applied for estimating the plausibility functions of the distractors of the Level II vocabulary subtest items of the Iowa Tests of Basic Skills. In so doing, the normal ogive model was adopted for the correct…
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist.
Shen, Hua; McHale, Cliona M.; Smith, Martyn T; Zhang, Luoping
2015-01-01
Characterizing variability in the extent and nature of responses to environmental exposures is a critical aspect of human health risk assessment. Chemical toxicants act by many different mechanisms, however, and the genes involved in adverse outcome pathways (AOPs) and AOP networks are not yet characterized. Functional genomic approaches can reveal both toxicity pathways and susceptibility genes through knockdown or knockout of all non-essential genes in a cell of interest and identification of genes associated with a toxicity phenotype following toxicant exposure. Screening approaches in yeast and in human near-haploid leukemic KBM7 cells have identified roles for genes and pathways involved in the response to many toxicants, but are limited by partial homology between yeast and human genes and by limited relevance to normal diploid cells. RNA interference (RNAi) suppresses mRNA expression but is limited by off-target effects (OTEs) and incomplete knockdown. The recently developed gene-editing approach clustered regularly interspaced short palindromic repeats-associated nuclease (CRISPR-Cas9) can precisely knock out most regions of the genome at the DNA level with fewer OTEs than RNAi, in multiple human cell types, thus overcoming the limitations of the other approaches. It has been used to identify genes involved in the response to chemical and microbial toxicants in several human cell types and could readily be extended to the systematic screening of large numbers of environmental chemicals. CRISPR-Cas9 can also repress and activate gene expression, including that of non-coding RNA, with near-saturation, thus offering the potential to more fully characterize AOPs and AOP networks. Finally, CRISPR-Cas9 can generate complex animal models in which to conduct preclinical toxicity testing at the level of individual genotypes or haplotypes. Therefore, CRISPR-Cas9 is a powerful and flexible functional genomic screening approach that can be harnessed to provide
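A minimal sketch of the hit-calling step in such a pooled knockout screen. The guide-RNA counts and gene names are hypothetical, and real analyses use replicate-aware statistical models rather than a bare fold-change cutoff:

```python
import numpy as np

# Hypothetical guide-RNA read counts from a pooled knockout screen:
# control (unexposed) vs. treated (toxicant-exposed) cell populations.
# Guides depleted after exposure point to genes whose loss sensitizes cells.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
control = np.array([1000.0, 950.0, 1100.0, 980.0])
treated = np.array([60.0, 900.0, 1050.0, 40.0])

# Normalize each sample to its library size, then score genes by
# log2 fold change of normalized abundance.
ctrl_n = control / control.sum()
trt_n = treated / treated.sum()
log2fc = np.log2(trt_n / ctrl_n)

# Call strongly depleted guides (illustrative threshold) as candidate hits.
hits = [g for g, fc in zip(genes, log2fc) if fc < -2.0]
print(hits)
```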
Models of protocellular structures, functions and evolution
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; New, Michael H.; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
The central step in the origin of life was the emergence of organized structures from organic molecules available on the early earth. These predecessors to modern cells, called 'protocells,' were simple, membrane-bounded structures able to maintain themselves, grow, divide, and evolve. Since there is no fossil record of these earliest of life forms, it is a scientific challenge to discover plausible mechanisms for how these entities formed and functioned. To meet this challenge, it is essential to create laboratory models of protocells that capture the main attributes associated with living systems, while remaining consistent with known, or inferred, protobiological conditions. This report provides an overview of a project which has focused on protocellular metabolism and the coupling of metabolism to energy transduction. We have assumed that the emergence of systems endowed with genomes and capable of Darwinian evolution was preceded by a pre-genomic phase, in which protocells functioned and evolved using mostly proteins, without self-replicating nucleic acids such as RNA.
Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior
ERIC Educational Resources Information Center
Lynch, Annette; Fleming, Wm. Michael
2005-01-01
Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim- and perpetrator-oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…
An Evaluation of Cluster Analytic Approaches to Initial Model Specification.
ERIC Educational Resources Information Center
Bacon, Donald R.
2001-01-01
Evaluated the performance of several alternative cluster analytic approaches to initial model specification using population parameter analyses and a Monte Carlo simulation. Of the six cluster approaches evaluated, the one using the correlations of item correlations as a proximity metric and average linkage as a clustering algorithm performed the…
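A minimal sketch of the configuration singled out above (correlations of item correlations as the proximity metric, average linkage as the clustering algorithm), applied to synthetic two-factor item data rather than the study's population parameters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
# Hypothetical item responses: two latent factors, four items loading on each.
n = 200
f1, f2 = rng.normal(size=(2, n))
items = np.column_stack(
    [f1 + 0.5 * rng.normal(size=n) for _ in range(4)]
    + [f2 + 0.5 * rng.normal(size=n) for _ in range(4)]
)

item_corr = np.corrcoef(items, rowvar=False)  # item intercorrelation matrix
profile_corr = np.corrcoef(item_corr)         # correlations of item correlations
dist = 1.0 - profile_corr                     # proximity -> distance
dist = (dist + dist.T) / 2.0                  # enforce exact symmetry
np.fill_diagonal(dist, 0.0)

# Average-linkage hierarchical clustering, cut into two clusters (factors).
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(clusters)
```

With clean simulated structure, the two recovered clusters correspond to the two latent factors, which is the sense in which such a clustering can seed an initial model specification.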
An Estimating Equations Approach for the LISCOMP Model.
ERIC Educational Resources Information Center
Reboussin, Beth A.; Liang, Kung-Lee
1998-01-01
A quadratic estimating equations approach for the LISCOMP model is proposed that only requires specification of the first two moments. This method is compared with a three-stage generalized least squares approach through a numerical study and application to a study of life events and neurotic illness. (SLD)
Optimizing technology investments: a broad mission model approach
NASA Technical Reports Server (NTRS)
Shishko, R.
2003-01-01
A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.
A Constructive Neural-Network Approach to Modeling Psychological Development
ERIC Educational Resources Information Center
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
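The contrast between the two estimation philosophies can be illustrated on a deliberately simple stand-in model: a single binomial incidence probability, not the authors' multi-parameter decompression sickness models.

```python
from scipy.stats import beta

# Hypothetical dive-trial data: n exposures, k decompression-sickness cases.
n, k = 500, 14

# Maximum likelihood: a point estimate of a fixed-but-unknown parameter.
p_mle = k / n

# Bayesian: with a uniform Beta(1, 1) prior, the posterior is
# Beta(k + 1, n - k + 1), and a 95% credible interval comes directly
# from its quantiles -- a probability statement about the parameter itself.
posterior = beta(k + 1, n - k + 1)
ci_low, ci_high = posterior.ppf([0.025, 0.975])
print(f"MLE p = {p_mle:.4f}; 95% credible interval = ({ci_low:.4f}, {ci_high:.4f})")
```

The credible interval answers "with what probability does the parameter lie in this range?" directly, which is the advantage the abstract ascribes to the Bayesian approach for multi-peaked likelihoods.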
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
NASA Technical Reports Server (NTRS)
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can then be analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
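A minimal sketch of such a genetic-algorithm search over candidate reaction sets. Each bitstring genome marks which reactions from a hypothetical pool are included; the fitness function here is a toy stand-in (distance to a known-good set) for scoring simulated polymer-concentration dynamics:

```python
import random

random.seed(1)

N_REACTIONS = 20  # size of the candidate reaction pool (hypothetical)
# Stand-in for "exhibits the desired dynamics": a reference inclusion
# pattern; a real run would instead simulate the reaction network and
# compare its trajectory to the target behavior.
TARGET = [random.randint(0, 1) for _ in range(N_REACTIONS)]

def fitness(genome):
    """Count reactions whose inclusion/exclusion matches the target set."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=40, generations=60, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(N_REACTIONS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_REACTIONS)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut_rate else g
                     for g in child]                # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", N_REACTIONS)
```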
A Networks Approach to Modeling Enzymatic Reactions.
Imhof, P
2016-01-01
Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved, and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.
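The graph search over such a network can be sketched as follows. The states, connectivity, and barrier heights are hypothetical, and the pathway score here is the summed barrier height along the path (one simple choice; minimizing the single highest barrier is another common criterion):

```python
import heapq

# Hypothetical transition network: nodes are intermediate states, edge
# weights are barrier heights (kcal/mol) for the chemical/conformational
# step between them. "R" is the reactant state, "P" the product state.
network = {
    "R":  {"I1": 8.0, "I2": 12.0},
    "I1": {"I3": 6.0, "P": 15.0},
    "I2": {"I3": 3.0},
    "I3": {"P": 5.0},
    "P":  {},
}

def best_pathway(graph, start, goal):
    """Dijkstra search for the pathway minimizing summed barrier heights."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, barrier in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + barrier, nxt, path + [nxt]))
    return float("inf"), []

cost, path = best_pathway(network, "R", "P")
print(path, cost)  # ['R', 'I1', 'I3', 'P'] 19.0
```

Enumerating the next-cheapest routes in the same graph yields the "alternative routes" the abstract mentions.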
A CFD Approach to Modeling Spacecraft Fuel Slosh
NASA Technical Reports Server (NTRS)
Marsell, Brandon; Gangadharan, Sathya; Chatman, Yadira; Sudermann, James; Schlee, Keith; Ristow, James E.
2009-01-01
Energy dissipation and resonant coupling from sloshing fuel in spacecraft fuel tanks is a problem that occurs in the design of many spacecraft. In the case of a spin stabilized spacecraft, this energy dissipation can cause a growth in the spacecraft's nutation (wobble) that may lead to disastrous consequences for the mission. Even in non-spinning spacecraft, coupling between the spacecraft or upper stage flight control system and an unanticipated slosh resonance can result in catastrophe. By using a Computational Fluid Dynamics (CFD) solver such as Fluent, a model for this fuel slosh can be created. The accuracy of the model must be tested by comparing its results to an experimental test case. Such a model will allow for the variation of many different parameters such as fluid viscosity and gravitational field, yielding a deeper understanding of spacecraft slosh dynamics. In order to gain a better understanding of the dynamics behind sloshing fluids, the Launch Services Program (LSP) at the NASA Kennedy Space Center (KSC) is interested in finding ways to better