Shih, Shirley L; Zafonte, Ross; Bates, David W; Gerrard, Paul; Goldstein, Richard; Mix, Jacqueline; Niewczyk, Paulette; Greysen, S Ryan; Kazis, Lewis; Ryan, Colleen M; Schneider, Jeffrey C
2016-10-01
Functional status is associated with patient outcomes, but is rarely included in hospital readmission risk models. The objective of this study was to determine whether functional status is a better predictor of 30-day acute care readmission than traditionally investigated variables including demographics and comorbidities. Retrospective database analysis between 2002 and 2011. 1158 US inpatient rehabilitation facilities. 4,199,002 inpatient rehabilitation facility admissions comprising patients from 16 impairment groups within the Uniform Data System for Medical Rehabilitation database. Logistic regression models predicting 30-day readmission were developed based on age, gender, comorbidities (Elixhauser comorbidity index, Deyo-Charlson comorbidity index, and Medicare comorbidity tier system), and functional status [Functional Independence Measure (FIM)]. We hypothesized that (1) function-based models would outperform demographic- and comorbidity-based models and (2) the addition of demographic and comorbidity data would not significantly enhance function-based models. For each impairment group, Function Only Models were compared against Demographic-Comorbidity Models and Function Plus Models (Function-Demographic-Comorbidity Models). The primary outcome was 30-day readmission, and the primary measure of model performance was the c-statistic. All-cause 30-day readmission rate from inpatient rehabilitation facilities to acute care hospitals was 9.87%. C-statistics for the Function Only Models were 0.64 to 0.70. For all 16 impairment groups, the Function Only Model demonstrated better c-statistics than the Demographic-Comorbidity Models (c-statistic difference: 0.03-0.12). The best-performing Function Plus Models exhibited negligible improvements in model performance compared to Function Only Models, with c-statistic improvements of only 0.01 to 0.05. Readmissions are currently used as a marker of hospital performance, with recent financial penalties to hospitals for excessive readmissions. Function-based readmission models outperform models based only on demographics and comorbidities. Readmission risk models would benefit from the inclusion of functional status as a primary predictor. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Modified hyperbolic sine model for titanium dioxide-based memristive thin films
NASA Astrophysics Data System (ADS)
Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana
2018-03-01
Since the emergence of memristor as the newest fundamental circuit elements, studies on memristor modeling have been evolved. To date, the developed models were based on the linear model, linear ionic drift model using different window functions, tunnelling barrier model and hyperbolic-sine function based model. Although using hyperbolic-sine function model could predict the memristor electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. On the one hand, the addition of window function cannot provide an improved fitting. By multiplying the Yakopcic’s state variable model to Chang’s model on the other hand resulted in the closer agreement with the TiO2 thin film experimental data. The percentage error was approximately 2.15%.
NASA Astrophysics Data System (ADS)
Song, H. S.; Li, M.; Qian, W.; Song, X.; Chen, X.; Scheibe, T. D.; Fredrickson, J.; Zachara, J. M.; Liu, C.
2016-12-01
Modeling environmental microbial communities at individual organism level is currently intractable due to overwhelming structural complexity. Functional guild-based approaches alleviate this problem by lumping microorganisms into fewer groups based on their functional similarities. This reduction may become ineffective, however, when individual species perform multiple functions as environmental conditions vary. In contrast, the functional enzyme-based modeling approach we present here describes microbial community dynamics based on identified functional enzymes (rather than individual species or their groups). Previous studies in the literature along this line used biomass or functional genes as surrogate measures of enzymes due to the lack of analytical methods for quantifying enzymes in environmental samples. Leveraging our recent development of a signature peptide-based technique enabling sensitive quantification of functional enzymes in environmental samples, we developed a genetically structured microbial community model (GSMCM) to incorporate enzyme concentrations and various other omics measurements (if available) as key modeling input. We formulated the GSMCM based on the cybernetic metabolic modeling framework to rationally account for cellular regulation without relying on empirical inhibition kinetics. In the case study of modeling denitrification process in Columbia River hyporheic zone sediments collected from the Hanford Reach, our GSMCM provided a quantitative fit to complex experimental data in denitrification, including the delayed response of enzyme activation to the change in substrate concentration. Our future goal is to extend the modeling scope to the prediction of carbon and nitrogen cycles and contaminant fate. Integration of a simpler version of the GSMCM with PFLOTRAN for multi-scale field simulations is in progress.
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. This paper is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of basis function as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
USDA-ARS?s Scientific Manuscript database
In this paper we develop a model for computing directional output distance functions with endogenously determined direction vectors. We show how this model is related to the slacks-based directional distance function introduced by Fare and Grosskopf and show how to use the slacks-based function to e...
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk
2008-11-01
Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Minjing; Qian, Wei-jun; Gao, Yuqian
The kinetics of biogeochemical processes in natural and engineered environmental systems are typically described using Monod-type or modified Monod-type models. These models rely on biomass as surrogates for functional enzymes in microbial community that catalyze biogeochemical reactions. A major challenge to apply such models is the difficulty to quantitatively measure functional biomass for constraining and validating the models. On the other hand, omics-based approaches have been increasingly used to characterize microbial community structure, functions, and metabolites. Here we proposed an enzyme-based model that can incorporate omics-data to link microbial community functions with biogeochemical process kinetics. The model treats enzymes asmore » time-variable catalysts for biogeochemical reactions and applies biogeochemical reaction network to incorporate intermediate metabolites. The sequences of genes and proteins from metagenomes, as well as those from the UniProt database, were used for targeted enzyme quantification and to provide insights into the dynamic linkage among functional genes, enzymes, and metabolites that are necessary to be incorporated in the model. The application of the model was demonstrated using denitrification as an example by comparing model-simulated with measured functional enzymes, genes, denitrification substrates and intermediates« less
NASA Astrophysics Data System (ADS)
Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.
2014-11-01
Computer-based conceptual design for routine design has made great strides, yet non-routine design has not been given due attention, and it is still poorly automated. Considering that the function-behavior-structure(FBS) model is widely used for modeling the conceptual design process, a computer-based creativity enhanced conceptual design model(CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented with fine-grain basic operation actions(BOA), and the corresponding BOA set in the function domain is then constructed. Choosing building blocks from the database, and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. For enhancing the capability to introduce new design variables into the conceptual design process, and dig out more innovative physical structure schemes, the indirect function-structure matching strategy based on reconstructing the combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and making full use of the main function and secondary functions of each basic structure in the process of reconstructing of the physical structures, new design variables and variants are introduced into the physical structure scheme reconstructing process, and a great number of simpler physical structure schemes to accomplish the overall function organically are figured out. The creativity enhanced conceptual design model presented has a dominant capability in introducing new deign variables in function domain and digging out simpler physical structures to accomplish the overall function, therefore it can be utilized to solve non-routine conceptual design problem.
Satoshi Hirabayashi; Chuck Kroll; David Nowak
2011-01-01
The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...
Bolandzadeh, Niousha; Kording, Konrad; Salowitz, Nicole; Davis, Jennifer C; Hsu, Liang; Chan, Alison; Sharma, Devika; Blohm, Gunnar; Liu-Ambrose, Teresa
2015-01-01
Current research suggests that the neuropathology of dementia-including brain changes leading to memory impairment and cognitive decline-is evident years before the onset of this disease. Older adults with cognitive decline have reduced functional independence and quality of life, and are at greater risk for developing dementia. Therefore, identifying biomarkers that can be easily assessed within the clinical setting and predict cognitive decline is important. Early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study, and collected 32 measures of physical function, health status and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: 1) based on baseline cognitive function, 2) based on variables consistently selected in every cross-validation loop, and 3) a full model based on all the 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47. This number was smaller than that of the full model (115.33) and the model with baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. We built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. In addition to reducing the complexity of the model without changing the model significantly, our model with the top variables improved the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf
2016-02-01
Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of VS, model-free and model based RL systems center on the VS that by integrating model-free signals (received as reward prediction error) and model-based reward related input computes value. Using the concept of reinforcement learning agent we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations we turned to the proactive brain concept, which offers an ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network the brain continuously organizes its environment into context frames enabling the formulation of analogy-based association that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore we suggest that the integration of model-based expectations regarding reward into the value signal is further supported by the efferent of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS). (c) 2016 APA, all rights reserved).
Weil, Joyce; Hutchinson, Susan R; Traxler, Karen
2014-11-01
Data from the Women's Health and Aging Study were used to test a model of factors explaining depressive symptomology. The primary purpose of the study was to explore the association between performance-based measures of functional ability and depression and to examine the role of self-rated physical difficulties and perceived instrumental support in mediating the relationship between performance-based functioning and depression. The inclusion of performance-based measures allows for the testing of functional ability as a clinical precursor to disability and depression: a critical, but rarely examined, association in the disablement process. Structural equation modeling supported the overall fit of the model and found an indirect relationship between performance-based functioning and depression, with perceived physical difficulties serving as a significant mediator. Our results highlight the complementary nature of performance-based and self-rated measures and the importance of including perception of self-rated physical difficulties when examining depression in older persons. © The Author(s) 2014.
Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H
2011-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goal is: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP, thickness, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
Functional Additive Mixed Models
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2014-01-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
Functional Additive Mixed Models.
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2015-04-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.
Directivity models produced for the Next Generation Attenuation West 2 (NGA-West 2) project
Spudich, Paul A.; Watson-Lamprey, Jennie; Somerville, Paul G.; Bayless, Jeff; Shahi, Shrey; Baker, Jack W.; Rowshandel, Badie; Chiou, Brian
2012-01-01
Five new directivity models are being developed for the NGA-West 2 project. All are based on the NGA-West 2 data base, which is considerably expanded from the original NGA-West data base, containing about 3,000 more records from earthquakes having finite-fault rupture models. All of the new directivity models have parameters based on fault dimension in km, not normalized fault dimension. This feature removes a peculiarity of previous models which made them inappropriate for modeling large magnitude events on long strike-slip faults. Two models are explicitly, and one is implicitly, 'narrowband' models, in which the effect of directivity does not monotonically increase with spectral period but instead peaks at a specific period that is a function of earthquake magnitude. These narrowband models' functional forms are capable of simulating directivity over a wider range of earthquake magnitude than previous models. The functional forms of the five models are presented.
Liu, Hong; Zhu, Jingping; Wang, Kai
2015-08-24
The geometrical attenuation model given by Blinn was widely used in the geometrical optics bidirectional reflectance distribution function (BRDF) models. Blinn's geometrical attenuation model based on symmetrical V-groove assumption and ray scalar theory causes obvious inaccuracies in BRDF curves and negatives the effects of polarization. Aiming at these questions, a modified polarized geometrical attenuation model based on random surface microfacet theory is presented by combining of masking and shadowing effects and polarized effect. The p-polarized, s-polarized and unpolarized geometrical attenuation functions are given in their separate expressions and are validated with experimental data of two samples. It shows that the modified polarized geometrical attenuation function reaches better physical rationality, improves the precision of BRDF model, and widens the applications for different polarization.
Jeong, Chan-Seok; Kim, Dongsup
2016-02-24
Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements for coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on the aspect of protein structure. In this study, we built an MRF model whose graphical topology is determined by the residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weight of the MRF model. This structure-based MRF method was evaluated for three data sets, each of which annotates catalytic site, allosteric site, and comprehensively determined functional site information. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can more accurately represent positional coevolution information compared to the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computation complexity.
Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.
Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L
2012-01-01
In this paper, we present a method that enable both homology-based approach and composition-based approach to further study the functional core (i.e., microbial core and gene core, correspondingly). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that generative topic model can be used to model the taxon abundance information obtained by homology-based approach and study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weight mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topic) in each sample. Second, we show that, generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). The model consider each genome as a mixture of latten genetic patterns (latent topics), while each functional pattern is a weighted mixture of the “N-mer” features, thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of uncovered latten genetic patterns. The experimental results demonstrate the effectiveness of proposed method.
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach of battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and, by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
Bayesian Inference for Functional Dynamics Exploring in fMRI Data.
Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing
2016-01-01
This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.
"Shape function + memory mechanism"-based hysteresis modeling of magnetorheological fluid actuators
NASA Astrophysics Data System (ADS)
Qian, Li-Jun; Chen, Peng; Cai, Fei-Long; Bai, Xian-Xu
2018-03-01
A hysteresis model based on "shape function + memory mechanism" is presented and its feasibility is verified through modeling the hysteresis behavior of a magnetorheological (MR) damper. A hysteresis phenomenon in resistor-capacitor (RC) circuit is first presented and analyzed. In the hysteresis model, the "memory mechanism" originating from the charging and discharging processes of the RC circuit is constructed by adopting a virtual displacement variable and updating laws for the reference points. The "shape function" is achieved and generalized from analytical solutions of the simple semi-linear Duhem model. Using the approach, the memory mechanism reveals the essence of specific Duhem model and the general shape function provides a direct and clear means to fit the hysteresis loop. In the frame of the structure of a "Restructured phenomenological model", the original hysteresis operator, i.e., the Bouc-Wen operator, is replaced with the new hysteresis operator. The comparative work with the Bouc-Wen operator based model demonstrates superior performances of high computational efficiency and comparable accuracy of the new hysteresis operator-based model.
Hemorrhage and Hemorrhagic Shock in Swine: A Review
1989-11-01
17 Temperature Regulation ....................... 18 Blood Gas and Acid- Base Status ....................... 18 Electrolyte...22 Renal Function .................................. 23 Hepatic Function ................................ 24 Central Nervous System Function...MODELS Most porcine hemorrhage models are based on concepts and procedures previously developed in other species, especially the dog. As a consequence
Functional Risk Modeling for Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed
2010-01-01
We introduce an approach to risk modeling that we call functional modeling , which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability the effects of diverse backup, which often exists when two or more independent elements are connected together, is properly accounted for.
ERIC Educational Resources Information Center
Fukuhara, Hirotaka; Kamata, Akihito
2011-01-01
A differential item functioning (DIF) detection method for testlet-based data was proposed and evaluated in this study. The proposed DIF model is an extension of a bifactor multidimensional item response theory (MIRT) model for testlets. Unlike traditional item response theory (IRT) DIF models, the proposed model takes testlet effects into…
Reflection and emission models for deserts derived from Nimbus-7 ERB scanner measurements
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Suttles, J. T.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus-7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara-Arabian, Gibson, and Saudi Deserts. The models were established by fitting the satellite measurements to analytic functions. For the shortwave, the model function is based on an approximate solution to the radiative transfer equation. The bidirectional-reflectance function was obtained from a single-scattering approximation with a Rayleigh-like phase function. The directional-reflectance model followed from integration of the bidirectional model and is a function of the sum and product of cosine solar and viewing zenith angles, thus satisfying reciprocity between these angles. The emittance model was based on a simple power-law of cosine viewing zenith angle.
Finite-element modeling of the human neurocranium under functional anatomical aspects.
Mall, G; Hubig, M; Koebke, J; Steinbuch, R
1997-08-01
Due to its functional significance the human skull plays an important role in biomechanical research. The present work describes a new Finite-Element model of the human neurocranium. The dry skull of a middle-aged woman served as a pattern. The model was developed using only the preprocessor (Mentat) of a commercial FE-system (Marc). Unlike that of other FE models of the human skull mentioned in the literature, the geometry in this model was designed according to functional anatomical findings. Functionally important morphological structures representing loci minoris resistentiae, especially the foramina and fissures of the skull base, were included in the model. The results of two linear static loadcase analyses in the region of the skull base underline the importance of modeling from the functional anatomical point of view.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov model well-known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Prediction of Chemical Function: Model Development and ...
The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present and therefore impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure were developed. This effort required collection, curation, and harmonization of publically-available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
Gruber-Baldini, Ann L.; Hicks, Gregory; Ostir, Glen; Klinedinst, N. Jennifer; Orwig, Denise; Magaziner, Jay
2015-01-01
Background Measurement of physical function post hip fracture has been conceptualized using multiple different measures. Purpose This study tested a comprehensive measurement model of physical function. Design This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Methods Using structural equation modeling, a measurement model of physical function which included grip strength, activities of daily living, instrumental activities of daily living and performance was tested for fit at 2 and 12 months post hip fracture and among male and female participants and validity of the measurement model of physical function was evaluated based on how well the model explained physical activity, exercise and social activities post hip fracture. Findings The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Conclusion Decisions about the ideal way in which to measure physical function should be based on outcomes considered and participant Clinical Implications The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. Practical but useful assessment of function should be considered and monitored over the recovery trajectory post hip fracture. PMID:26492866
2007-12-01
model. Finally, we build a small agent-based model using the component architecture to demonstrate the library’s functionality. 15. NUMBER OF...and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library’s functionality...prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements that demonstrate
Identifying Model-Based Reconfiguration Goals through Functional Deficiencies
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Trave-Massuyes, Louise
2004-01-01
Model-based diagnosis is now advanced to the point autonomous systems face some uncertain and faulty situations with success. The next step toward more autonomy is to have the system recovering itself after faults occur, a process known as model-based reconfiguration. After faults occur, given a prediction of the nominal behavior of the system and the result of the diagnosis operation, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.
Improved protein model quality assessments by changing the target function.
Uziela, Karolis; Menéndez Hurtado, David; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne
2018-06-01
Protein modeling quality is an important part of protein structure prediction. We have for more than a decade developed a set of methods for this problem. We have used various types of description of the protein and different machine learning methodologies. However, common to all these methods has been the target function used for training. The target function in ProQ describes the local quality of a residue in a protein model. In all versions of ProQ the target function has been the S-score. However, other quality estimation functions also exist, which can be divided into superposition- and contact-based methods. The superposition-based methods, such as S-score, are based on a rigid body superposition of a protein model and the native structure, while the contact-based methods compare the local environment of each residue. Here, we examine the effects of retraining our latest predictor, ProQ3D, using identical inputs but different target functions. We find that the contact-based methods are easier to predict and that predictors trained on these measures provide some advantages when it comes to identifying the best model. One possible reason for this is that contact based methods are better at estimating the quality of multi-domain targets. However, training on the S-score gives the best correlation with the GDT_TS score, which is commonly used in CASP to score the global model quality. To take the advantage of both of these features we provide an updated version of ProQ3D that predicts local and global model quality estimates based on different quality estimates. © 2018 Wiley Periodicals, Inc.
Simulating bimodal tall fescue growth with a degree-day-based process-oriented plant model
USDA-ARS?s Scientific Manuscript database
Plant growth simulation models have a temperature response function driving development, with a base temperature and an optimum temperature defined. Such growth simulation models often function well when plant development rate shows a continuous change throughout the growing season. This approach ...
Weighted functional linear regression models for gene-based association analysis.
Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I
2018-01-01
Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
NASA Astrophysics Data System (ADS)
Vanini, Seyed Ali Sadough; Abolghasemzadeh, Mohammad; Assadi, Abbas
2013-07-01
Functionally graded steels with graded ferritic and austenitic regions including bainite and martensite intermediate layers produced by electroslag remelting have attracted much attention in recent years. In this article, an empirical model based on the Zener-Hollomon (Z-H) constitutive equation with generalized material constants is presented to investigate the effects of temperature and strain rate on the hot working behavior of functionally graded steels. Next, a theoretical model, generalized by strain compensation, is developed for the flow stress estimation of functionally graded steels under hot compression based on the phase mixture rule and boundary layer characteristics. The model is used for different strains and grading configurations. Specifically, the results for αβγMγ steels from empirical and theoretical models showed excellent agreement with those of experiments of other references within acceptable error.
Curriculum Development: A Philosophical Model.
ERIC Educational Resources Information Center
Bruening, William H.
Presenting models based on the philosophies of Carl Rogers, John Dewey, Erich Fromm, and Jean-Paul Sartre, this paper proposes a philosophical approach to education and concludes with pragmatic suggestions concerning teaching based on a fully-functioning-person model. The fully-functioning person is characterized as being open to experience,…
Function-based payment model for inpatient medical rehabilitation: an evaluation.
Sutton, J P; DeJong, G; Wilkerson, D
1996-07-01
To describe the components of a function-based prospective payment model for inpatient medical rehabilitation that parallels diagnosis-related groups (DRGs), to evaluate this model in relation to stakeholder objectives, and to detail the components of a quality of care incentive program that, when combined with this payment model, creates an incentive for provides to maximize functional outcomes. This article describes a conceptual model, involving no data collection or data synthesis. The basic payment model described parallels DRGs. Information on the potential impact of this model on medical rehabilitation is gleaned from the literature evaluating the impact of DRGs. The conceptual model described is evaluated against the results of a Delphi Survey of rehabilitation providers, consumers, policymakers, and researchers previously conducted by members of the research team. The major shortcoming of a function-based prospective payment model for inpatient medical rehabilitation is that it contains no inherent incentive to maximize functional outcomes. Linkage of reimbursement to outcomes, however, by withholding a fixed proportion of the standard FRG payment amount, placing that amount in a "quality of care" pool, and distributing that pool annually among providers whose predesignated, facility-level, case-mix-adjusted outcomes are attained, may be one strategy for maximizing outcome goals.
Optimizing global liver function in radiation therapy treatment planning
NASA Astrophysics Data System (ADS)
Wu, Victor W.; Epelman, Marina A.; Wang, Hesheng; Romeijn, H. Edwin; Feng, Mary; Cao, Yue; Ten Haken, Randall K.; Matuszak, Martha M.
2016-09-01
Liver stereotactic body radiation therapy (SBRT) patients differ in both pre-treatment liver function (e.g. due to degree of cirrhosis and/or prior treatment) and radiosensitivity, leading to high variability in potential liver toxicity with similar doses. This work investigates three treatment planning optimization models that minimize risk of toxicity: two consider both voxel-based pre-treatment liver function and local-function-based radiosensitivity with dose; one considers only dose. Each model optimizes different objective functions (varying in complexity of capturing the influence of dose on liver function) subject to the same dose constraints and are tested on 2D synthesized and 3D clinical cases. The normal-liver-based objective functions are the linearized equivalent uniform dose (\\ell \\text{EUD} ) (conventional ‘\\ell \\text{EUD} model’), the so-called perfusion-weighted \\ell \\text{EUD} (\\text{fEUD} ) (proposed ‘fEUD model’), and post-treatment global liver function (GLF) (proposed ‘GLF model’), predicted by a new liver-perfusion-based dose-response model. The resulting \\ell \\text{EUD} , fEUD, and GLF plans delivering the same target \\ell \\text{EUD} are compared with respect to their post-treatment function and various dose-based metrics. Voxel-based portal venous liver perfusion, used as a measure of local function, is computed using DCE-MRI. In cases used in our experiments, the GLF plan preserves up to 4.6 % ≤ft(7.5 % \\right) more liver function than the fEUD (\\ell \\text{EUD} ) plan does in 2D cases, and up to 4.5 % ≤ft(5.6 % \\right) in 3D cases. The GLF and fEUD plans worsen in \\ell \\text{EUD} of functional liver on average by 1.0 Gy and 0.5 Gy in 2D and 3D cases, respectively. Liver perfusion information can be used during treatment planning to minimize the risk of toxicity by improving expected GLF; the degree of benefit varies with perfusion pattern. Although fEUD model optimization is computationally inexpensive and often achieves better GLF than \\ell \\text{EUD} model optimization does, the GLF model directly optimizes a more clinically relevant metric and can further improve fEUD plan quality.
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function called QMEANclust achieves a correlation coefficient of predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets and perform significantly better in selecting good models from the ensemble of server models than any other groups participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins each with 300 alternatives models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations. We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
Park, Hahnbeom; Lee, Gyu Rie; Heo, Lim; Seok, Chaok
2014-01-01
Protein loop modeling is a tool for predicting protein local structures of particular interest, providing opportunities for applications involving protein structure prediction and de novo protein design. Until recently, the majority of loop modeling methods have been developed and tested by reconstructing loops in frameworks of experimentally resolved structures. In many practical applications, however, the protein loops to be modeled are located in inaccurate structural environments. These include loops in model structures, low-resolution experimental structures, or experimental structures of different functional forms. Accordingly, discrepancies in the accuracy of the structural environment assumed in development of the method and that in practical applications present additional challenges to modern loop modeling methods. This study demonstrates a new strategy for employing a hybrid energy function combining physics-based and knowledge-based components to help tackle this challenge. The hybrid energy function is designed to combine the strengths of each energy component, simultaneously maintaining accurate loop structure prediction in a high-resolution framework structure and tolerating minor environmental errors in low-resolution structures. A loop modeling method based on global optimization of this new energy function is tested on loop targets situated in different levels of environmental errors, ranging from experimental structures to structures perturbed in backbone as well as side chains and template-based model structures. The new method performs comparably to force field-based approaches in loop reconstruction in crystal structures and better in loop prediction in inaccurate framework structures. This result suggests that higher-accuracy predictions would be possible for a broader range of applications. The web server for this method is available at http://galaxy.seoklab.org/loop with the PS2 option for the scoring function.
NASA Astrophysics Data System (ADS)
Pavlick, R.; Schimel, D.
2014-12-01
Dynamic Global Vegetation Models (DGVMs) typically employ only a small set of Plant Functional Types (PFTs) to represent the vast diversity of observed vegetation forms and functioning. There is growing evidence, however, that this abstraction may not adequately represent the observed variation in plant functional traits, which is thought to play an important role for many ecosystem functions and for ecosystem resilience to environmental change. The geographic distribution of PFTs in these models is also often based on empirical relationships between present-day climate and vegetation patterns. Projections of future climate change, however, point toward the possibility of novel regional climates, which could lead to no-analog vegetation compositions incompatible with the PFT paradigm. Here, we present results from the Jena Diversity-DGVM (JeDi-DGVM), a novel traits-based vegetation model, which simulates a large number of hypothetical plant growth strategies constrained by functional tradeoffs, thereby allowing for a more flexible temporal and spatial representation of the terrestrial biosphere. First, we compare simulated present-day geographical patterns of functional traits with empirical trait observations (in-situ and from airborne imaging spectroscopy). The observed trait patterns are then used to improve the tradeoff parameterizations of JeDi-DGVM. Finally, focusing primarily on the simulated leaf traits, we run the model with various amounts of trait diversity. We quantify the effects of these modeled biodiversity manipulations on simulated ecosystem fluxes and stocks for both present-day conditions and transient climate change scenarios. The simulation results reveal that the coarse treatment of plant functional traits by current PFT-based vegetation models may contribute substantial uncertainty regarding carbon-climate feedbacks. Further development of trait-based models and further investment in global in-situ and spectroscopic plant trait observations are needed.
Influence Function Learning in Information Diffusion Networks.
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2014-06-01
Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data.
NASA Astrophysics Data System (ADS)
Curceac, S.; Ternynck, C.; Ouarda, T.
2015-12-01
Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting and the functional observations (curves) express the daily measurements of the above mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting based on the bandwidth obtained by a cross validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayessian (BIC) Information Criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
Choi, Young Joon; Constantino, Jason; Vedula, Vijay; Trayanova, Natalia; Mittal, Rajat
2015-01-01
A methodology for the simulation of heart function that combines an MRI-based model of cardiac electromechanics (CE) with a Navier–Stokes-based hemodynamics model is presented. The CE model consists of two coupled components that simulate the electrical and the mechanical functions of the heart. Accurate representations of ventricular geometry and fiber orientations are constructed from the structural magnetic resonance and the diffusion tensor MR images, respectively. The deformation of the ventricle obtained from the electromechanical model serves as input to the hemodynamics model in this one-way coupled approach via imposed kinematic wall velocity boundary conditions and at the same time, governs the blood flow into and out of the ventricular volume. The time-dependent endocardial surfaces are registered using a diffeomorphic mapping algorithm, while the intraventricular blood flow patterns are simulated using a sharp-interface immersed boundary method-based flow solver. The utility of the combined heart-function model is demonstrated by comparing the hemodynamic characteristics of a normal canine heart beating in sinus rhythm against that of the dyssynchronously beating failing heart. We also discuss the potential of coupled CE and hemodynamics models for various clinical applications. PMID:26442254
Houston, Simon; Lithgow, Karen Vivien; Osbak, Kara Krista; Kenyon, Chris Richard; Cameron, Caroline E
2018-05-16
Syphilis continues to be a major global health threat with 11 million new infections each year, and a global burden of 36 million cases. The causative agent of syphilis, Treponema pallidum subspecies pallidum, is a highly virulent bacterium, however the molecular mechanisms underlying T. pallidum pathogenesis remain to be definitively identified. This is due to the fact that T. pallidum is currently uncultivatable, inherently fragile and thus difficult to work with, and phylogenetically distinct with no conventional virulence factor homologs found in other pathogens. In fact, approximately 30% of its predicted protein-coding genes have no known orthologs or assigned functions. Here we employed a structural bioinformatics approach using Phyre2-based tertiary structure modeling to improve our understanding of T. pallidum protein function on a proteome-wide scale. Phyre2-based tertiary structure modeling generated high-confidence predictions for 80% of the T. pallidum proteome (780/978 predicted proteins). Tertiary structure modeling also inferred the same function as primary structure-based annotations from genome sequencing pipelines for 525/605 proteins (87%), which represents 54% (525/978) of all T. pallidum proteins. Of the 175 T. pallidum proteins modeled with high confidence that were not assigned functions in the previously annotated published proteome, 167 (95%) were able to be assigned predicted functions. Twenty-one of the 175 hypothetical proteins modeled with high confidence were also predicted to exhibit significant structural similarity with proteins experimentally confirmed to be required for virulence in other pathogens. Phyre2-based structural modeling is a powerful bioinformatics tool that has provided insight into the potential structure and function of the majority of T. pallidum proteins and helped validate the primary structure-based annotation of more than 50% of all T. pallidum proteins with high confidence. This work represents the first T. pallidum proteome-wide structural modeling study and is one of few studies to apply this approach for the functional annotation of a whole proteome.
Model Based Mission Assurance: Emerging Opportunities for Robotic Systems
NASA Technical Reports Server (NTRS)
Evans, John W.; DiVenti, Tony
2016-01-01
The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiencies across the assurance functions. The MBSE environment supports not only system architecture development, but provides for support of Systems Safety, Reliability and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failures avoidance and mitigation in flight systems. This also is leading new assurance functions including model assurance and management of uncertainty in the modeling environment. Further, the assurance cases, a structured hierarchal argument or model, are emerging as a basis for supporting a comprehensive viewpoint in which to support Model Based Mission Assurance (MBMA).
Froissart bound and self-similarity based models of proton structure functions
NASA Astrophysics Data System (ADS)
Choudhury, D. K.; Saikia, Baishali
2018-03-01
Froissart bound implies that the total proton-proton cross-section (or equivalently proton structure function) cannot rise faster than log2s ˜log2 1 x. Compatibility of such behavior with the notion of self-similarity in proton structure function was suggested by us sometime back. In the present work, we generalize and improve it further by considering more recent self-similarity based models of proton structure functions and compare with recent data as well as with the model of Block, Durand, Ha and McKay.
Towards a model-based cognitive neuroscience of stopping - a neuroimaging perspective.
Sebastian, Alexandra; Forstmann, Birte U; Matzke, Dora
2018-07-01
Our understanding of the neural correlates of response inhibition has greatly advanced over the last decade. Nevertheless the specific function of regions within this stopping network remains controversial. The traditional neuroimaging approach cannot capture many processes affecting stopping performance. Despite the shortcomings of the traditional neuroimaging approach and a great progress in mathematical and computational models of stopping, model-based cognitive neuroscience approaches in human neuroimaging studies are largely lacking. To foster model-based approaches to ultimately gain a deeper understanding of the neural signature of stopping, we outline the most prominent models of response inhibition and recent advances in the field. We highlight how a model-based approach in clinical samples has improved our understanding of altered cognitive functions in these disorders. Moreover, we show how linking evidence-accumulation models and neuroimaging data improves the identification of neural pathways involved in the stopping process and helps to delineate these from neural networks of related but distinct functions. In conclusion, adopting a model-based approach is indispensable to identifying the actual neural processes underlying stopping. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
Functional status predicts acute care readmission in the traumatic spinal cord injury population.
Huang, Donna; Slocum, Chloe; Silver, Julie K; Morgan, James W; Goldstein, Richard; Zafonte, Ross; Schneider, Jeffrey C
2018-03-29
Context/objective Acute care readmission has been identified as an important marker of healthcare quality. Most previous models assessing risk prediction of readmission incorporate variables for medical comorbidity. We hypothesized that functional status is a more robust predictor of readmission in the spinal cord injury population than medical comorbidities. Design Retrospective cross-sectional analysis. Setting Inpatient rehabilitation facilities, Uniform Data System for Medical Rehabilitation data from 2002 to 2012 Participants traumatic spinal cord injury patients. Outcome measures A logistic regression model for predicting acute care readmission based on demographic variables and functional status (Functional Model) was compared with models incorporating demographics, functional status, and medical comorbidities (Functional-Plus) or models including demographics and medical comorbidities (Demographic-Comorbidity). The primary outcomes were 3- and 30-day readmission, and the primary measure of model performance was the c-statistic. Results There were a total of 68,395 patients with 1,469 (2.15%) readmitted at 3 days and 7,081 (10.35%) readmitted at 30 days. The c-statistics for the Functional Model were 0.703 and 0.654 for 3 and 30 days. The Functional Model outperformed Demographic-Comorbidity models at 3 days (c-statistic difference: 0.066-0.096) and outperformed two of the three Demographic-Comorbidity models at 30 days (c-statistic difference: 0.029-0.056). The Functional-Plus models exhibited negligible improvements (0.002-0.010) in model performance compared to the Functional models. Conclusion Readmissions are used as a marker of hospital performance. Function-based readmission models in the spinal cord injury population outperform models incorporating medical comorbidities. Readmission risk models for this population would benefit from the inclusion of functional status.
Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.
Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric
2010-07-20
Accurate protein loop structure models are important to understand functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model ranking approach to integrate multiple knowledge- or physics-based scoring functions. The procedure of identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4- to 12-residue in length using a functional space composed of several carefully-selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% less false positives in distinguishing the native conformation, indentifying a near-native model (RMSD < 0.5A from the native) as top-ranked, and selecting at least one near-native model in the top-5-ranked models, respectively. Similar effectiveness of the POC method is also found in the decoy sets from membrane protein loops. Furthermore, the POC method outperforms the other popularly-used consensus strategies in model ranking, such as rank-by-number, rank-by-rank, rank-by-vote, and regression-based methods. By integrating multiple knowledge- and physics-based scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.
Improving predicted protein loop structure ranking using a Pareto-optimality consensus method
2010-01-01
Background Accurate protein loop structure models are important to understand functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. Results We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model ranking approach to integrate multiple knowledge- or physics-based scoring functions. The procedure of identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4- to 12-residue in length using a functional space composed of several carefully-selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of ~20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% less false positives in distinguishing the native conformation, indentifying a near-native model (RMSD < 0.5A from the native) as top-ranked, and selecting at least one near-native model in the top-5-ranked models, respectively. Similar effectiveness of the POC method is also found in the decoy sets from membrane protein loops. Furthermore, the POC method outperforms the other popularly-used consensus strategies in model ranking, such as rank-by-number, rank-by-rank, rank-by-vote, and regression-based methods. Conclusions By integrating multiple knowledge- and physics-based scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set. PMID:20642859
NASA Astrophysics Data System (ADS)
Verma, Manish K.
Terrestrial gross primary productivity (GPP) is the largest and most variable component of the carbon cycle and is strongly influenced by phenology. Realistic characterization of spatio-temporal variation in GPP and phenology is therefore crucial for understanding dynamics in the global carbon cycle. In the last two decades, remote sensing has become a widely-used tool for this purpose. However, no study has comprehensively examined how well remote sensing models capture spatiotemporal patterns in GPP, and validation of remote sensing-based phenology models is limited. Using in-situ data from 144 eddy covariance towers located in all major biomes, I assessed the ability of 10 remote sensing-based methods to capture spatio-temporal variation in GPP at annual and seasonal scales. The models are based on different hypotheses regarding ecophysiological controls on GPP and span a range of structural and computational complexity. The results lead to four main conclusions: (i) at annual time scale, models were more successful capturing spatial variability than temporal variability; (ii) at seasonal scale, models were more successful in capturing average seasonal variability than interannual variability; (iii) simpler models performed as well or better than complex models; and (iv) models that were best at explaining seasonal variability in GPP were different from those that were best able to explain variability in annual scale GPP. Seasonal phenology of vegetation follows bounded growth and decay, and is widely modeled using growth functions. However, the specific form of the growth function affects how phenological dynamics are represented in ecosystem and remote sensing-base models. To examine this, four different growth functions (the logistic, Gompertz, Mirror-Gompertz and Richards function) were assessed using remotely sensed and in-situ data collected at several deciduous forest sites. All of the growth functions provided good statistical representation of in-situ and remote sensing time series. However, the Richards function captured observed asymmetric dynamics that were not captured by the other functions. The timing of key phenophase transitions derived using the Richards function therefore agreed best with observations. This suggests that ecosystem models and remote-sensing algorithms would benefit from using the Richards function to represent phenological dynamics.
Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning
Bath, Kevin G.; Daw, Nathaniel D.; Frank, Michael J.
2016-01-01
Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by “model-free” learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by “model-based” learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. SIGNIFICANCE STATEMENT Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes. Research implicates a dopamine-dependent striatal learning mechanism in the former type of choice. Although recent work has indicated that dopamine is also involved in flexible, goal-directed decision-making, it remains unclear whether it also contributes via striatum or via the dopamine-dependent working memory function of prefrontal cortex. We examined genetic indices of dopamine function in these regions and their relation to the two choice strategies. We found that striatal dopamine function related most clearly to the reflexive strategy, as previously shown, and that prefrontal dopamine related most clearly to the flexible strategy. These findings suggest that dissociable brain regions support dissociable choice strategies. PMID:26818509
van Eijk, Ruben PA; Eijkemans, Marinus JC; Rizopoulos, Dimitris
2018-01-01
Objective Amyotrophic lateral sclerosis (ALS) clinical trials based on single end points only partially capture the full treatment effect when both function and mortality are affected, and may falsely dismiss efficacious drugs as futile. We aimed to investigate the statistical properties of several strategies for the simultaneous analysis of function and mortality in ALS clinical trials. Methods Based on the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database, we simulated longitudinal patterns of functional decline, defined by the revised amyotrophic lateral sclerosis functional rating scale (ALSFRS-R) and conditional survival time. Different treatment scenarios with varying effect sizes were simulated with follow-up ranging from 12 to 18 months. We considered the following analytical strategies: 1) Cox model; 2) linear mixed effects (LME) model; 3) omnibus test based on Cox and LME models; 4) composite time-to-6-point decrease or death; 5) combined assessment of function and survival (CAFS); and 6) test based on joint modeling framework. For each analytical strategy, we calculated the empirical power and sample size. Results Both Cox and LME models have increased false-negative rates when treatment exclusively affects either function or survival. The joint model has superior power compared to other strategies. The composite end point increases false-negative rates among all treatment scenarios. To detect a 15% reduction in ALSFRS-R decline and 34% decline in hazard with 80% power after 18 months, the Cox model requires 524 patients, the LME model 794 patients, the omnibus test 526 patients, the composite end point 1,274 patients, the CAFS 576 patients and the joint model 464 patients. Conclusion Joint models have superior statistical power to analyze simultaneous effects on survival and function and may circumvent pitfalls encountered by other end points. Optimizing trial end points is essential, as selecting suboptimal outcomes may disguise important treatment clues. PMID:29593436
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer s needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product s functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product s design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM however is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: At what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record this data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created aid compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly however, when applying the EFDM, high level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
Graphic comparison of reserve-growth models for conventional oil and accumulation
Klett, T.R.
2003-01-01
The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recover-able oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term ?reserve growth? refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on datasets used to construct the models, field type, modeling method, and length of forecast span. Comparisons were also made between forecast functions based on field-level and reservoir- level growth, and between forecast functions based on older and newer data. The reserve-growth model used in the 1995 USGS National Assessment and the model currently used in the NOGA project provide forecast functions that yield similar estimates of potential additions to reserves. Both models are based on the Oil and Gas Integrated Field File from the Energy Information Administration (EIA), but different vintages of data (from 1977 through 1991 and 1977 through 1996, respectively). The model based on newer data can be used in place of the previous model, providing similar estimates of potential additions to reserves. Fore-cast functions for oil fields vary little from those for gas fields in these models; therefore, a single function may be used for both oil and gas fields, like that used in the USGS World Petroleum Assessment 2000. Forecast functions based on the field-level reserve growth model derived from the NRG Associates databases (from 1982 through 1998) differ from those derived from EIA databases (from 1977 through 1996). However, the difference may not be enough to preclude the use of the forecast functions derived from NRG data in place of the forecast functions derived from EIA data. Should the model derived from NRG data be used, separate forecast functions for oil fields and gas fields must be employed. The forecast function for oil fields from the model derived from NRG data varies significantly from that for gas fields, and a single function for both oil and gas fields may not be appropriate.
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
Fattebert, Julien; Robinson, Hugh S; Balme, Guy; Slotow, Rob; Hunter, Luke
2015-10-01
Natal dispersal promotes inter-population linkage, and is key to spatial distribution of populations. Degradation of suitable landscape structures beyond the specific threshold of an individual's ability to disperse can therefore lead to disruption of functional landscape connectivity and impact metapopulation function. Because it ignores behavioral responses of individuals, structural connectivity is easier to assess than functional connectivity and is often used as a surrogate for landscape connectivity modeling. However using structural resource selection models as surrogate for modeling functional connectivity through dispersal could be erroneous. We tested how well a second-order resource selection function (RSF) models (structural connectivity), based on GPS telemetry data from resident adult leopard (Panthera pardus L.), could predict subadult habitat use during dispersal (functional connectivity). We created eight non-exclusive subsets of the subadult data based on differing definitions of dispersal to assess the predictive ability of our adult-based RSF model extrapolated over a broader landscape. Dispersing leopards used habitats in accordance with adult selection patterns, regardless of the definition of dispersal considered. We demonstrate that, for a wide-ranging apex carnivore, functional connectivity through natal dispersal corresponds to structural connectivity as modeled by a second-order RSF. Mapping of the adult-based habitat classes provides direct visualization of the potential linkages between populations, without the need to model paths between a priori starting and destination points. The use of such landscape scale RSFs may provide insight into predicting suitable dispersal habitat peninsulas in human-dominated landscapes where mitigation of human-wildlife conflict should be focused. We recommend the use of second-order RSFs for landscape conservation planning and propose a similar approach to the conservation of other wide-ranging large carnivore species where landscape-scale resource selection data already exist.
Katsube, Takayuki; Wajima, Toshihiro; Ishibashi, Toru; Arjona Ferreira, Juan Camilo; Echols, Roger
2017-01-01
Cefiderocol, a novel parenteral siderophore cephalosporin, exhibits potent efficacy against most Gram-negative bacteria, including carbapenem-resistant strains. Since cefiderocol is excreted primarily via the kidneys, this study was conducted to develop a population pharmacokinetics (PK) model to determine dose adjustment based on renal function. Population PK models were developed based on data for cefiderocol concentrations in plasma, urine, and dialysate with a nonlinear mixed-effects model approach. Monte-Carlo simulations were conducted to calculate the probability of target attainment (PTA) of fraction of time during the dosing interval where the free drug concentration in plasma exceeds the MIC (T f >MIC ) for an MIC range of 0.25 to 16 μg/ml. For the simulations, dose regimens were selected to compare cefiderocol exposure among groups with different levels of renal function. The developed models well described the PK of cefiderocol for each renal function group. A dose of 2 g every 8 h with 3-h infusions provided >90% PTA for 75% T f >MIC for an MIC of ≤4 μg/ml for patients with normal renal function, while a more frequent dose (every 6 h) could be used for patients with augmented renal function. A reduced dose and/or extended dosing interval was selected for patients with impaired renal function. A supplemental dose immediately after intermittent hemodialysis was proposed for patients requiring intermittent hemodialysis. The PK of cefiderocol could be adequately modeled, and the modeling-and-simulation approach suggested dose regimens based on renal function, ensuring drug exposure with adequate bactericidal effect. Copyright © 2016 American Society for Microbiology.
Characterizing attention with predictive network models
Rosenberg, M. D.; Finn, E. S.; Scheinost, D.; Constable, R. T.; Chun, M. M.
2017-01-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals’ attentional abilities. Some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that (1) attention is a network property of brain computation, (2) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task, and (3) this architecture supports a general attentional ability common to several lab-based tasks and impaired in attention deficit hyperactivity disorder. Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. PMID:28238605
Adolescent brain development in normality and psychopathology
LUCIANA, MONICA
2014-01-01
Since this journal’s inception, the field of adolescent brain development has flourished, as researchers have investigated the underpinnings of adolescent risk-taking behaviors. Explanations based on translational models initially attributed such behaviors to executive control deficiencies and poor frontal lobe function. This conclusion was bolstered by evidence that the prefrontal cortex and its interconnections are among the last brain regions to structurally and functionally mature. As substantial heterogeneity of prefrontal function was revealed, applications of neuroeconomic theory to adolescent development led to dual systems models of behavior. Current epidemiological trends, behavioral observations, and functional magnetic resonance imaging based brain activity patterns suggest a quadratic increase in limbically mediated incentive motivation from childhood to adolescence and a decline thereafter. This elevation occurs in the context of immature prefrontal function, so motivational strivings may be difficult to regulate. Theoretical models explain this patterning through brain-based accounts of subcortical–cortical integration, puberty-based models of adolescent sensation seeking, and neurochemical dynamics. Empirically sound tests of these mechanisms, as well as investigations of biology–context interactions, represent the field’s most challenging future goals, so that applications to psychopathology can be refined and so that developmental cascades that incorporate neurobiological variables can be modeled. PMID:24342843
Adolescent brain development in normality and psychopathology.
Luciana, Monica
2013-11-01
Since this journal's inception, the field of adolescent brain development has flourished, as researchers have investigated the underpinnings of adolescent risk-taking behaviors. Explanations based on translational models initially attributed such behaviors to executive control deficiencies and poor frontal lobe function. This conclusion was bolstered by evidence that the prefrontal cortex and its interconnections are among the last brain regions to structurally and functionally mature. As substantial heterogeneity of prefrontal function was revealed, applications of neuroeconomic theory to adolescent development led to dual systems models of behavior. Current epidemiological trends, behavioral observations, and functional magnetic resonance imaging based brain activity patterns suggest a quadratic increase in limbically mediated incentive motivation from childhood to adolescence and a decline thereafter. This elevation occurs in the context of immature prefrontal function, so motivational strivings may be difficult to regulate. Theoretical models explain this patterning through brain-based accounts of subcortical-cortical integration, puberty-based models of adolescent sensation seeking, and neurochemical dynamics. Empirically sound tests of these mechanisms, as well as investigations of biology-context interactions, represent the field's most challenging future goals, so that applications to psychopathology can be refined and so that developmental cascades that incorporate neurobiological variables can be modeled.
Park, Sung Hee; Min, Sang-Gi; Jo, Yeon-Ji; Chun, Ji-Yeon
2015-01-01
In the dairy industry, natural plant-based powders are widely used to develop flavor and functionality. However, most of these ingredients are water-insoluble; therefore, emulsification is essential. In this study, the efficacy of high pressure homogenization (HPH) on natural plant (chocolate or vanilla)-based model emulsions was investigated. The particle size, electrical conductivity, Brix, pH, and color were analyzed after HPH. HPH significantly decreased the particle size of chocolate-based emulsions as a function of elevated pressures (20-100 MPa). HPH decreased the mean particle size of chocolate-based emulsions from 29.01 μm to 5.12 μm, and that of vanilla-based emulsions from 4.18 μm to 2.44 μm. Electrical conductivity increased as a function of the elevated pressures after HPH, for both chocolate- and vanilla-based model emulsions. HPH at 100 MPa increased the electrical conductivity of chocolate-based model emulsions from 0.570 S/m to 0.680 S/m, and that of vanilla-based model emulsions from 0.573 S/m to 0.601 S/m. Increased electrical conductivity would be attributed to colloidal phase modification and dispersion of oil globules. Brix of both chocolate- and vanilla-based model emulsions gradually increased as a function of the HPH pressure. Thus, HPH increased the solubility of plant-based powders by decreasing the particle size. This study demonstrated the potential use of HPH for enhancing the emulsification process and stability of the natural plant powders for applications with dairy products. PMID:26761891
Application of a water quality model in the White Cart water catchment, Glasgow, UK.
Liu, S; Tucker, P; Mansell, M; Hursthouse, A
2003-03-01
Water quality models of urban systems have previously focused on point source (sewerage system) inputs. Little attention has been given to diffuse inputs and research into diffuse pollution has been largely confined to agriculture sources. This paper reports on new research that is aimed at integrating diffuse inputs into an urban water quality model. An integrated model is introduced that is made up of four modules: hydrology, contaminant point sources, nutrient cycling and leaching. The hydrology module, T&T consists of a TOPMODEL (a TOPography-based hydrological MODEL), which simulates runoff from pervious areas and a two-tank model, which simulates runoff from impervious urban areas. Linked into the two-tank model, the contaminant point source module simulates the overflow from the sewerage system in heavy rain. The widely known SOILN (SOIL Nitrate model) is the basis of nitrogen cycle module. Finally, the leaching module consists of two functions: the production function and the transfer function. The production function is based on SLIM (Solute Leaching Intermediate Model) while the transfer function is based on the 'flushing hypothesis' which postulates a relationship between contaminant concentrations in the receiving water course and the extent to which the catchment is saturated. This paper outlines the modelling methodology and the model structures that have been developed. An application of this model in the White Cart catchment (Glasgow) is also included.
Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning.
Doll, Bradley B; Bath, Kevin G; Daw, Nathaniel D; Frank, Michael J
2016-01-27
Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by "model-free" learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by "model-based" learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes. Research implicates a dopamine-dependent striatal learning mechanism in the former type of choice. Although recent work has indicated that dopamine is also involved in flexible, goal-directed decision-making, it remains unclear whether it also contributes via striatum or via the dopamine-dependent working memory function of prefrontal cortex. We examined genetic indices of dopamine function in these regions and their relation to the two choice strategies. We found that striatal dopamine function related most clearly to the reflexive strategy, as previously shown, and that prefrontal dopamine related most clearly to the flexible strategy. These findings suggest that dissociable brain regions support dissociable choice strategies. Copyright © 2016 the authors 0270-6474/16/361211-12$15.00/0.
ERIC Educational Resources Information Center
Okawa, Yayoi; Nakamura, Shigemi; Kudo, Minako; Ueda, Satoshi
2009-01-01
The purpose of this study is to confirm the working hypothesis on two major models of functioning decline and two corresponding models of rehabilitation program in an older population through detailed interviews with the persons who have functioning declines and on-the-spot observations of key activities on home visits. A total of 542…
Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.
ERIC Educational Resources Information Center
Muraki, Eiji
1999-01-01
Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…
A cost-performance model for ground-based optical communications receiving telescopes
NASA Technical Reports Server (NTRS)
Lesh, J. R.; Robinson, D. L.
1986-01-01
An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.
Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo
2009-01-01
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
ERIC Educational Resources Information Center
Schweizer, Karl
2006-01-01
A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…
Functional Behavioral Assessment: A School Based Model.
ERIC Educational Resources Information Center
Asmus, Jennifer M.; Vollmer, Timothy R.; Borrero, John C.
2002-01-01
This article begins by discussing requirements for functional behavioral assessment under the Individuals with Disabilities Education Act and then describes a comprehensive model for the application of behavior analysis in the schools. The model includes descriptive assessment, functional analysis, and intervention and involves the participation…
A probabilistic framework to infer brain functional connectivity from anatomical connections.
Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel
2011-01-01
We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.
Interpreting experimental data on egg production--applications of dynamic differential equations.
France, J; Lopez, S; Kebreab, E; Dijkstra, J
2013-09-01
This contribution focuses on applying mathematical models based on systems of ordinary first-order differential equations to synthesize and interpret data from egg production experiments. Models based on linear systems of differential equations are contrasted with those based on nonlinear systems. Regression equations arising from analytical solutions to linear compartmental schemes are considered as candidate functions for describing egg production curves, together with aspects of parameter estimation. Extant candidate functions are reviewed, a role for growth functions such as the Gompertz equation suggested, and a function based on a simple new model outlined. Structurally, the new model comprises a single pool with an inflow and an outflow. Compartmental simulation models based on nonlinear systems of differential equations, and thus requiring numerical solution, are next discussed, and aspects of parameter estimation considered. This type of model is illustrated in relation to development and evaluation of a dynamic model of calcium and phosphorus flows in layers. The model consists of 8 state variables representing calcium and phosphorus pools in the crop, stomachs, plasma, and bone. The flow equations are described by Michaelis-Menten or mass action forms. Experiments that measure Ca and P uptake in layers fed different calcium concentrations during shell-forming days are used to evaluate the model. In addition to providing a useful management tool, such a simulation model also provides a means to evaluate feeding strategies aimed at reducing excretion of potential pollutants in poultry manure to the environment.
NASA Astrophysics Data System (ADS)
Javadi, Maryam; Shahrabi, Jamal
2014-03-01
The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or governments to best locate its resources and facilities and efficiently manage resources to ensure that all demand points are covered and all the needs are met. Most of the recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models were presented for location of urban facilities while considering geographical obstacles at the same time. In these models, three new distance functions were proposed. The first function was based on the analysis of shortest path in linear network, which was called SPD function. The other two functions, namely PD and P2D, were based on the algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the visual basic programming language. These models were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distance from demand points to their corresponding facilities. Because of the distance between the demand points and facilities becoming more realistic in the proposed functions, results indicated desired quality of the proposed models in terms of quality of allocating points to centers and logistic cost. Obtained results show promising improvements of the allocation, the logistics costs and the response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of the facility location and the demand allocation. It is noted that the P2D-based model showed better execution time than the SPD-based model. Considering logistic costs, facility location and response time, the P2D-based model was appropriate choice for urban facility location problem considering the geographical obstacles.
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-Fluromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetics parameters for which parametric maps were created with the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
Barycentric parameterizations for isotropic BRDFs.
Stark, Michael M; Arvo, James; Smits, Brian
2005-01-01
A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.
Influence Function Learning in Information Diffusion Networks
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data. PMID:25973445
Johnson, Timothy R; Kuhn, Kristine M
2015-12-01
This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
Isabelle, Boulangeat; Pauline, Philippe; Sylvain, Abdulhak; Roland, Douzet; Luc, Garraud; Sébastien, Lavergne; Sandra, Lavorel; Jérémie, Van Es; Pascal, Vittoz; Wilfried, Thuiller
2013-01-01
The pace of on-going climate change calls for reliable plant biodiversity scenarios. Traditional dynamic vegetation models use plant functional types that are summarized to such an extent that they become meaningless for biodiversity scenarios. Hybrid dynamic vegetation models of intermediate complexity (hybrid-DVMs) have recently been developed to address this issue. These models, at the crossroads between phenomenological and process-based models, are able to involve an intermediate number of well-chosen plant functional groups (PFGs). The challenge is to build meaningful PFGs that are representative of plant biodiversity, and consistent with the parameters and processes of hybrid-DVMs. Here, we propose and test a framework based on few selected traits to define a limited number of PFGs, which are both representative of the diversity (functional and taxonomic) of the flora in the Ecrins National Park, and adapted to hybrid-DVMs. This new classification scheme, together with recent advances in vegetation modeling, constitutes a step forward for mechanistic biodiversity modeling. PMID:24403847
Representing Operational Modes for Situation Awareness
NASA Astrophysics Data System (ADS)
Kirchhübel, Denis; Lind, Morten; Ravn, Ole
2017-01-01
Operating complex plants is an increasingly demanding task for human operators. Diagnosis of and reaction to on-line events requires the interpretation of real time data. Vast amounts of sensor data as well as operational knowledge about the state and design of the plant are necessary to deduct reasonable reactions to abnormal situations. Intelligent computational support tools can make the operator’s task easier, but they require knowledge about the overall system in form of some model. While tools used for fault-tolerant control design based on physical principles and relations are valuable tools for designing robust systems, the models become too complex when considering the interactions on a plant-wide level. The alarm systems meant to support human operators in the diagnosis of the plant-wide situation on the other hand fail regularly in situations where these interactions of systems lead to many related alarms overloading the operator with alarm floods. Functional modelling can provide a middle way to reduce the complexity of plant-wide models by abstracting from physical details to more general functions and behaviours. Based on functional models the propagation of failures through the interconnected systems can be inferred and alarm floods can potentially be reduced to their root-cause. However, the desired behaviour of a complex system changes due to operating procedures that require more than one physical and functional configuration. In this paper a consistent representation of possible configurations is deduced from the analysis of an exemplary start-up procedure by functional models. The proposed interpretation of the modelling concepts simplifies the functional modelling of distinct modes. The analysis further reveals relevant links between the quantitative sensor data and the qualitative perspective of the diagnostics tool based on functional models. This will form the basis for the ongoing development of a novel real-time diagnostics system based on the on-line adaptation of the underlying MFM model.
Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform
Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong
2016-01-01
We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979
Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.
Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong
2016-01-01
We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can be provide the high capacity data manipulation required by the most complex real time models.
NASA Astrophysics Data System (ADS)
Hibbard, Bill
2012-05-01
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
Adding ecosystem function to agent-based land use models
USDA-ARS?s Scientific Manuscript database
The objective of this paper is to examine issues in the inclusion of simulations of ecosystem functions in agent-based models of land use decision-making. The reasons for incorporating these simulations include local interests in land fertility and global interests in carbon sequestration. Biogeoche...
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach of regression modeling that uses aggregated data presented in the form of density functions. Approaches to Improving the reliability of aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss the procedures of data aggregation as a preprocessing stage for subsequent to regression modeling. An important feature of study is demonstration of the way how represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as the frequency distribution. To study its properties density function concept is used. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.
Connectotyping: Model Based Fingerprinting of the Functional Connectome
Miranda-Dominguez, Oscar; Mills, Brian D.; Carpenter, Samuel D.; Grant, Kathleen A.; Kroenke, Christopher D.; Nigg, Joel T.; Fair, Damien A.
2014-01-01
A better characterization of how an individual’s brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called “connectotype”, or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model’s ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach. PMID:25386919
Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model
NASA Astrophysics Data System (ADS)
Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.
2016-02-01
Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in an unitary hypercube space enables multivariable interpolation of N-dimensions. Membership functions act as interpolation kernels, such that choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from literature are presented, a single-input single-output (SISO) system, a MISO and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
Oliveira-Maia, Albino J; Mendonça, Carina; Pessoa, Maria J; Camacho, Marta; Gago, Joaquim
2016-01-01
Within clinical psychiatry, recovery from severe mental illness (SMI) has classically been defined according to symptoms and function (service-based recovery). However, service-users have argued that recovery should be defined as the process of overcoming mental illness, regaining self-control and establishing a meaningful life (customer-based recovery). Here, we aimed to compare customer-based and service-based recovery and clarify their differential relationship with other constructs, namely needs and quality of life. The study was conducted in 101 patients suffering from SMI, recruited from a rural community mental health setting in Portugal. Customer-based recovery and function-related service-based recovery were assessed, respectively, using a shortened version of the Mental Health Recovery Measure (MHRM-20) and the Global Assessment of Functioning score. The Camberwell Assessment of Need scale was used to objectively assess needs, while subjective quality of life was measured with the TL-30s scale. Using multiple linear regression models, we found that the Global Assessment of Functioning score was incrementally predictive of the MHRM-20 score, when added to a model including only clinical and demographic factors, and that this model was further incremented by the score for quality of life. However, in an alternate model using the Global Assessment of Functioning score as the dependent variable, while the MHRM-20 score contributed significantly to the model when added to clinical and demographic factors, the model was not incremented by the score for quality of life. These results suggest that, while a more global concept of recovery from SMI may be assessed using measures for service-based and customer-based recovery, the latter, namely the MHRM-20, also provides information about subjective well-being. Pending confirmation of these findings in other populations, this instrument could thus be useful for comprehensive assessment of recovery and subjective well-being in patients suffering from SMI.
Oliveira-Maia, Albino J.; Mendonça, Carina; Pessoa, Maria J.; Camacho, Marta; Gago, Joaquim
2016-01-01
Within clinical psychiatry, recovery from severe mental illness (SMI) has classically been defined according to symptoms and function (service-based recovery). However, service-users have argued that recovery should be defined as the process of overcoming mental illness, regaining self-control and establishing a meaningful life (customer-based recovery). Here, we aimed to compare customer-based and service-based recovery and clarify their differential relationship with other constructs, namely needs and quality of life. The study was conducted in 101 patients suffering from SMI, recruited from a rural community mental health setting in Portugal. Customer-based recovery and function-related service-based recovery were assessed, respectively, using a shortened version of the Mental Health Recovery Measure (MHRM-20) and the Global Assessment of Functioning score. The Camberwell Assessment of Need scale was used to objectively assess needs, while subjective quality of life was measured with the TL-30s scale. Using multiple linear regression models, we found that the Global Assessment of Functioning score was incrementally predictive of the MHRM-20 score, when added to a model including only clinical and demographic factors, and that this model was further incremented by the score for quality of life. However, in an alternate model using the Global Assessment of Functioning score as the dependent variable, while the MHRM-20 score contributed significantly to the model when added to clinical and demographic factors, the model was not incremented by the score for quality of life. These results suggest that, while a more global concept of recovery from SMI may be assessed using measures for service-based and customer-based recovery, the latter, namely the MHRM-20, also provides information about subjective well-being. Pending confirmation of these findings in other populations, this instrument could thus be useful for comprehensive assessment of recovery and subjective well-being in patients suffering from SMI. PMID:27857698
Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles
2011-06-01
Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
A Bayesian spatial model for neuroimaging data based on biologically informed basis functions.
Huertas, Ismael; Oldehinkel, Marianne; van Oort, Erik S B; Garcia-Solis, David; Mir, Pablo; Beckmann, Christian F; Marquand, Andre F
2017-11-01
The dominant approach to neuroimaging data analysis employs the voxel as the unit of computation. While convenient, voxels lack biological meaning and their size is arbitrarily determined by the resolution of the image. Here, we propose a multivariate spatial model in which neuroimaging data are characterised as a linearly weighted combination of multiscale basis functions which map onto underlying brain nuclei or networks or nuclei. In this model, the elementary building blocks are derived to reflect the functional anatomy of the brain during the resting state. This model is estimated using a Bayesian framework which accurately quantifies uncertainty and automatically finds the most accurate and parsimonious combination of basis functions describing the data. We demonstrate the utility of this framework by predicting quantitative SPECT images of striatal dopamine function and we compare a variety of basis sets including generic isotropic functions, anatomical representations of the striatum derived from structural MRI, and two different soft functional parcellations of the striatum derived from resting-state fMRI (rfMRI). We found that a combination of ∼50 multiscale functional basis functions accurately represented the striatal dopamine activity, and that functional basis functions derived from an advanced parcellation technique known as Instantaneous Connectivity Parcellation (ICP) provided the most parsimonious models of dopamine function. Importantly, functional basis functions derived from resting fMRI were more accurate than both structural and generic basis sets in representing dopamine function in the striatum for a fixed model order. We demonstrate the translational validity of our framework by constructing classification models for discriminating parkinsonian disorders and their subtypes. Here, we show that ICP approach is the only basis set that performs well across all comparisons and performs better overall than the classical voxel-based approach. This spatial model constitutes an elegant alternative to voxel-based approaches in neuroimaging studies; not only are their atoms biologically informed, they are also adaptive to high resolutions, represent high dimensions efficiently, and capture long-range spatial dependencies, which are important and challenging objectives for neuroimaging data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
Local gravity field modeling using spherical radial basis functions and a genetic algorithm
NASA Astrophysics Data System (ADS)
Mahbuby, Hany; Safari, Abdolreza; Foroughi, Ismael
2017-05-01
Spherical Radial Basis Functions (SRBFs) can express the local gravity field model of the Earth if they are parameterized optimally on or below the Bjerhammar sphere. This parameterization is generally defined as the shape of the base functions, their number, center locations, bandwidths, and scale coefficients. The number/location and bandwidths of the base functions are the most important parameters for accurately representing the gravity field; once they are determined, the scale coefficients can then be computed accordingly. In this study, the point-mass kernel, as the simplest shape of SRBFs, is chosen to evaluate the synthesized free-air gravity anomalies over the rough area in Auvergne and GNSS/Leveling points (synthetic height anomalies) are used to validate the results. A two-step automatic approach is proposed to determine the optimum distribution of the base functions. First, the location of the base functions and their bandwidths are found using the genetic algorithm; second, the conjugate gradient least squares method is employed to estimate the scale coefficients. The proposed methodology shows promising results. On the one hand, when using the genetic algorithm, the base functions do not need to be set to a regular grid and they can move according to the roughness of topography. In this way, the models meet the desired accuracy with a low number of base functions. On the other hand, the conjugate gradient method removes the bias between derived quasigeoid heights from the model and from the GNSS/leveling points; this means there is no need for a corrector surface. The numerical test on the area of interest revealed an RMS of 0.48 mGal for the differences between predicted and observed gravity anomalies, and a corresponding 9 cm for the differences in GNSS/leveling points.
NASA Astrophysics Data System (ADS)
Morozov, Andrew; Poggiale, Jean-Christophe; Cordoleani, Flora
2012-09-01
The conventional way of describing grazing in plankton models is based on a zooplankton functional response framework, according to which the consumption rate is computed as the product of a certain function of food (the functional response) and the density/biomass of herbivorous zooplankton. A large amount of literature on experimental feeding reports the existence of a zooplankton functional response in microcosms and small mesocosms, which goes a long way towards explaining the popularity of this framework both in mean-field (e.g. NPZD models) and spatially resolved models. On the other hand, the complex foraging behaviour of zooplankton (feeding cycles) as well as spatial heterogeneity of food and grazer distributions (plankton patchiness) across time and space scales raise questions as to the existence of a functional response of herbivores in vivo. In the current review, we discuss limitations of the ‘classical’ zooplankton functional response and consider possible ways to amend this framework to cope with the complexity of real planktonic ecosystems. Our general conclusion is that although the functional response of herbivores often does not exist in real ecosystems (especially in the form observed in the laboratory), this framework can be rather useful in modelling - but it does need some amendment which can be made based on various techniques of model reduction. We also show that the shape of the functional response depends on the spatial resolution (‘frame’) of the model. We argue that incorporating foraging behaviour and spatial heterogeneity in plankton models would not necessarily require the use of individual based modelling - an approach which is now becoming dominant in the literature. Finally, we list concrete future directions and challenges and emphasize the importance of a closer collaboration between plankton biologists and modellers in order to make further progress towards better descriptions of zooplankton grazing.
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe collection of statistical procedures for data analysis. In this case, outcome variable of interest is time until an event occurs where the time to failure of a specific experimental unit might be censored which can be right, left, interval, and Partly Interval Censored data (PIC). In this paper, analysis of this model was conducted based on parametric Cox model via PIC data. Moreover, several imputation techniques were used, which are: midpoint, left & right point, random, mean, and median. Maximum likelihood estimate was considered to obtain the estimated survival function. These estimations were then compared with the existing model, such as: Turnbull and Cox model based on clinical trial data (breast cancer data), for which it showed the validity of the proposed model. Result of data set indicated that the parametric of Cox model proved to be more superior in terms of estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, based on imputation techniques; the midpoint, random, mean, and median showed better results with respect to the estimation of survival function.
Functional Results-Oriented Healthcare Leadership: A Novel Leadership Model
Al-Touby, Salem Said
2012-01-01
This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that the ideal healthcare leadership emphasizes the outcomes of the patient care more than processes and structures used to deliver such care; and secondly, that the leadership must strive to attain effectiveness of their care provision and not merely targeting the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has namely, the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Result-Oriented healthcare leadership model. The recommended Functional-Results Oriented leadership model embosses the results element on top of the other three elements so that every effort on healthcare leadership is directed towards attaining excellent patient outcomes. PMID:22496933
Functional results-oriented healthcare leadership: a novel leadership model.
Al-Touby, Salem Said
2012-03-01
This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that the ideal healthcare leadership emphasizes the outcomes of the patient care more than processes and structures used to deliver such care; and secondly, that the leadership must strive to attain effectiveness of their care provision and not merely targeting the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has namely, the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Result-Oriented healthcare leadership model. The recommended Functional-Results Oriented leadership model embosses the results element on top of the other three elements so that every effort on healthcare leadership is directed towards attaining excellent patient outcomes.
Dynamics of functional failures and recovery in complex road networks
NASA Astrophysics Data System (ADS)
Zhan, Xianyuan; Ukkusuri, Satish V.; Rao, P. Suresh C.
2017-11-01
We propose a new framework for modeling the evolution of functional failures and recoveries in complex networks, with traffic congestion on road networks as the case study. Differently from conventional approaches, we transform the evolution of functional states into an equivalent dynamic structural process: dual-vertex splitting and coalescing embedded within the original network structure. The proposed model successfully explains traffic congestion and recovery patterns at the city scale based on high-resolution data from two megacities. Numerical analysis shows that certain network structural attributes can amplify or suppress cascading functional failures. Our approach represents a new general framework to model functional failures and recoveries in flow-based networks and allows understanding of the interplay between structure and function for flow-induced failure propagation and recovery.
AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.
Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael
2017-06-15
Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank . gribskov@purdue.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina
2016-01-01
In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model and comparison between models performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second one, on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies water for consumption for 450,000 population. Numerical results show that phytoplankton morpho-functional groups more closely represent each species growth requirements within the group. Each model performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for seasonal dynamics of the phytoplankton community and main biogeochemical variables for a one-year time horizon are presented and compared for both models, showing the functional group model enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon. Copyright © 2015 Elsevier Ltd. All rights reserved.
Unified Modeling Language (UML) for hospital-based cancer registration processes.
Shiki, Naomi; Ohno, Yuko; Fujii, Ayumi; Murata, Taizo; Matsumura, Yasushi
2008-01-01
Hospital-based cancer registry involves complex processing steps that span across multiple departments. In addition, management techniques and registration procedures differ depending on each medical facility. Establishing processes for hospital-based cancer registry requires clarifying specific functions and labor needed. In recent years, the business modeling technique, in which management evaluation is done by clearly spelling out processes and functions, has been applied to business process analysis. However, there are few analytical reports describing the applications of these concepts to medical-related work. In this study, we initially sought to model hospital-based cancer registration processes using the Unified Modeling Language (UML), to clarify functions. The object of this study was the cancer registry of Osaka University Hospital. We organized the hospital-based cancer registration processes based on interview and observational surveys, and produced an As-Is model using activity, use-case, and class diagrams. After drafting every UML model, it was fed-back to practitioners to check its validity and improved. We were able to define the workflow for each department using activity diagrams. In addition, by using use-case diagrams we were able to classify each department within the hospital as a system, and thereby specify the core processes and staff that were responsible for each department. The class diagrams were effective in systematically organizing the information to be used for hospital-based cancer registries. Using UML modeling, hospital-based cancer registration processes were broadly classified into three separate processes, namely, registration tasks, quality control, and filing data. An additional 14 functions were also extracted. Many tasks take place within the hospital-based cancer registry office, but the process of providing information spans across multiple departments. Moreover, additional tasks were required in comparison to using a standardized system because the hospital-based cancer registration system was constructed with the pre-existing computer system in Osaka University Hospital. Difficulty of utilization of useful information for cancer registration processes was shown to increase the task workload. By using UML, we were able to clarify functions and extract the typical processes for a hospital-based cancer registry. Modeling can provide a basis of process analysis for establishment of efficient hospital-based cancer registration processes in each institute.
Characterizing Attention with Predictive Network Models.
Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M
2017-04-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors.
Ma, Xiaolei; Luan, Sen; Du, Bowen; Yu, Bin
2017-09-21
Issues of missing data have become increasingly serious with the rapid increase in usage of traffic sensors. Analyses of the Beijing ring expressway have showed that up to 50% of microwave sensors pose missing values. The imputation of missing traffic data must be urgently solved although a precise solution that cannot be easily achieved due to the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods only rely on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies due to Gaussian consumption. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential to impute missing data in large-scale transportation networks.
Profile-Based LC-MS Data Alignment—A Bayesian Approach
Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2014-01-01
A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872
NASA Astrophysics Data System (ADS)
Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang
2017-10-01
The purpose of the present work is to study the buckling problem with plate/shell topology optimization of orthotropic material. A model of buckling topology optimization is established based on the independent, continuous, and mapping method, which considers structural mass as objective and buckling critical loads as constraints. Firstly, composite exponential function (CEF) and power function (PF) as filter functions are introduced to recognize the element mass, the element stiffness matrix, and the element geometric stiffness matrix. The filter functions of the orthotropic material stiffness are deduced. Then these filter functions are put into buckling topology optimization of a differential equation to analyze the design sensitivity. Furthermore, the buckling constraints are approximately expressed as explicit functions with respect to the design variables based on the first-order Taylor expansion. The objective function is standardized based on the second-order Taylor expansion. Therefore, the optimization model is translated into a quadratic program. Finally, the dual sequence quadratic programming (DSQP) algorithm and the global convergence method of moving asymptotes algorithm with two different filter functions (CEF and PF) are applied to solve the optimal model. Three numerical results show that DSQP&CEF has the best performance in the view of structural mass and discretion.
Modelling protein functional domains in signal transduction using Maude
NASA Technical Reports Server (NTRS)
Sriram, M. G.
2003-01-01
Modelling of protein-protein interactions in signal transduction is receiving increased attention in computational biology. This paper describes recent research in the application of Maude, a symbolic language founded on rewriting logic, to the modelling of functional domains within signalling proteins. Protein functional domains (PFDs) are a critical focus of modern signal transduction research. In general, Maude models can simulate biological signalling networks and produce specific testable hypotheses at various levels of abstraction. Developing symbolic models of signalling proteins containing functional domains is important because of the potential to generate analyses of complex signalling networks based on structure-function relationships.
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-09-04
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e. g. by extending it to cover dynamic FBC models).
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-06-01
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e. g. by extending it to cover dynamic FBC models).
State-space model with deep learning for functional dynamics estimation in resting-state fMRI.
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2016-04-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
State-space model with deep learning for functional dynamics estimation in resting-state fMRI
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2017-01-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. PMID:26774612
García-Betances, Rebeca I.; Cabrera-Umpiérrez, María Fernanda; Ottaviano, Manuel; Pastorino, Matteo; Arredondo, María T.
2016-01-01
Despite the speedy evolution of Information and Computer Technology (ICT), and the growing recognition of the importance of the concept of universal design in all domains of daily living, mainstream ICT-based product designers and developers still work without any truly structured tools, guidance or support to effectively adapt their products and services to users’ real needs. This paper presents the approach used to define and evaluate parametric cognitive models that describe interaction and usage of ICT by people with aging- and disability-derived functional impairments. A multisensorial training platform was used to train, based on real user measurements in real conditions, the virtual parameterized user models that act as subjects of the test-bed during all stages of simulated disabilities-friendly ICT-based products design. An analytical study was carried out to identify the relevant cognitive functions involved, together with their corresponding parameters as related to aging- and disability-derived functional impairments. Evaluation of the final cognitive virtual user models in a real application has confirmed that the use of these models produce concrete valuable benefits to the design and testing process of accessible ICT-based applications and services. Parameterization of cognitive virtual user models allows incorporating cognitive and perceptual aspects during the design process. PMID:26907296
Mulgrew, Kate E; Tiggemann, Marika
2018-01-01
We examined whether shifting young women's ( N =322) attention toward functionality components of media-portrayed idealized images would protect against body dissatisfaction. Image type was manipulated via images of models in either an objectified body-as-object form or active body-as-process form; viewing focus was manipulated via questions about the appearance or functionality of the models. Social comparison was examined as a moderator. Negative outcomes were most pronounced within the process-related conditions (body-as-process images or functionality viewing focus) and for women who reported greater functionality comparison. Results suggest that functionality-based depictions, reflections, and comparisons may actually produce worse outcomes than those based on appearance.
Tree-Based Global Model Tests for Polytomous Rasch Models
ERIC Educational Resources Information Center
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using principle of maximum entropy, probability density function and cumulative distribution function of suspension concentration is derived. A new and more general cumulative distribution function for the flow domain is proposed which includes several specific other models of CDF reported in literature. This general form of cumulative distribution function also helps to derive the Rouse equation. The entropy based approach helps to estimate model parameters using suspension data of sediment concentration which shows the advantage of using entropy theory. Finally model parameters in the entropy based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
NASA Technical Reports Server (NTRS)
Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
This paper describes the core framework used to implement a Goal-Function Tree (GFT) based systems engineering process using the Systems Modeling Language. It defines a set of principles built upon by the theoretical approach described in the InfoTech 2013 ISHM paper titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management" presented by Dr. Stephen B. Johnson. Using the SysML language, the principles in this paper describe the expansion of the SysML language as a baseline in order to: hierarchically describe a system, describe that system functionally within success space, and allocate detection mechanisms to success functions for system protection.
Structure refinement of membrane proteins via molecular dynamics simulations.
Dutagaci, Bercem; Heo, Lim; Feig, Michael
2018-07-01
A refinement protocol based on physics-based techniques established for water soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and aqueous solvent but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions suggesting that differences in internal packing is more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.
On the Relationship between Variational Level Set-Based and SOM-Based Active Contours
Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad
2015-01-01
Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can handle also topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property and overcoming some drawbacks of other ACMs, such as trapping into local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs, SOM-based ACMs, and their relationship and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736
Tang, Dalin; Yang, Chun; Geva, Tal; Gaudette, Glenn; del Nido, Pedro J.
2011-01-01
Multi-physics right and left ventricle (RV/LV) fluid-structure interaction (FSI) models were introduced to perform mechanical stress analysis and evaluate the effect of patch materials on RV function. The FSI models included three different patch materials (Dacron scaffold, treated pericardium, and contracting myocardium), two-layer construction, fiber orientation, and active anisotropic material properties. The models were constructed based on cardiac magnetic resonance (CMR) images acquired from a patient with severe RV dilatation and solved by ADINA. Our results indicate that the patch model with contracting myocardium leads to decreased stress level in the patch area, improved RV function and patch area contractility. PMID:21765559
Assessment of parametric uncertainty for groundwater reactive transport modeling,
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
NASA Astrophysics Data System (ADS)
Smith, S. L.; Chen, B.; Vallina, S. M.
2017-12-01
Biodiversity-Ecosystem Function (BEF) relationships, which are most commonly quantified in terms of productivity or total biomass yield, are known to depend on the timescale of the experiment or field study, both for terrestrial plants and phytoplankton, which have each been widely studied as model ecosystems. Although many BEF relationships are positive (i.e., increasing biodiversity enhances function), in some cases there is an optimal intermediate diversity level (i.e., a uni-modal relationship), and in other cases productivity decreases with certain measures of biodiversity. These differences in BEF relationships cannot be reconciled merely by differences in the timescale of experiments. We will present results from simulation experiments applying recently developed trait-based models of phytoplankton communities and ecosystems, using the `adaptive dynamics' framework to represent continuous distributions of size and other key functional traits. Controlled simulation experiments were conducted with different levels of phytoplankton size-diversity, which through trait-size correlations implicitly represents functional-diversity. One recent study applied a theoretical box model for idealized simulations at different frequencies of disturbance. This revealed how the shapes of BEF relationships depend systematically on the frequency of disturbance and associated nutrient supply. We will also present more recent results obtained using a trait-based plankton ecosystem model embedded in a three-dimensional ocean model applied to the North Pacific. This reveals essentially the same pattern in a spatially explicit model with more realistic environmental forcing. In the relatively more variable subarctic, productivity tends to increase with the size (and hence functional) diversity of phytoplankton, whereas productivity tends to decrease slightly with increasing size-diversity in the relatively calm subtropics. Continuous trait-based models can capture essential features of BEF relationships, while requiring far fewer calculations compared to typical plankton diversity models that explicitly simulate a great many idealized species.
The functional IME: A linkage of expertise across the disability continuum.
Clifton, David W
2006-01-01
Disability assessment remains a significant challenge especially in welfare systems like workers' compensation and disability insurance. Many of today's managed care strategies do not impact on the seminal issue of return to gainful employment. Employers, insurers, attorneys and case managers routinely request independent medical examinations (IMEs) as a means of determining degree of disability, functional limitations, work restrictions and "estimated" physical capacities. However, this approach is limited because physicians are not trained in the functional model of disability assessment. IMEs address pathology and impairments which represent a portion of the disability continuum described by the World Health Organization, Nagi, Guccione and others [e.g. pathology-impairment-disability-handicap]. Functional capacity evaluations or FCEs are often performed by physical and occupational therapists who are trained in a function-based model of disability assessment. Unlike an IME physician who completes "Estimated Physical Capacities", therapists measure actual physical functioning. The value of both IMEs and FCEs can be enhanced through a "functional IME" that combines both models; medical-based examination and a function-based disability evaluation. This combination enhances the assessment of the relationship of pathology to impairment and impairment to disability status especially, in musculoskeletal disorders which tend to drive costs in workers' compensation.
Quantum random oracle model for quantum digital signature
NASA Astrophysics Data System (ADS)
Shang, Tao; Lei, Qi; Liu, Jianwei
2016-10-01
The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functionsmore » is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.« less
Da, Yang
2015-12-18
The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements such as all genes of the genome can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the limit number of effects is 2q - 1 for additive effects and is the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. Genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly use haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. The multi-allelic genetic partition fills a theoretical gap in genetic partition by providing general formulations for partitioning multi-allelic genotypic values and provides a haplotype method based on the quantitative genetics model towards the utilization of functional and structural genomic information for genomic prediction and estimation.
Assessment of Differential Item Functioning in Testlet-Based Items Using the Rasch Testlet Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Wilson, Mark
2005-01-01
This study presents a procedure for detecting differential item functioning (DIF) for dichotomous and polytomous items in testlet-based tests, whereby DIF is taken into account by adding DIF parameters into the Rasch testlet model. Simulations were conducted to assess recovery of the DIF and other parameters. Two independent variables, test type…
The Feasibility of Quality Function Deployment (QFD) as an Assessment and Quality Assurance Model
ERIC Educational Resources Information Center
Matorera, D.; Fraser, W. J.
2016-01-01
Business schools are globally often seen as structured, purpose-driven, multi-sector and multi-perspective organisations. This article is based on the response of a graduate school to an innovative industrial Quality Function Deployment-based model (QFD), which was to be adopted initially in a Master's degree programme for quality assurance…
Verifying the functional ability of microstructured surfaces by model-based testing
NASA Astrophysics Data System (ADS)
Hartmann, Wito; Weckenmann, Albert
2014-09-01
Micro- and nanotechnology enables the use of new product features such as improved light absorption, self-cleaning or protection, which are based, on the one hand, on the size of functional nanostructures and the other hand, on material-specific properties. With the need to reliably measure progressively smaller geometric features, coordinate and surface-measuring instruments have been refined and now allow high-resolution topography and structure measurements down to the sub-nanometre range. Nevertheless, in many cases it is not possible to make a clear statement about the functional ability of the workpiece or its topography because conventional concepts of dimensioning and tolerancing are solely geometry oriented and standardized surface parameters are not sufficient to consider interaction with non-geometric parameters, which are dominant for functions such as sliding, wetting, sealing and optical reflection. To verify the functional ability of microstructured surfaces, a method was developed based on a parameterized mathematical-physical model of the function. From this model, function-related properties can be identified and geometric parameters can be derived, which may be different for the manufacturing and verification processes. With this method it is possible to optimize the definition of the shape of the workpiece regarding the intended function by applying theoretical and experimental knowledge, as well as modelling and simulation. Advantages of this approach will be discussed and demonstrated by the example of a microstructured inking roll.
A Method for the Alignment of Heterogeneous Macromolecules from Electron Microscopy
Shatsky, Maxim; Hall, Richard J.; Brenner, Steven E.; Glaeser, Robert M.
2009-01-01
We propose a feature-based image alignment method for single-particle electron microscopy that is able to accommodate various similarity scoring functions while efficiently sampling the two-dimensional transformational space. We use this image alignment method to evaluate the performance of a scoring function that is based on the Mutual Information (MI) of two images rather than one that is based on the cross-correlation function. We show that alignment using MI for the scoring function has far less model-dependent bias than is found with cross-correlation based alignment. We also demonstrate that MI improves the alignment of some types of heterogeneous data, provided that the signal to noise ratio is relatively high. These results indicate, therefore, that use of MI as the scoring function is well suited for the alignment of class-averages computed from single particle images. Our method is tested on data from three model structures and one real dataset. PMID:19166941
Liu, Hesen; Zhu, Lin; Pan, Zhuohong; ...
2015-09-14
One of the main drawbacks of the existing oscillation damping controllers that are designed based on offline dynamic models is adaptivity to the power system operating condition. With the increasing availability of wide-area measurements and the rapid development of system identification techniques, it is possible to identify a measurement-based transfer function model online that can be used to tune the oscillation damping controller. Such a model could capture all dominant oscillation modes for adaptive and coordinated oscillation damping control. our paper describes a comprehensive approach to identify a low-order transfer function model of a power system using a multi-input multi-outputmore » (MIMO) autoregressive moving average exogenous (ARMAX) model. This methodology consists of five steps: 1) input selection; 2) output selection; 3) identification trigger; 4) model estimation; and 5) model validation. The proposed method is validated by using ambient data and ring-down data in the 16-machine 68-bus Northeast Power Coordinating Council system. Our results demonstrate that the measurement-based model using MIMO ARMAX can capture all the dominant oscillation modes. Compared with the MIMO subspace state space model, the MIMO ARMAX model has equivalent accuracy but lower order and improved computational efficiency. The proposed model can be applied for adaptive and coordinated oscillation damping control.« less
Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey
NASA Astrophysics Data System (ADS)
Caditz, David M.
2017-12-01
Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68
Transposons As Tools for Functional Genomics in Vertebrate Models.
Kawakami, Koichi; Largaespada, David A; Ivics, Zoltán
2017-11-01
Genetic tools and mutagenesis strategies based on transposable elements are currently under development with a vision to link primary DNA sequence information to gene functions in vertebrate models. By virtue of their inherent capacity to insert into DNA, transposons can be developed into powerful tools for chromosomal manipulations. Transposon-based forward mutagenesis screens have numerous advantages including high throughput, easy identification of mutated alleles, and providing insight into genetic networks and pathways based on phenotypes. For example, the Sleeping Beauty transposon has become highly instrumental to induce tumors in experimental animals in a tissue-specific manner with the aim of uncovering the genetic basis of diverse cancers. Here, we describe a battery of mutagenic cassettes that can be applied in conjunction with transposon vectors to mutagenize genes, and highlight versatile experimental strategies for the generation of engineered chromosomes for loss-of-function as well as gain-of-function mutagenesis for functional gene annotation in vertebrate models, including zebrafish, mice, and rats. Copyright © 2017 Elsevier Ltd. All rights reserved.
Intent inferencing with a model-based operator's associate
NASA Technical Reports Server (NTRS)
Jones, Patricia M.; Mitchell, Christine M.; Rubin, Kenneth S.
1989-01-01
A portion of the Operator Function Model Expert System (OFMspert) research project is described. OFMspert is an architecture for an intelligent operator's associate or assistant that can aid the human operator of a complex, dynamic system. Intelligent aiding requires both understanding and control. The understanding (i.e., intent inferencing) ability of the operator's associate is discussed. Understanding or intent inferencing requires a model of the human operator; the usefulness of an intelligent aid depends directly on the fidelity and completeness of its underlying model. The model chosen for this research is the operator function model (OFM). The OFM represents operator functions, subfunctions, tasks, and actions as a heterarchic-hierarchic network of finite state automata, where the arcs in the network are system triggering events. The OFM provides the structure for intent inferencing in that operator functions and subfunctions correspond to likely operator goals and plans. A blackboard system similar to that of Human Associative Processor (HASP) is proposed as the implementation of intent inferencing function. This system postulates operator intentions based on current system state and attempts to interpret observed operator actions in light of these hypothesized intentions.
AN INDIVIDUAL-BASED MODEL OF COTTUS POPULATION DYNAMICS
We explored population dynamics of a southern Appalachian population of Cottus bairdi using a spatially-explicit, individual-based model. The model follows daily growth, mortality, and spawning of individuals as a function of flow and temperature. We modeled movement of juveniles...
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on the Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and Gaussian point spread function are expressed as the same continuous GRBF model, thus image degradation is simplified as convolution of two continuous Gaussian functions, and image deconvolution is converted to calculate the weighted coefficients of two-dimensional control points. Compared with Wiener filter and Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. In order to overcome such a defect of long-time computing, the method of graphic processing unit multithreading or increasing space interval of control points is adopted, respectively, to speed up the implementation of GRBF method. The experiments show that based on the continuous GRBF model, the image deconvolution can be efficiently implemented by the method, which also has a considerable reference value for the study of three-dimensional microscopic image deconvolution.
A micro-computer-based system to compute magnetic variation
NASA Technical Reports Server (NTRS)
Kaul, Rajan
1987-01-01
A mathematical model of magnetic variation in the continental United States was implemented in the Ohio University Loran-C receiver. The model is based on a least squares fit of a polynomial function. The implementation on the microprocessor based Loran-C receiver is possible with the help of a math chip which performs 32 bit floating point mathematical operations. A Peripheral Interface Adapter is used to communicate between the 6502 based microcomputer and the 9511 math chip. The implementation provides magnetic variation data to the pilot as a function of latitude and longitude. The model and the real time implementation in the receiver are described.
Testing the Digital Thread in Support of Model-Based Manufacturing and Inspection
Hedberg, Thomas; Lubell, Joshua; Fischer, Lyle; Maggiano, Larry; Feeney, Allison Barnard
2016-01-01
A number of manufacturing companies have reported anecdotal evidence describing the benefits of Model-Based Enterprise (MBE). Based on this evidence, major players in industry have embraced a vision to deploy MBE. In our view, the best chance of realizing this vision is the creation of a single “digital thread.” Under MBE, there exists a Model-Based Definition (MBD), created by the Engineering function, that downstream functions reuse to complete Model-Based Manufacturing and Model-Based Inspection activities. The ensemble of data that enables the combination of model-based definition, manufacturing, and inspection defines this digital thread. Such a digital thread would enable real-time design and analysis, collaborative process-flow development, automated artifact creation, and full-process traceability in a seamless real-time collaborative development among project participants. This paper documents the strengths and weaknesses in the current, industry strategies for implementing MBE. It also identifies gaps in the transition and/or exchange of data between various manufacturing processes. Lastly, this paper presents measured results from a study of model-based processes compared to drawing-based processes and provides evidence to support the anecdotal evidence and vision made by industry. PMID:27325911
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from thousand or more receivers is used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
Finite-fault source inversion using adjoint methods in 3-D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-07-01
Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak-slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from thousand or more receivers is used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
Secure and Resilient Functional Modeling for Navy Cyber-Physical Systems
2017-05-24
Functional Modeling Compiler (SCCT) FM Compiler and Key Performance Indicators (KPI) May 2018 Pending. Model Management Backbone (SCCT) MMB Demonstration...implement the agent- based distributed runtime. - KPIs for single/multicore controllers and temporal/spatial domains. - Integration of the model management ...Distributed Runtime (UCI) Not started. Model Management Backbone (SCCT) Not started. Siemens Corporation Corporate Technology Unrestricted
Using a Functional Model to Develop a Mathematical Formula
ERIC Educational Resources Information Center
Otto, Charlotte A.; Everett, Susan A.; Luera, Gail R.
2008-01-01
The unifying theme of models was incorporated into a required Science Capstone course for pre-service elementary teachers based on national standards in science and mathematics. A model of a teeter-totter was selected for use as an example of a functional model for gathering data as well as a visual model of a mathematical equation for developing…
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists nor any database allowing programmatic access. ProbOnto, is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Prestat, Emmanuel; David, Maude M.; Hultman, Jenni; ...
2014-09-26
A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classify gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. ‘profiles’) were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associatedmore » functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/.« less
NASA Astrophysics Data System (ADS)
Echavarria, E.; Tomiyama, T.; van Bussel, G. J. W.
2007-07-01
The objective of this on-going research is to develop a design methodology to increase the availability for offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on these existing functionalities provided by the components. The possible solutions can range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, changing parameters or control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR), and on a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce stoppage rate, failure events, maintenance visits, and to maintain energy output possibly at reduced rate until the next scheduled maintenance.
Working-memory capacity protects model-based learning from stress.
Otto, A Ross; Raio, Candace M; Chiang, Alice; Phelps, Elizabeth A; Daw, Nathaniel D
2013-12-24
Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive-dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response--believed to have detrimental effects on prefrontal cortex function--should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that stress response attenuates the contribution of model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress.
Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee; Vasta, Veer N.; White, Jeffery
2015-01-01
In this paper, the implementation of two wall function models in the Reynolds averaged Navier-Stokes (RANS) computational uid dynamics (CFD) code FUN3D is described. FUN3D is a node centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-! turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian are also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a at plate boundary layer, a subsonic di user, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good comparisons with data were achieved.
Studies on combined model based on functional objectives of large scale complex engineering
NASA Astrophysics Data System (ADS)
Yuting, Wang; Jingchun, Feng; Jiabao, Sun
2018-03-01
As various functions were included in large scale complex engineering, and each function would be conducted with completion of one or more projects, combined projects affecting their functions should be located. Based on the types of project portfolio, the relationship of projects and their functional objectives were analyzed. On that premise, portfolio projects-technics based on their functional objectives were introduced, then we studied and raised the principles of portfolio projects-technics based on the functional objectives of projects. In addition, The processes of combined projects were also constructed. With the help of portfolio projects-technics based on the functional objectives of projects, our research findings laid a good foundation for management of large scale complex engineering portfolio management.
Paul, Keryn I; Roxburgh, Stephen H; Chave, Jerome; England, Jacqueline R; Zerihun, Ayalsew; Specht, Alison; Lewis, Tom; Bennett, Lauren T; Baker, Thomas G; Adams, Mark A; Huxtable, Dan; Montagu, Kelvin D; Falster, Daniel S; Feller, Mike; Sochacki, Stan; Ritson, Peter; Bastin, Gary; Bartle, John; Wildy, Dan; Hobbs, Trevor; Larmour, John; Waterworth, Rob; Stewart, Hugh T L; Jonson, Justin; Forrester, David I; Applegate, Grahame; Mendham, Daniel; Bradford, Matt; O'Grady, Anthony; Green, Daryl; Sudmeyer, Rob; Rance, Stan J; Turner, John; Barton, Craig; Wenk, Elizabeth H; Grove, Tim; Attiwill, Peter M; Pinkard, Elizabeth; Butler, Don; Brooksbank, Kim; Spencer, Beren; Snowdon, Peter; O'Brien, Nick; Battaglia, Michael; Cameron, David M; Hamilton, Steve; McAuthur, Geoff; Sinclair, Jenny
2016-06-01
Accurate ground-based estimation of the carbon stored in terrestrial ecosystems is critical to quantifying the global carbon budget. Allometric models provide cost-effective methods for biomass prediction. But do such models vary with ecoregion or plant functional type? We compiled 15 054 measurements of individual tree or shrub biomass from across Australia to examine the generality of allometric models for above-ground biomass prediction. This provided a robust case study because Australia includes ecoregions ranging from arid shrublands to tropical rainforests, and has a rich history of biomass research, particularly in planted forests. Regardless of ecoregion, for five broad categories of plant functional type (shrubs; multistemmed trees; trees of the genus Eucalyptus and closely related genera; other trees of high wood density; and other trees of low wood density), relationships between biomass and stem diameter were generic. Simple power-law models explained 84-95% of the variation in biomass, with little improvement in model performance when other plant variables (height, bole wood density), or site characteristics (climate, age, management) were included. Predictions of stand-based biomass from allometric models of varying levels of generalization (species-specific, plant functional type) were validated using whole-plot harvest data from 17 contrasting stands (range: 9-356 Mg ha(-1) ). Losses in efficiency of prediction were <1% if generalized models were used in place of species-specific models. Furthermore, application of generalized multispecies models did not introduce significant bias in biomass prediction in 92% of the 53 species tested. Further, overall efficiency of stand-level biomass prediction was 99%, with a mean absolute prediction error of only 13%. Hence, for cost-effective prediction of biomass across a wide range of stands, we recommend use of generic allometric models based on plant functional types. Development of new species-specific models is only warranted when gains in accuracy of stand-based predictions are relatively high (e.g. high-value monocultures). © 2015 John Wiley & Sons Ltd.
Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors
Ma, Xiaolei; Du, Bowen; Yu, Bin
2017-01-01
Issues of missing data have become increasingly serious with the rapid increase in usage of traffic sensors. Analyses of the Beijing ring expressway have showed that up to 50% of microwave sensors pose missing values. The imputation of missing traffic data must be urgently solved although a precise solution that cannot be easily achieved due to the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods only rely on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies due to Gaussian consumption. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential to impute missing data in large-scale transportation networks. PMID:28934164
Image-Based Predictive Modeling of Heart Mechanics.
Wang, V Y; Nielsen, P M F; Nash, M P
2015-01-01
Personalized biophysical modeling of the heart is a useful approach for noninvasively analyzing and predicting in vivo cardiac mechanics. Three main developments support this style of analysis: state-of-the-art cardiac imaging technologies, modern computational infrastructure, and advanced mathematical modeling techniques. In vivo measurements of cardiac structure and function can be integrated using sophisticated computational methods to investigate mechanisms of myocardial function and dysfunction, and can aid in clinical diagnosis and developing personalized treatment. In this article, we review the state-of-the-art in cardiac imaging modalities, model-based interpretation of 3D images of cardiac structure and function, and recent advances in modeling that allow personalized predictions of heart mechanics. We discuss how using such image-based modeling frameworks can increase the understanding of the fundamental biophysics behind cardiac mechanics, and assist with diagnosis, surgical guidance, and treatment planning. Addressing the challenges in this field will require a coordinated effort from both the clinical-imaging and modeling communities. We also discuss future directions that can be taken to bridge the gap between basic science and clinical translation.
Connectivity-based neurofeedback: Dynamic causal modeling for real-time fMRI☆
Koush, Yury; Rosa, Maria Joao; Robineau, Fabien; Heinen, Klaartje; W. Rieger, Sebastian; Weiskopf, Nikolaus; Vuilleumier, Patrik; Van De Ville, Dimitri; Scharnowski, Frank
2013-01-01
Neurofeedback based on real-time fMRI is an emerging technique that can be used to train voluntary control of brain activity. Such brain training has been shown to lead to behavioral effects that are specific to the functional role of the targeted brain area. However, real-time fMRI-based neurofeedback so far was limited to mainly training localized brain activity within a region of interest. Here, we overcome this limitation by presenting near real-time dynamic causal modeling in order to provide feedback information based on connectivity between brain areas rather than activity within a single brain area. Using a visual–spatial attention paradigm, we show that participants can voluntarily control a feedback signal that is based on the Bayesian model comparison between two predefined model alternatives, i.e. the connectivity between left visual cortex and left parietal cortex vs. the connectivity between right visual cortex and right parietal cortex. Our new approach thus allows for training voluntary control over specific functional brain networks. Because most mental functions and most neurological disorders are associated with network activity rather than with activity in a single brain region, this novel approach is an important methodological innovation in order to more directly target functionally relevant brain networks. PMID:23668967
Does Functional Neuroimaging Solve the Questions of Neurolinguistics?
ERIC Educational Resources Information Center
Sidtis, Diana Van Lancker
2006-01-01
Neurolinguistic research has been engaged in evaluating models of language using measures from brain structure and function, and/or in investigating brain structure and function with respect to language representation using proposed models of language. While the aphasiological strategy, which classifies aphasias based on performance modality and a…
Insights into DNA-mediated interparticle interactions from a coarse-grained model
NASA Astrophysics Data System (ADS)
Ding, Yajun; Mittal, Jeetain
2014-11-01
DNA-functionalized particles have great potential for the design of complex self-assembled materials. The major hurdle in realizing crystal structures from DNA-functionalized particles is expected to be kinetic barriers that trap the system in metastable amorphous states. Therefore, it is vital to explore the molecular details of particle assembly processes in order to understand the underlying mechanisms. Molecular simulations based on coarse-grained models can provide a convenient route to explore these details. Most of the currently available coarse-grained models of DNA-functionalized particles ignore key chemical and structural details of DNA behavior. These models therefore are limited in scope for studying experimental phenomena. In this paper, we present a new coarse-grained model of DNA-functionalized particles which incorporates some of the desired features of DNA behavior. The coarse-grained DNA model used here provides explicit DNA representation (at the nucleotide level) and complementary interactions between Watson-Crick base pairs, which lead to the formation of single-stranded hairpin and double-stranded DNA. Aggregation between multiple complementary strands is also prevented in our model. We study interactions between two DNA-functionalized particles as a function of DNA grafting density, lengths of the hybridizing and non-hybridizing parts of DNA, and temperature. The calculated free energies as a function of pair distance between particles qualitatively resemble experimental measurements of DNA-mediated pair interactions.
Adams, J; Adler, C; Ahammed, Z; Allgower, C; Amonett, J; Anderson, B D; Anderson, M; Averichev, G S; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellwied, R; Berger, J; Bichsel, H; Billmeier, A; Bland, L C; Blyth, C O; Bonner, B E; Boucham, A; Brandin, A; Bravar, A; Cadman, R V; Caines, H; Calderónde la Barca Sánchez, M; Cardenas, A; Carroll, J; Castillo, J; Castro, M; Cebra, D; Chaloupka, P; Chattopadhyay, S; Chen, Y; Chernenko, S P; Cherney, M; Chikanian, A; Choi, B; Christie, W; Coffin, J P; Cormier, T M; Corral, M M; Cramer, J G; Crawford, H J; Derevschikov, A A; Didenko, L; Dietel, T; Draper, J E; Dunin, V B; Dunlop, J C; Eckardt, V; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Fachini, P; Faine, V; Faivre, J; Fatemi, R; Filimonov, K; Finch, E; Fisyak, Y; Flierl, D; Foley, K J; Fu, J; Gagliardi, C A; Gagunashvili, N; Gans, J; Gaudichet, L; Germain, M; Geurts, F; Ghazikhanian, V; Grachov, O; Grigoriev, V; Guedon, M; Guertin, S M; Gushin, E; Hallman, T J; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Heppelmann, S; Herston, T; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Horsley, M; Huang, H Z; Humanic, T J; Igo, G; Ishihara, A; Ivanshin, Yu I; Jacobs, P; Jacobs, W W; Janik, M; Johnson, I; Jones, P G; Judd, E G; Kaneta, M; Kaplan, M; Keane, D; Kiryluk, J; Kisiel, A; Klay, J; Klein, S R; Klyachko, A; Kollegger, T; Konstantinov, A S; Kopytine, M; Kotchenda, L; Kovalenko, A D; Kramer, M; Kravtsov, P; Krueger, K; Kuhn, C; Kulikov, A I; Kunde, G J; Kunz, C L; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Lansdell, C P; Lasiuk, B; Laue, F; Lauret, J; Lebedev, A; Lednický, R; Leontiev, V M; LeVine, M J; Li, Q; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, L; Liu, Z; Liu, Q J; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Ludlam, T; Lynn, D; Ma, J; Magestro, D; Majka, R; Margetis, S; Markert, C; Martin, L; Marx, J; Matis, H S; Matulenko, Yu A; McShane, T S; Meissner, F; Melnick, Yu; Meschanin, A; Messer, M; Miller, M L; Milosevich, Z; Minaev, N G; Mitchell, J; Moore, C F; Morozov, V; de Moura, M M; Munhoz, M G; Nelson, J M; Nevski, P; Nikitin, V A; Nogach, L V; Norman, B; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Paic, G; Pandey, S U; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Perevoztchikov, V; Peryt, W; Petrov, V A; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potrebenikova, E; Prindle, D; Pruneau, C; Putschke, J; Rai, G; Rakness, G; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevski, O V; Romero, J L; Rose, A; Roy, C; Rykov, V; Sakrejda, I; Salur, S; Sandweiss, J; Savin, I; Schambach, J; Scharenberg, R P; Schmitz, N; Schroeder, L S; Schüttauf, A; Schweda, K; Seger, J; Seliverstov, D; Seyboth, P; Shahaliev, E; Shestermanov, K E; Shimanskii, S S; Simon, F; Skoro, G; Smirnov, N; Snellings, R; Sorensen, P; Sowinski, J; Spinka, H M; Srivastava, B; Stephenson, E J; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Struck, C; Suaide, A A P; Sugarbaker, E; Suire, C; Sumbera, M; Surrow, B; Symons, T J M; de Toledo, A Szanto; Szarwas, P; Tai, A; Takahashi, J; Tang, A H; Thein, D; Thomas, J H; Thompson, M; Tikhomirov, V; Tokarev, M; Tonjes, M B; Trainor, T A; Trentalange, S; Tribble, R E; Trofimov, V; Tsai, O; Ullrich, T; Underwood, D G; Van Buren, G; Vander Molen, A M; Vasilevski, I M; Vasiliev, A N; Vigdor, S E; Voloshin, S A; Wang, F; Ward, H; Watson, J W; Wells, R; Westfall, G D; Whitten, C; Wieman, H; Willson, R; Wissink, S W; Witt, R; Wood, J; Xu, N; Xu, Z; Yakutin, A E; Yamamoto, E; Yang, J; Yepes, P; Yurevich, V I; Zanevski, Y V; Zborovský, I; Zhang, H; Zhang, W M; Zoulkarneev, R; Zubarev, A N
2003-05-02
The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged particle pairs and identified charged pion pairs in Au+Au collisions at the square root of SNN = 130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization.
Rank-preserving regression: a more robust rank regression model against outliers.
Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M
2016-08-30
Mean-based semi-parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides more robust estimates over generalized estimating equations against outliers. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on the functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data, but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Requirements Modeling with the Aspect-oriented User Requirements Notation (AoURN): A Case Study
NASA Astrophysics Data System (ADS)
Mussbacher, Gunter; Amyot, Daniel; Araújo, João; Moreira, Ana
The User Requirements Notation (URN) is a recent ITU-T standard that supports requirements engineering activities. The Aspect-oriented URN (AoURN) adds aspect-oriented concepts to URN, creating a unified framework that allows for scenario-based, goal-oriented, and aspect-oriented modeling. AoURN is applied to the car crash crisis management system (CCCMS), modeling its functional and non-functional requirements (NFRs). AoURN generally models all use cases, NFRs, and stakeholders as individual concerns and provides general guidelines for concern identification. AoURN handles interactions between concerns, capturing their dependencies and conflicts as well as the resolutions. We present a qualitative comparison of aspect-oriented techniques for scenario-based and goal-oriented requirements engineering. An evaluation carried out based on the metrics adapted from literature and a task-based evaluation suggest that AoURN models are more scalable than URN models and exhibit better modularity, reusability, and maintainability.
Calibration and prediction of removal function in magnetorheological finishing.
Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng
2010-01-20
A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in a MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in a MRF process.
Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok
2017-03-01
Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Resting-State Functional Connectivity Predicts Cognitive Impairment Related to Alzheimer's Disease.
Lin, Qi; Rosenberg, Monica D; Yoo, Kwangsun; Hsu, Tiffany W; O'Connell, Thomas P; Chun, Marvin M
2018-01-01
Resting-state functional connectivity (rs-FC) is a promising neuromarker for cognitive decline in aging population, based on its ability to reveal functional differences associated with cognitive impairment across individuals, and because rs-fMRI may be less taxing for participants than task-based fMRI or neuropsychological tests. Here, we employ an approach that uses rs-FC to predict the Alzheimer's Disease Assessment Scale (11 items; ADAS11) scores, which measure overall cognitive functioning, in novel individuals. We applied this technique, connectome-based predictive modeling, to a heterogeneous sample of 59 subjects from the Alzheimer's Disease Neuroimaging Initiative, including normal aging, mild cognitive impairment, and AD subjects. First, we built linear regression models to predict ADAS11 scores from rs-FC measured with Pearson's r correlation. The positive network model tested with leave-one-out cross validation (LOOCV) significantly predicted individual differences in cognitive function from rs-FC. In a second analysis, we considered other functional connectivity features, accordance and discordance, which disentangle the correlation and anticorrelation components of activity timecourses between brain areas. Using partial least square regression and LOOCV, we again built models to successfully predict ADAS11 scores in novel individuals. Our study provides promising evidence that rs-FC can reveal cognitive impairment in an aging population, although more development is needed for clinical application.
Vanreusel, Wouter; Maes, Dirk; Van Dyck, Hans
2007-02-01
Numerous models for predicting species distribution have been developed for conservation purposes. Most of them make use of environmental data (e.g., climate, topography, land use) at a coarse grid resolution (often kilometres). Such approaches are useful for conservation policy issues including reserve-network selection. The efficiency of predictive models for species distribution is usually tested on the area for which they were developed. Although highly interesting from the point of view of conservation efficiency, transferability of such models to independent areas is still under debate. We tested the transferability of habitat-based predictive distribution models for two regionally threatened butterflies, the green hairstreak (Callophrys rubi) and the grayling (Hipparchia semele), within and among three nature reserves in northeastern Belgium. We built predictive models based on spatially detailed maps of area-wide distribution and density of ecological resources. We used resources directly related to ecological functions (host plants, nectar sources, shelter, microclimate) rather than environmental surrogate variables. We obtained models that performed well with few resource variables. All models were transferable--although to different degrees--among the independent areas within the same broad geographical region. We argue that habitat models based on essential functional resources could transfer better in space than models that use indirect environmental variables. Because functional variables can easily be interpreted and even be directly affected by terrain managers, these models can be useful tools to guide species-adapted reserve management.
ERIC Educational Resources Information Center
Kunnavatana, S. Shanun; Bloom, Sarah E.; Samaha, Andrew L.; Lignugaris/Kraft, Benjamin; Dayton, Elizabeth; Harris, Shannon K.
2013-01-01
Functional behavioral assessments are commonly used in school settings to assess and develop interventions for problem behavior. The trial-based functional analysis is an approach that teachers can use in their classrooms to identify the function of problem behavior. The current study evaluates the effectiveness of a modified pyramidal training…
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
Mehri, Mehran
2014-07-01
The optimization algorithm of a model may have significant effects on the final optimal values of nutrient requirements in poultry enterprises. In poultry nutrition, the optimal values of dietary essential nutrients are very important for feed formulation to optimize profit through minimizing feed cost and maximizing bird performance. This study was conducted to introduce a novel multi-objective algorithm, desirability function, for optimization the bird response models based on response surface methodology (RSM) and artificial neural network (ANN). The growth databases on the central composite design (CCD) were used to construct the RSM and ANN models and optimal values for 3 essential amino acids including lysine, methionine, and threonine in broiler chicks have been reevaluated using the desirable function in both analytical approaches from 3 to 16 d of age. Multi-objective optimization results showed that the most desirable function was obtained for ANN-based model (D = 0.99) where the optimal levels of digestible lysine (dLys), digestible methionine (dMet), and digestible threonine (dThr) for maximum desirability were 13.2, 5.0, and 8.3 g/kg of diet, respectively. However, the optimal levels of dLys, dMet, and dThr in the RSM-based model were estimated at 11.2, 5.4, and 7.6 g/kg of diet, respectively. This research documented that the application of ANN in the broiler chicken model along with a multi-objective optimization algorithm such as desirability function could be a useful tool for optimization of dietary amino acids in fractional factorial experiments, in which the use of the global desirability function may be able to overcome the underestimations of dietary amino acids resulting from the RSM model. © 2014 Poultry Science Association Inc.
Frequency response function (FRF) based updating of a laser spot welded structure
NASA Astrophysics Data System (ADS)
Zin, M. S. Mohd; Rani, M. N. Abdul; Yunus, M. A.; Sani, M. S. M.; Wan Iskandar Mirza, W. I. I.; Mat Isa, A. A.
2018-04-01
The objective of this paper is to present frequency response function (FRF) based updating as a method for matching the finite element (FE) model of a laser spot welded structure with a physical test structure. The FE model of the welded structure was developed using CQUAD4 and CWELD element connectors, and NASTRAN was used to calculate the natural frequencies, mode shapes and FRF. Minimization of the discrepancies between the finite element and experimental FRFs was carried out using the exceptional numerical capability of NASTRAN Sol 200. The experimental work was performed under free-free boundary conditions using LMS SCADAS. Avast improvement in the finite element FRF was achieved using the frequency response function (FRF) based updating with two different objective functions proposed.
AgBase: supporting functional modeling in agricultural organisms
McCarthy, Fiona M.; Gresham, Cathy R.; Buza, Teresia J.; Chouvarine, Philippe; Pillai, Lakshmi R.; Kumar, Ranjit; Ozkan, Seval; Wang, Hui; Manda, Prashanti; Arick, Tony; Bridges, Susan M.; Burgess, Shane C.
2011-01-01
AgBase (http://www.agbase.msstate.edu/) provides resources to facilitate modeling of functional genomics data and structural and functional annotation of agriculturally important animal, plant, microbe and parasite genomes. The website is redesigned to improve accessibility and ease of use, including improved search capabilities. Expanded capabilities include new dedicated pages for horse, cat, dog, cotton, rice and soybean. We currently provide 590 240 Gene Ontology (GO) annotations to 105 454 gene products in 64 different species, including GO annotations linked to transcripts represented on agricultural microarrays. For many of these arrays, this provides the only functional annotation available. GO annotations are available for download and we provide comprehensive, species-specific GO annotation files for 18 different organisms. The tools available at AgBase have been expanded and several existing tools improved based upon user feedback. One of seven new tools available at AgBase, GOModeler, supports hypothesis testing from functional genomics data. We host several associated databases and provide genome browsers for three agricultural pathogens. Moreover, we provide comprehensive training resources (including worked examples and tutorials) via links to Educational Resources at the AgBase website. PMID:21075795
ERIC Educational Resources Information Center
Majeika, Caitlyn E.; Walder, Jessica P.; Hubbard, Jessica P.; Steeb, Kelly M.; Ferris, Geoffrey J.; Oakes, Wendy P.; Lane, Kathleen Lynne
2011-01-01
A comprehensive, integrated, three-tiered model (CI3T) of prevention is a framework for proactively meeting students' academic, behavioral, and social skills. At the tertiary (Tier 3) level of prevention, functional-assessment based interventions (FABIs) may be used to identify, develop, and implement supports based on the function, or purpose, of…
Delay functions in trip assignment for transport planning process
NASA Astrophysics Data System (ADS)
Leong, Lee Vien
2017-10-01
In transportation planning process, volume-delay and turn-penalty functions are the functions needed in traffic assignment to determine travel time on road network links. Volume-delay function is the delay function describing speed-flow relationship while turn-penalty function is the delay function associated to making a turn at intersection. The volume-delay function used in this study is the revised Bureau of Public Roads (BPR) function with the constant parameters, α and β values of 0.8298 and 3.361 while the turn-penalty functions for signalized intersection were developed based on uniform, random and overflow delay models. Parameters such as green time, cycle time and saturation flow were used in the development of turn-penalty functions. In order to assess the accuracy of the delay functions, road network in areas of Nibong Tebal, Penang and Parit Buntar, Perak was developed and modelled using transportation demand forecasting software. In order to calibrate the models, phase times and traffic volumes at fourteen signalised intersections within the study area were collected during morning and evening peak hours. The prediction of assigned volumes using the revised BPR function and the developed turn-penalty functions show close agreement to actual recorded traffic volume with the lowest percentage of accuracy, 80.08% and the highest, 93.04% for the morning peak model. As for the evening peak model, they were 75.59% and 95.33% respectively for lowest and highest percentage of accuracy. As for the yield left-turn lanes, the lowest percentage of accuracy obtained for the morning and evening peak models were 60.94% and 69.74% respectively while the highest percentage of accuracy obtained for both models were 100%. Therefore, can be concluded that the development and utilisation of delay functions based on local road conditions are important as localised delay functions can produce better estimate of link travel times and hence better planning for future scenarios.
Dislich, Claudia; Hettig, Elisabeth; Salecker, Jan; Heinonen, Johannes; Lay, Jann; Meyer, Katrin M; Wiegand, Kerstin; Tarigan, Suria
2018-01-01
Land-use changes have dramatically transformed tropical landscapes. We describe an ecological-economic land-use change model as an integrated, exploratory tool used to analyze how tropical land-use change affects ecological and socio-economic functions. The model analysis seeks to determine what kind of landscape mosaic can improve the ensemble of ecosystem functioning, biodiversity, and economic benefit based on the synergies and trade-offs that we have to account for. More specifically, (1) how do specific ecosystem functions, such as carbon storage, and economic functions, such as household consumption, relate to each other? (2) How do external factors, such as the output prices of crops, affect these relationships? (3) How do these relationships change when production inefficiency differs between smallholder farmers and learning is incorporated? We initialize the ecological-economic model with artificially generated land-use maps parameterized to our study region. The economic sub-model simulates smallholder land-use management decisions based on a profit maximization assumption. Each household determines factor inputs for all household fields and decides on land-use change based on available wealth. The ecological sub-model includes a simple account of carbon sequestration in above-ground and below-ground vegetation. We demonstrate model capabilities with results on household consumption and carbon sequestration from different output price and farming efficiency scenarios. The overall results reveal complex interactions between the economic and ecological spheres. For instance, model scenarios with heterogeneous crop-specific household productivity reveal a comparatively high inertia of land-use change. Our model analysis even shows such an increased temporal stability in landscape composition and carbon stocks of the agricultural area under dynamic price trends. These findings underline the utility of ecological-economic models, such as ours, to act as exploratory tools which can advance our understanding of the mechanisms underlying the trade-offs and synergies of ecological and economic functions in tropical landscapes.
Dislich, Claudia; Hettig, Elisabeth; Heinonen, Johannes; Lay, Jann; Meyer, Katrin M.; Wiegand, Kerstin; Tarigan, Suria
2018-01-01
Land-use changes have dramatically transformed tropical landscapes. We describe an ecological-economic land-use change model as an integrated, exploratory tool used to analyze how tropical land-use change affects ecological and socio-economic functions. The model analysis seeks to determine what kind of landscape mosaic can improve the ensemble of ecosystem functioning, biodiversity, and economic benefit based on the synergies and trade-offs that we have to account for. More specifically, (1) how do specific ecosystem functions, such as carbon storage, and economic functions, such as household consumption, relate to each other? (2) How do external factors, such as the output prices of crops, affect these relationships? (3) How do these relationships change when production inefficiency differs between smallholder farmers and learning is incorporated? We initialize the ecological-economic model with artificially generated land-use maps parameterized to our study region. The economic sub-model simulates smallholder land-use management decisions based on a profit maximization assumption. Each household determines factor inputs for all household fields and decides on land-use change based on available wealth. The ecological sub-model includes a simple account of carbon sequestration in above-ground and below-ground vegetation. We demonstrate model capabilities with results on household consumption and carbon sequestration from different output price and farming efficiency scenarios. The overall results reveal complex interactions between the economic and ecological spheres. For instance, model scenarios with heterogeneous crop-specific household productivity reveal a comparatively high inertia of land-use change. Our model analysis even shows such an increased temporal stability in landscape composition and carbon stocks of the agricultural area under dynamic price trends. These findings underline the utility of ecological-economic models, such as ours, to act as exploratory tools which can advance our understanding of the mechanisms underlying the trade-offs and synergies of ecological and economic functions in tropical landscapes. PMID:29351290
Bhongsatiern, Jiraganya; Stockmann, Chris; Yu, Tian; Constance, Jonathan E; Moorthy, Ganesh; Spigarelli, Michael G; Desai, Pankaj B; Sherwin, Catherine M T
2016-05-01
Growth and maturational changes have been identified as significant covariates in describing variability in clearance of renally excreted drugs such as vancomycin. Because of immaturity of clearance mechanisms, quantification of renal function in neonates is of importance. Several serum creatinine (SCr)-based renal function descriptors have been developed in adults and children, but none are selectively derived for neonates. This review summarizes development of the neonatal kidney and discusses assessment of the renal function regarding estimation of glomerular filtration rate using renal function descriptors. Furthermore, identification of the renal function descriptors that best describe the variability of vancomycin clearance was performed in a sample study of a septic neonatal cohort. Population pharmacokinetic models were developed applying a combination of age-weight, renal function descriptors, or SCr alone. In addition to age and weight, SCr or renal function descriptors significantly reduced variability of vancomycin clearance. The population pharmacokinetic models with Léger and modified Schwartz formulas were selected as the optimal final models, although the other renal function descriptors and SCr provided reasonably good fit to the data, suggesting further evaluation of the final models using external data sets and cross validation. The present study supports incorporation of renal function descriptors in the estimation of vancomycin clearance in neonates. © 2015, The American College of Clinical Pharmacology.
An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional
NASA Astrophysics Data System (ADS)
van Gennip, Yves
2018-06-01
We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ -converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
A hypothetical universal model of cerebellar function: reconsideration of the current dogma.
Magal, Ari
2013-10-01
The cerebellum is commonly studied in the context of the classical eyeblink conditioning model, which attributes an adaptive motor function to cerebellar learning processes. This model of cerebellar function has quite a few shortcomings and may in fact be somewhat deficient in explaining the myriad functions attributed to the cerebellum, functions ranging from motor sequencing to emotion and cognition. The involvement of the cerebellum in these motor and non-motor functions has been demonstrated in both animals and humans in electrophysiological, behavioral, tracing, functional neuroimaging, and PET studies, as well as in clinical human case studies. A closer look at the cerebellum's evolutionary origin provides a clue to its underlying purpose as a tool which evolved to aid predation rather than as a tool for protection. Based upon this evidence, an alternative model of cerebellar function is proposed, one which might more comprehensively account both for the cerebellum's involvement in a myriad of motor, affective, and cognitive functions and for the relative simplicity and ubiquitous repetitiveness of its circuitry. This alternative model suggests that the cerebellum has the ability to detect coincidences of events, be they sensory, motor, affective, or cognitive in nature, and, after having learned to associate these, it can then trigger (or "mirror") these events after having temporally adjusted their onset based on positive/negative reinforcement. The model also provides for the cerebellum's direction of the proper and uninterrupted sequence of events resulting from this learning through the inhibition of efferent structures (as demonstrated in our lab).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Rational functional representation of flap noise spectra including correction for reflection effects
NASA Technical Reports Server (NTRS)
Miles, J. H.
1974-01-01
A rational function is presented for the acoustic spectra generated by deflection of engine exhaust jets for under-the-wing and over-the-wing versions of externally blown flaps. The functional representation is intended to provide a means for compact storage of data and for data analysis. The expressions are based on Fourier transform functions for the Strouhal normalized pressure spectral density, and on a correction for reflection effects based on Thomas' (1969) N-independent-source model extended by use of a reflected ray transfer function. Curve fit comparisons are presented for blown-flap data taken from turbofan engine tests and from large-scale cold-flow model tests. Application of the rational function to scrubbing noise theory is also indicated.
PDF-based heterogeneous multiscale filtration model.
Gong, Jian; Rutland, Christopher J
2015-04-21
Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
NASA Astrophysics Data System (ADS)
Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.
2014-01-01
Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggest potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
Tactile Teaching: Exploring Protein Structure/Function Using Physical Models
ERIC Educational Resources Information Center
Herman, Tim; Morris, Jennifer; Colton, Shannon; Batiza, Ann; Patrick, Michael; Franzen, Margaret; Goodsell, David S.
2006-01-01
The technology now exists to construct physical models of proteins based on atomic coordinates of solved structures. We review here our recent experiences in using physical models to teach concepts of protein structure and function at both the high school and the undergraduate levels. At the high school level, physical models are used in a…
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modem design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be- a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
Descriptive vs. mechanistic network models in plant development in the post-genomic era.
Davila-Velderrain, J; Martinez-Garcia, J C; Alvarez-Buylla, E R
2015-01-01
Network modeling is now a widespread practice in systems biology, as well as in integrative genomics, and it constitutes a rich and diverse scientific research field. A conceptually clear understanding of the reasoning behind the main existing modeling approaches, and their associated technical terminologies, is required to avoid confusions and accelerate the transition towards an undeniable necessary more quantitative, multidisciplinary approach to biology. Herein, we focus on two main network-based modeling approaches that are commonly used depending on the information available and the intended goals: inference-based methods and system dynamics approaches. As far as data-based network inference methods are concerned, they enable the discovery of potential functional influences among molecular components. On the other hand, experimentally grounded network dynamical models have been shown to be perfectly suited for the mechanistic study of developmental processes. How do these two perspectives relate to each other? In this chapter, we describe and compare both approaches and then apply them to a given specific developmental module. Along with the step-by-step practical implementation of each approach, we also focus on discussing their respective goals, utility, assumptions, and associated limitations. We use the gene regulatory network (GRN) involved in Arabidopsis thaliana Root Stem Cell Niche patterning as our illustrative example. We show that descriptive models based on functional genomics data can provide important background information consistent with experimentally supported functional relationships integrated in mechanistic GRN models. The rationale of analysis and modeling can be applied to any other well-characterized functional developmental module in multicellular organisms, like plants and animals.
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
A micro-computer based system to compute magnetic variation
NASA Technical Reports Server (NTRS)
Kaul, R.
1984-01-01
A mathematical model of magnetic variation in the continental United States (COT48) was implemented in the Ohio University LORAN C receiver. The model is based on a least squares fit of a polynomial function. The implementation on the microprocessor based LORAN C receiver is possible with the help of a math chip, Am9511 which performs 32 bit floating point mathematical operations. A Peripheral Interface Adapter (M6520) is used to communicate between the 6502 based micro-computer and the 9511 math chip. The implementation provides magnetic variation data to the pilot as a function of latitude and longitude. The model and the real time implementation in the receiver are described.
Modeling the microstructure of surface by applying BRDF function
NASA Astrophysics Data System (ADS)
Plachta, Kamil
2017-06-01
The paper presents the modeling of surface microstructure using a bidirectional reflectance distribution function. This function contains full information about the reflectance properties of the flat surfaces - it is possible to determine the share of the specular, directional and diffuse components in the reflected luminous stream. The software is based on the authorial algorithm that uses selected elements of this function models, which allows to determine the share of each component. Basing on obtained data, the surface microstructure of each material can be modeled, which allows to determine the properties of this materials. The concentrator directs the reflected solar radiation onto the photovoltaic surface, increasing, at the same time, the value of the incident luminous stream. The paper presents an analysis of selected materials that can be used to construct the solar concentrator system. The use of concentrator increases the power output of the photovoltaic system by up to 17% as compared to the standard solution.
Exploring the critical quality attributes and models of smart homes.
Ted Luor, Tainyi; Lu, Hsi-Peng; Yu, Hueiju; Lu, Yinshiu
2015-12-01
Research on smart homes has significantly increased in recent years owing to their considerably improved affordability and simplicity. However, the challenge is that people have different needs (or attitudes toward smart homes), and provision should be tailored to individuals. A few studies have classified the functions of smart homes. Therefore, the Kano model is first adopted as a theoretical base to explore whether the functional classifications of smart homes are attractive or necessary, or both. Second, three models and test user attitudes toward three function types of smart homes are proposed. Based on the Kano model, the principal results, namely, two "Attractive Quality" and nine "Indifferent Quality" items, are found. Verification of the hypotheses also indicates that the entertainment, security, and automation functions are significantly correlated with the variables "perceive useful" and "attitude." Cost consideration is negatively correlated with attitudes toward entertainment and automation. Results suggest that smart home providers should survey user needs for their product instead of merely producing smart homes based on the design of the builder or engineer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Miller, J; Fuller, M; Vinod, S; Suchowerska, N; Holloway, L
2009-06-01
A Clinician's discrimination between radiation therapy treatment plans is traditionally a subjective process, based on experience and existing protocols. A more objective and quantitative approach to distinguish between treatment plans is to use radiobiological or dosimetric objective functions, based on radiobiological or dosimetric models. The efficacy of models is not well understood, nor is the correlation of the rank of plans resulting from the use of models compared to the traditional subjective approach. One such radiobiological model is the Normal Tissue Complication Probability (NTCP). Dosimetric models or indicators are more accepted in clinical practice. In this study, three radiobiological models, Lyman NTCP, critical volume NTCP and relative seriality NTCP, and three dosimetric models, Mean Lung Dose (MLD) and the Lung volumes irradiated at 10Gy (V10) and 20Gy (V20), were used to rank a series of treatment plans using, harm to normal (Lung) tissue as the objective criterion. None of the models considered in this study showed consistent correlation with the Radiation Oncologists plan ranking. If radiobiological or dosimetric models are to be used in objective functions for lung treatments, based on this study it is recommended that the Lyman NTCP model be used because it will provide most consistency with traditional clinician ranking.
Site-occupation embedding theory using Bethe ansatz local density approximations
NASA Astrophysics Data System (ADS)
Senjean, Bruno; Nakatani, Naoki; Tsuchiizu, Masahisa; Fromager, Emmanuel
2018-06-01
Site-occupation embedding theory (SOET) is an alternative formulation of density functional theory (DFT) for model Hamiltonians where the fully interacting Hubbard problem is mapped, in principle exactly, onto an impurity-interacting (rather than a noninteracting) one. It provides a rigorous framework for combining wave-function (or Green function)-based methods with DFT. In this work, exact expressions for the per-site energy and double occupation of the uniform Hubbard model are derived in the context of SOET. As readily seen from these derivations, the so-called bath contribution to the per-site correlation energy is, in addition to the latter, the key density functional quantity to model in SOET. Various approximations based on Bethe ansatz and perturbative solutions to the Hubbard and single-impurity Anderson models are constructed and tested on a one-dimensional ring. The self-consistent calculation of the embedded impurity wave function has been performed with the density-matrix renormalization group method. It has been shown that promising results are obtained in specific regimes of correlation and density. Possible further developments have been proposed in order to provide reliable embedding functionals and potentials.
Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce
2014-01-01
Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...
Accident/Mishap Investigation System
NASA Technical Reports Server (NTRS)
Keller, Richard; Wolfe, Shawn; Gawdiak, Yuri; Carvalho, Robert; Panontin, Tina; Williams, James; Sturken, Ian
2007-01-01
InvestigationOrganizer (IO) is a Web-based collaborative information system that integrates the generic functionality of a database, a document repository, a semantic hypermedia browser, and a rule-based inference system with specialized modeling and visualization functionality to support accident/mishap investigation teams. This accessible, online structure is designed to support investigators by allowing them to make explicit, shared, and meaningful links among evidence, causal models, findings, and recommendations.
Haptics-based dynamic implicit solid modeling.
Hua, Jing; Qin, Hong
2004-01-01
This paper systematically presents a novel, interactive solid modeling framework, Haptics-based Dynamic Implicit Solid Modeling, which is founded upon volumetric implicit functions and powerful physics-based modeling. In particular, we augment our modeling framework with a haptic mechanism in order to take advantage of additional realism associated with a 3D haptic interface. Our dynamic implicit solids are semi-algebraic sets of volumetric implicit functions and are governed by the principles of dynamics, hence responding to sculpting forces in a natural and predictable manner. In order to directly manipulate existing volumetric data sets as well as point clouds, we develop a hierarchical fitting algorithm to reconstruct and represent discrete data sets using our continuous implicit functions, which permit users to further design and edit those existing 3D models in real-time using a large variety of haptic and geometric toolkits, and visualize their interactive deformation at arbitrary resolution. The additional geometric and physical constraints afford more sophisticated control of the dynamic implicit solids. The versatility of our dynamic implicit modeling enables the user to easily modify both the geometry and the topology of modeled objects, while the inherent physical properties can offer an intuitive haptic interface for direct manipulation with force feedback.
Regional TEC dynamic modeling based on Slepian functions
NASA Astrophysics Data System (ADS)
Sharifi, Mohammad Ali; Farzaneh, Saeed
2015-09-01
In this work, the three-dimensional state of the ionosphere has been estimated by integrating the spherical Slepian harmonic function and Kalman filter. The spherical Slepian harmonic functions have been used to establish the observation equations because of their properties in local modeling. Spherical harmonics are poor choices to represent or analyze geophysical processes without perfect global coverage but the Slepian functions afford spatial and spectral selectivity. The Kalman filter has been utilized to perform the parameter estimation due to its suitable properties in processing the GPS measurements in the real-time mode. The proposed model has been applied to the real data obtained from the ground-based GPS observations across some portion of the IGS network in Europe. Results have been compared with the estimated TECs by the CODE, ESA, IGS centers and IRI-2012 model. The results indicated that the proposed model which takes advantage of the Slepian basis and Kalman filter is efficient and allows for the generation of the near-real-time regional TEC map.
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping
2005-11-01
A new method is proposed to eliminate the varying background and noise simultaneously for multivariate calibration of Fourier transform near infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum of fruit sugar content measurement. The performances of wavelet based threshold de-noising approaches via different combinations of wavelet base functions were compared. Three families of wavelet base function (Daubechies, Symlets and Coiflets) were applied to estimate the performance of those wavelet bases and threshold selection rules by a series of experiments. The experimental results show that the best de-noising performance is reached via the combinations of Daubechies 4 or Symlet 4 wavelet base function. Based on the optimization parameter, wavelet regression models for sugar content of pear were also developed and result in a smaller prediction error than a traditional Partial Least Squares Regression (PLSR) mode.
Schad, Daniel J.; Jünger, Elisabeth; Sebold, Miriam; Garbusow, Maria; Bernhardt, Nadine; Javadi, Amir-Homayoun; Zimmermann, Ulrich S.; Smolka, Michael N.; Heinz, Andreas; Rapp, Michael A.; Huys, Quentin J. M.
2014-01-01
Theories of decision-making and its neural substrates have long assumed the existence of two distinct and competing valuation systems, variously described as goal-directed vs. habitual, or, more recently and based on statistical arguments, as model-free vs. model-based reinforcement-learning. Though both have been shown to control choices, the cognitive abilities associated with these systems are under ongoing investigation. Here we examine the link to cognitive abilities, and find that individual differences in processing speed covary with a shift from model-free to model-based choice control in the presence of above-average working memory function. This suggests shared cognitive and neural processes; provides a bridge between literatures on intelligence and valuation; and may guide the development of process models of different valuation components. Furthermore, it provides a rationale for individual differences in the tendency to deploy valuation systems, which may be important for understanding the manifold neuropsychiatric diseases associated with malfunctions of valuation. PMID:25566131
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and a time-to-event data. Ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on ordinal partial mixed model and an accelerated failure time model is used, to account for the repeated ordered response and time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. Monte Carlo expectation maximization method-based algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, a data on muscular dystrophy among children is used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
LI, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Bases selection is particularly importantmore » for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users’ experience. Also, for sequential data assimilation problems, the bases kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.« less
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
A robust and fast active contour model for image segmentation with intensity inhomogeneity
NASA Astrophysics Data System (ADS)
Ding, Keyan; Weng, Guirong
2018-04-01
In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing the local image intensities fitting functions before the evolution of curve, the proposed model can effectively segment images with intensity inhomogeneity. And the computation cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has a higher segmentation efficiency compared to some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.
Study on Capturing Functional Requirements of the New Product Based on Evolution
NASA Astrophysics Data System (ADS)
Liu, Fang; Song, Liya; Bai, Zhonghang; Zhang, Peng
In order to exist in an increasingly competitive global marketplace, it is important for corporations to forecast the evolutionary direction of new products rapidly and effectively. Most products in the world are developed based on the design of existing products. In the product design, capturing functional requirements is a key step. Function is continuously evolving, which is driven by the evolution of needs and technologies. So the functional requirements of new product can be forecasted based on the functions of existing product. Eight laws of function evolution are put forward in this paper. The process model of capturing the functional requirements of new product based on function evolution is proposed. An example illustrates the design process.
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-05-01
AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-01-01
Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demostrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310
Dutagaci, Bercem; Wittayanarakul, Kitiyaporn; Mori, Takaharu; Feig, Michael
2017-06-13
A scoring protocol based on implicit membrane-based scoring functions and a new protocol for optimizing the positioning of proteins inside the membrane was evaluated for its capacity to discriminate native-like states from misfolded decoys. A decoy set previously established by the Baker lab (Proteins: Struct., Funct., Genet. 2006, 62, 1010-1025) was used along with a second set that was generated to cover higher resolution models. The Implicit Membrane Model 1 (IMM1), IMM1 model with CHARMM 36 parameters (IMM1-p36), generalized Born with simple switching (GBSW), and heterogeneous dielectric generalized Born versions 2 (HDGBv2) and 3 (HDGBv3) were tested along with the new HDGB van der Waals (HDGBvdW) model that adds implicit van der Waals contributions to the solvation free energy. For comparison, scores were also calculated with the distance-scaled finite ideal-gas reference (DFIRE) scoring function. Z-scores for native state discrimination, energy vs root-mean-square deviation (RMSD) correlations, and the ability to select the most native-like structures as top-scoring decoys were evaluated to assess the performance of the scoring functions. Ranking of the decoys in the Baker set that were relatively far from the native state was challenging and dominated largely by packing interactions that were captured best by DFIRE with less benefit of the implicit membrane-based models. Accounting for the membrane environment was much more important in the second decoy set where especially the HDGB-based scoring functions performed very well in ranking decoys and providing significant correlations between scores and RMSD, which shows promise for improving membrane protein structure prediction and refinement applications. The new membrane structure scoring protocol was implemented in the MEMScore web server ( http://feiglab.org/memscore ).
On parameters identification of computational models of vibrations during quiet standing of humans
NASA Astrophysics Data System (ADS)
Barauskas, R.; Krušinskienė, R.
2007-12-01
Vibration of the center of pressure (COP) of human body on the base of support during quiet standing is a very popular clinical research, which provides useful information about the physical and health condition of an individual. In this work, vibrations of COP of a human body in forward-backward direction during still standing are generated using controlled inverted pendulum (CIP) model with a single degree of freedom (dof) supplied with proportional, integral and differential (PID) controller, which represents the behavior of the central neural system of a human body and excited by cumulative disturbance vibration, generated within the body due to breathing or any other physical condition. The identification of the model and disturbance parameters is an important stage while creating a close-to-reality computational model able to evaluate features of disturbance. The aim of this study is to present the CIP model parameters identification approach based on the information captured by time series of the COP signal. The identification procedure is based on an error function minimization. Error function is formulated in terms of time laws of computed and experimentally measured COP vibrations. As an alternative, error function is formulated in terms of the stabilogram diffusion function (SDF). The minimization of error functions is carried out by employing methods based on sensitivity functions of the error with respect to model and excitation parameters. The sensitivity functions are obtained by using the variational techniques. The inverse dynamic problem approach has been employed in order to establish the properties of the disturbance time laws ensuring the satisfactory coincidence of measured and computed COP vibration laws. The main difficulty of the investigated problem is encountered during the model validation stage. Generally, neither the PID controller parameter set nor the disturbance time law are known in advance. In this work, an error function formulated in terms of time derivative of disturbance torque has been proposed in order to obtain PID controller parameters, as well as the reference time law of the disturbance. The disturbance torque is calculated from experimental data using the inverse dynamic approach. Experiments presented in this study revealed that vibrations of disturbance torque and PID controller parameters identified by the method may be qualified as feasible in humans. Presented approach may be easily extended to structural models with any number of dof or higher structural complexity.
ESTABLISHING VERBAL REPERTOIRES IN CHILDREN WITH AUTISM USING FUNCTION-BASED VIDEO MODELING
Plavnick, Joshua B; Ferreri, Summer J
2011-01-01
Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative responses in the natural environment. An alternating treatments design was used during Experiment 1 to identify the variables that were functionally related to gestures emitted by 4 children with autism. Results showed that gestures functioned as requests for attention for 1 participant and as requests for assistance to obtain a preferred item or event for 3 participants. Video modeling was used during Experiment 2 to compare mand acquisition when video sequences were either related or unrelated to the results of the functional analysis. An alternating treatments within multiple probe design showed that participants repeatedly acquired mands during the function-based condition but not during the nonfunction-based condition. In addition, generalization of the response was observed during the former but not the latter condition. PMID:22219527
Establishing verbal repertoires in children with autism using function-based video modeling.
Plavnick, Joshua B; Ferreri, Summer J
2011-01-01
Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative responses in the natural environment. An alternating treatments design was used during Experiment 1 to identify the variables that were functionally related to gestures emitted by 4 children with autism. Results showed that gestures functioned as requests for attention for 1 participant and as requests for assistance to obtain a preferred item or event for 3 participants. Video modeling was used during Experiment 2 to compare mand acquisition when video sequences were either related or unrelated to the results of the functional analysis. An alternating treatments within multiple probe design showed that participants repeatedly acquired mands during the function-based condition but not during the nonfunction-based condition. In addition, generalization of the response was observed during the former but not the latter condition.
Raver, C. Cybele; Blair, Clancy; Willoughby, Michael
2017-01-01
In a predominantly low-income, population-based longitudinal sample of 1,259 children followed from birth, results suggest that chronic exposure to poverty and the strains of financial hardship were each uniquely predictive of young children’s performance on measures of executive functioning. Results suggest that temperament-based vulnerability serves as a statistical moderator of the link between poverty-related risk and children’s executive functioning. Implications for models of ecology and biology in shaping the development of children’s self-regulation are discussed. PMID:22563675
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R(2) = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Still, C. J.; Griffith, D.; Edwards, E.; Forrestel, E.; Lehmann, C.; Anderson, M.; Craine, J.; Pau, S.; Osborne, C.
2014-12-01
Variation in plant species traits, such as photosynthetic and hydraulic properties, can indicate vulnerability or resilience to climate change, and feed back to broad-scale spatial and temporal patterns in biogeochemistry, demographics, and biogeography. Yet, predicting how vegetation will respond to future environmental changes is severely limited by the inability of our models to represent species-level trait variation in processes and properties, as current generation process-based models are mostly based on the generalized and abstracted concept of plant functional types (PFTs) which were originally developed for hydrological modeling. For example, there are close to 11,000 grass species, but most vegetation models have only a single C4 grass and one or two C3 grass PFTs. However, while species trait databases are expanding rapidly, they have been produced mostly from unstructured research, with a focus on easily researched traits that are not necessarily the most important for determining plant function. Additionally, implementing realistic species-level trait variation in models is challenging. Combining related and ecologically similar species in these models might ameliorate this limitation. Here we argue for an intermediate, lineage-based approach to PFTs, which draws upon recent advances in gene sequencing and phylogenetic modeling, and where trait complex variations and anatomical features are constrained by a shared evolutionary history. We provide an example of this approach with grass lineages that vary in photosynthetic pathway (C3 or C4) and other functional and structural traits. We use machine learning approaches and geospatial databases to infer the most important environmental controls and climate niche variation for the distribution of grass lineages, and utilize a rapidly expanding grass trait database to demonstrate examples of lineage-based grass PFTs. For example, grasses in the Andropogoneae are typically tall species that dominate wet and seasonally burned ecosystems, whereas Chloridoideae grasses are associated with semi-arid regions. These two C4 lineages are expected to respond quite differently to climate change, but are often modelled as a single PFT.
NASA Astrophysics Data System (ADS)
Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang
2017-03-01
Full waveform inversion and reverse time migration are active research areas for seismic exploration. Forward modeling in the time domain determines the precision of the results, and numerical solutions of finite difference have been widely adopted as an important mathematical tool for forward modeling. In this article, the optimum combined of window functions was designed based on the finite difference operator using a truncated approximation of the spatial convolution series in pseudo-spectrum space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the various window functions, to provide better truncation results, but also control the truncation error of the finite difference operator manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. Error level and elastic forward modeling under the proposed combined system were compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed, which is compared with modified binomial window function finite-difference and conventional finite-difference. Numerical simulation verifies the reliability of the proposed method.
A dual change model of life satisfaction and functioning for individuals with schizophrenia
Edmondson, Melissa; Pahwa, Rohini; Lee, Karen Kyeunghae; Hoe, Maanse; Brekke, John S.
2013-01-01
Despite the notion that increases in functioning should be associated with increases in life satisfaction in schizophrenia, research has often found no association between the two. Dual change models of global and domain-specific life satisfaction and functioning were examined in 145 individuals with schizophrenia receiving community-based services over 12 months. Functioning and satisfaction were measured using the Role Functioning Scale and Satisfaction with Life Scale. Data were analyzed using latent growth curve modeling. Improvement in global life satisfaction was associated with improvement in overall functioning over time. Satisfaction with living situation also improved as independent functioning improved. Work satisfaction did not improve as work functioning improved. Although social functioning improved, satisfaction with social relationships did not. The link between overall functioning and global life satisfaction provides support for a recovery-based orientation to community based psychosocial rehabilitation services. When examining sub-domains, the link between outcomes and subjective experience suggests a more complex picture than previously found. These findings are crucial to interventions and programs aimed at improving functioning and the subjective experiences of consumers recovering from mental illness. Interventions that show improvements in functional outcomes can assume that they will show concurrent improvements in global life satisfaction as well and in satisfaction with independent living. Interventions geared toward improving social functioning will need to consider the complexity of social relationships and how they affect satisfaction associated with personal relationships. Interventions geared towards improving work functioning will need to consider how the quality and level of work affect satisfaction with employment. PMID:22591780
Structural and functional aspects of C1-inhibitor.
Bos, Ineke G A; Hack, C Erik; Abrahams, Jan Pieter
2002-09-01
C1-Inh is a serpin that inhibits serine proteases from the complement and the coagulation pathway. C1-Inh consists of a serpin domain and a unique N-terminal domain and is heavily glycosylated. Non-functional mutants of C1-Inh can give insight into the inhibitory mechanism of C1-Inh. This review describes a novel 3D model of C1-Inh, based on a newly developed homology modelling method. This model gives insight into a possible potentiation mechanism of C1-Inh and based on this model the essential residues for efficient inhibition by C1-Inh are discussed.
Tawhai, M. H.; Clark, A. R.; Donovan, G. M.; Burrowes, K. S.
2011-01-01
Computational models of lung structure and function necessarily span multiple spatial and temporal scales, i.e., dynamic molecular interactions give rise to whole organ function, and the link between these scales cannot be fully understood if only molecular or organ-level function is considered. Here, we review progress in constructing multiscale finite element models of lung structure and function that are aimed at providing a computational framework for bridging the spatial scales from molecular to whole organ. These include structural models of the intact lung, embedded models of the pulmonary airways that couple to model lung tissue, and models of the pulmonary vasculature that account for distinct structural differences at the extra- and intra-acinar levels. Biophysically based functional models for tissue deformation, pulmonary blood flow, and airway bronchoconstriction are also described. The development of these advanced multiscale models has led to a better understanding of complex physiological mechanisms that govern regional lung perfusion and emergent heterogeneity during bronchoconstriction. PMID:22011236
Kuesten, Carla; Bi, Jian
2018-06-03
Conventional drivers of liking analysis was extended with a time dimension into temporal drivers of liking (TDOL) based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes to consumer overall liking, include Choquet integral and fuzzy measure in the multi-criteria decision-making, and linear regression based on variance decomposition. Dynamics of TDOL, i.e., the derivatives of the relative importance functional curves were also explored. Well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
NASA Technical Reports Server (NTRS)
Barro, E.; Delbufalo, A.; Rossi, F.
1993-01-01
The definition of some modern high demanding space systems requires a different approach to system definition and design from that adopted for traditional missions. System functionality is strongly coupled to the operational analysis, aimed at characterizing the dynamic interactions of the flight element with its surrounding environment and its ground control segment. Unambiguous functional, operational and performance requirements are to be defined for the system, thus improving also the successive development stages. This paper proposes a Petri Nets based methodology and two related prototype applications (to ARISTOTELES orbit control and to Hermes telemetry generation) for the operational analysis of space systems through the dynamic modeling of their functions and a related computer aided environment (ISIDE) able to make the dynamic model work, thus enabling an early validation of the system functional representation, and to provide a structured system requirements data base, which is the shared knowledge base interconnecting static and dynamic applications, fully traceable with the models and interfaceable with the external world.
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable to combine information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883
Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable to combine information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.
Barton, Alan J; Valdés, Julio J; Orchard, Robert
2009-01-01
Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e network interconnection) is possible, and (iii) no bias nodes or weights are associated to the connections, neurons or layers. The general functions associated to a neuron are learned by searching a function space. They are not provided a priori, but are rather built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of a high-dimensional parameter space associated to the set of all network solutions.
Exact diagonalization library for quantum electron models
NASA Astrophysics Data System (ADS)
Iskakov, Sergei; Danilov, Michael
2018-04-01
We present an exact diagonalization C++ template library (EDLib) for solving quantum electron models, including the single-band finite Hubbard cluster and the multi-orbital impurity Anderson model. The observables that can be computed using EDLib are single particle Green's functions and spin-spin correlation functions. This code provides three different types of Hamiltonian matrix storage that can be chosen based on the model.
Kobayashi, Seiji
2002-05-10
A point-spread function (PSF) is commonly used as a model of an optical disk readout channel. However, the model given by the PSF does not contain the quadratic distortion generated by the photo-detection process. We introduce a model for calculating an approximation of the quadratic component of a signal. We show that this model can be further simplified when a read-only-memory (ROM) disk is assumed. We introduce an edge-spread function by which a simple nonlinear model of an optical ROM disk readout channel is created.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther
2015-01-01
Several key capabilities have been identified by the aerospace community as lacking in the material/models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two dimensional shell elements, and the ability to simulate the response of composites composed with a variety of composite architectures, including laminates, weaves and braids. In addition, a need has been expressed to have a material model that utilizes tabulated experimentally based input to define the evolution of plasticity and damage as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Incorporating rate dependence into the yield function is achieved by using a series of tabluated input curves, each at a different constant strain rate. The non-associative flow-rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. The presented paper will present in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which a combination of actual testing and selective numerical testing can be combined to yield the appropriate input data for the model will be described. A specific laminated polymer matrix composite will be examined to demonstrate the application of the model.
Surface functional groups in capacitive deionization with porous carbon electrodes
NASA Astrophysics Data System (ADS)
Hemmatifar, Ali; Oyarzun, Diego I.; Palko, James W.; Hawks, Steven A.; Stadermann, Michael; Santiago, Juan G.; Stanford Microfluidics Lab Team; Lawrence Livermore National Lab Team
2017-11-01
Capacitive deionization (CDI) is a promising technology for removal of toxic ions and salt from water. In CDI, an applied potential of about 1 V to pairs of porous electrodes (e.g. activated carbon) induces ion electromigration and electrostatic adsorption at electrode surfaces. Immobile surface functional groups play a critical role in the type and capacity of ion adsorption, and this can dramatically change desalination performance. We here use models and experiments to study weak electrolyte surface groups which protonate and/or depropotante based on their acid/base dissociation constants and local pore pH. Net chemical surface charge and differential capacitance can thus vary during CDI operation. In this work, we present a CDI model based on weak electrolyte acid/base equilibria theory. Our model incorporates preferential cation (anion) adsorption for activated carbon with acidic (basic) surface groups. We validated our model with experiments on custom built CDI cells with a variety of functionalizations. To this end, we varied electrolyte pH and measured adsorption of individual anionic and cationic ions using inductively coupled plasma mass spectrometry (ICP-MS) and ion chromatography (IC) techniques. Our model shows good agreement with experiments and provides a framework useful in the design of CDI control schemes.
Learning to rank using user clicks and visual features for image retrieval.
Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong
2015-04-01
The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in image ranking model. However, the existing ranking model cannot integrate visual features, which are efficient in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank models based on visual features and user clicks outperforms state-of-the-art algorithms.
Modeling of nanoscale liquid mixture transport by density functional hydrodynamics
NASA Astrophysics Data System (ADS)
Dinariev, Oleg Yu.; Evseev, Nikolay V.
2017-06-01
Modeling of multiphase compositional hydrodynamics at nanoscale is performed by means of density functional hydrodynamics (DFH). DFH is the method based on density functional theory and continuum mechanics. This method has been developed by the authors over 20 years and used for modeling in various multiphase hydrodynamic applications. In this paper, DFH was further extended to encompass phenomena inherent in liquids at nanoscale. The new DFH extension is based on the introduction of external potentials for chemical components. These potentials are localized in the vicinity of solid surfaces and take account of the van der Waals forces. A set of numerical examples, including disjoining pressure, film precursors, anomalous rheology, liquid in contact with heterogeneous surface, capillary condensation, and forward and reverse osmosis, is presented to demonstrate modeling capabilities.
Force Project Technology Presentation to the NRCC
2014-02-04
Functional Bridge components Smart Odometer Adv Pretreatment Smart Bridge Multi-functional Gap Crossing Fuel Automated Tracking System Adv...comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing. Product(s...Validated Dynamic Modeling tool based on parametric study using material models to reliably predict the textile mechanics of the hose
Bai, Yu; Katahira, Kentaro; Ohira, Hideki
2014-01-01
Humans are capable of correcting their actions based on actions performed in the past, and this ability enables them to adapt to a changing environment. The computational field of reinforcement learning (RL) has provided a powerful explanation for understanding such processes. Recently, the dual learning system, modeled as a hybrid model that incorporates value update based on reward-prediction error and learning rate modulation based on the surprise signal, has gained attention as a model for explaining various neural signals. However, the functional significance of the hybrid model has not been established. In the present study, we used computer simulation in a reversal learning task to address functional significance in a probabilistic reversal learning task. The hybrid model was found to perform better than the standard RL model in a large parameter setting. These results suggest that the hybrid model is more robust against the mistuning of parameters compared with the standard RL model when decision-makers continue to learn stimulus-reward contingencies, which can create abrupt changes. The parameter fitting results also indicated that the hybrid model fit better than the standard RL model for more than 50% of the participants, which suggests that the hybrid model has more explanatory power for the behavioral data than the standard RL model. PMID:25161635
From brain topography to brain topology: relevance of graph theory to functional neuroscience.
Minati, Ludovico; Varotto, Giulia; D'Incerti, Ludovico; Panzica, Ferruccio; Chan, Dennis
2013-07-10
Although several brain regions show significant specialization, higher functions such as cross-modal information integration, abstract reasoning and conscious awareness are viewed as emerging from interactions across distributed functional networks. Analytical approaches capable of capturing the properties of such networks can therefore enhance our ability to make inferences from functional MRI, electroencephalography and magnetoencephalography data. Graph theory is a branch of mathematics that focuses on the formal modelling of networks and offers a wide range of theoretical tools to quantify specific features of network architecture (topology) that can provide information complementing the anatomical localization of areas responding to given stimuli or tasks (topography). Explicit modelling of the architecture of axonal connections and interactions among areas can furthermore reveal peculiar topological properties that are conserved across diverse biological networks, and highly sensitive to disease states. The field is evolving rapidly, partly fuelled by computational developments that enable the study of connectivity at fine anatomical detail and the simultaneous interactions among multiple regions. Recent publications in this area have shown that graph-based modelling can enhance our ability to draw causal inferences from functional MRI experiments, and support the early detection of disconnection and the modelling of pathology spread in neurodegenerative disease, particularly Alzheimer's disease. Furthermore, neurophysiological studies have shown that network topology has a profound link to epileptogenesis and that connectivity indices derived from graph models aid in modelling the onset and spread of seizures. Graph-based analyses may therefore significantly help understand the bases of a range of neurological conditions. This review is designed to provide an overview of graph-based analyses of brain connectivity and their relevance to disease aimed principally at general neuroscientists and clinicians.
Accurate lithography simulation model based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki
2017-07-01
Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, compact resist model is commonly used. The model is established for faster calculation. To have accurate compact resist model, it is necessary to fix a complicated non-linear model function. However, it is difficult to decide an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks) which is one of deep learning techniques. CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show CNN model can reduce CD prediction errors by 70% compared with the conventional model.
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges; which include, defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) opposed to being contained in a single UAV (monolithic). The case study based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
NASA Astrophysics Data System (ADS)
Wang, Linjuan; Abeyaratne, Rohan
2018-07-01
The peridynamic model of a solid does not involve spatial gradients of the displacement field and is therefore well suited for studying defect propagation. Here, bond-based peridynamic theory is used to study the equilibrium and steady propagation of a lattice defect - a kink - in one dimension. The material transforms locally, from one state to another, as the kink passes through. The kink is in equilibrium if the applied force is less than a certain critical value that is calculated, and propagates if it exceeds that value. The kinetic relation giving the propagation speed as a function of the applied force is also derived. In addition, it is shown that the dynamical solutions of certain differential-equation-based models of a continuum are the same as those of the peridynamic model provided the micromodulus function is chosen suitably. A formula for calculating the micromodulus function of the equivalent peridynamic model is derived and illustrated. This ability to replace a differential-equation-based model with a peridynamic one may prove useful when numerically studying more complicated problems such as those involving multiple and interacting defects.
Evaluation of computing systems using functionals of a Stochastic process
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Wu, L. T.
1980-01-01
An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, which are generally referred to as functionals of a Markov process, were considered. A closed form solution of performability for the case where performance is identified with the minimum value of a functional was developed.
H. Li; X. Deng; Andy Dolloff; E. P. Smith
2015-01-01
A novel clustering method for bivariate functional data is proposed to group streams based on their waterâair temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...
Li, Cai; Lowe, Robert; Ziemke, Tom
2014-01-01
In this article, we propose an architecture of a bio-inspired controller that addresses the problem of learning different locomotion gaits for different robot morphologies. The modeling objective is split into two: baseline motion modeling and dynamics adaptation. Baseline motion modeling aims to achieve fundamental functions of a certain type of locomotion and dynamics adaptation provides a "reshaping" function for adapting the baseline motion to desired motion. Based on this assumption, a three-layer architecture is developed using central pattern generators (CPGs, a bio-inspired locomotor center for the baseline motion) and dynamic motor primitives (DMPs, a model with universal "reshaping" functions). In this article, we use this architecture with the actor-critic algorithms for finding a good "reshaping" function. In order to demonstrate the learning power of the actor-critic based architecture, we tested it on two experiments: (1) learning to crawl on a humanoid and, (2) learning to gallop on a puppy robot. Two types of actor-critic algorithms (policy search and policy gradient) are compared in order to evaluate the advantages and disadvantages of different actor-critic based learning algorithms for different morphologies. Finally, based on the analysis of the experimental results, a generic view/architecture for locomotion learning is discussed in the conclusion.
Li, Cai; Lowe, Robert; Ziemke, Tom
2014-01-01
In this article, we propose an architecture of a bio-inspired controller that addresses the problem of learning different locomotion gaits for different robot morphologies. The modeling objective is split into two: baseline motion modeling and dynamics adaptation. Baseline motion modeling aims to achieve fundamental functions of a certain type of locomotion and dynamics adaptation provides a “reshaping” function for adapting the baseline motion to desired motion. Based on this assumption, a three-layer architecture is developed using central pattern generators (CPGs, a bio-inspired locomotor center for the baseline motion) and dynamic motor primitives (DMPs, a model with universal “reshaping” functions). In this article, we use this architecture with the actor-critic algorithms for finding a good “reshaping” function. In order to demonstrate the learning power of the actor-critic based architecture, we tested it on two experiments: (1) learning to crawl on a humanoid and, (2) learning to gallop on a puppy robot. Two types of actor-critic algorithms (policy search and policy gradient) are compared in order to evaluate the advantages and disadvantages of different actor-critic based learning algorithms for different morphologies. Finally, based on the analysis of the experimental results, a generic view/architecture for locomotion learning is discussed in the conclusion. PMID:25324773
Lin, Tai-Chi; Zhu, Danhong; Hinton, David R.; Clegg, Dennis O.; Humayun, Mark S.
2017-01-01
Dysfunction and death of retinal pigment epithelium (RPE) and or photoreceptors can lead to irreversible vision loss. The eye represents an ideal microenvironment for stem cell-based therapy. It is considered an “immune privileged” site, and the number of cells needed for therapy is relatively low for the area of focused vision (macula). Further, surgical placement of stem cell-derived grafts (RPE, retinal progenitors, and photoreceptor precursors) into the vitreous cavity or subretinal space has been well established. For preclinical tests, assessments of stem cell-derived graft survival and functionality are conducted in animal models by various noninvasive approaches and imaging modalities. In vivo experiments conducted in animal models based on replacing photoreceptors and/or RPE cells have shown survival and functionality of the transplanted cells, rescue of the host retina, and improvement of visual function. Based on the positive results obtained from these animal experiments, human clinical trials are being initiated. Despite such progress in stem cell research, ethical, regulatory, safety, and technical difficulties still remain a challenge for the transformation of this technique into a standard clinical approach. In this review, the current status of preclinical safety and efficacy studies for retinal cell replacement therapies conducted in animal models will be discussed. PMID:28928775
Adaptive Modeling of Details for Physically-Based Sound Synthesis and Propagation
2015-03-21
the interface that ensures the consistency and validity of the solution given by the two methods. Transfer functions are used to model two-way...release; distribution is unlimited. Adaptive modeling of details for physically-based sound synthesis and propagation The views, opinions and/or...Research Triangle Park, NC 27709-2211 Applied sciences, Adaptive modeling , Physcially-based, Sound synthesis, Propagation, Virtual world REPORT
ERIC Educational Resources Information Center
Van Lancker Sidtis, Diana
2007-01-01
Neurolinguistic research has been engaged in evaluating models of language using measures from brain structure and function, and/or in investigating brain structure and function with respect to language representation using proposed models of language. While the aphasiological strategy, which classifies aphasias based on performance modality and a…
ERIC Educational Resources Information Center
Engelhard, George, Jr.; Wang, Jue
2014-01-01
The authors of the Focus article pose important questions regarding whether or not performance-based tasks related to executive functioning are best viewed as reflective or formative indicators. Miyake and Friedman (2012) define executive functioning (EF) as "a set of general-purpose control mechanisms, often linked to the prefrontal cortex…
A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model
Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin
2009-01-01
A decentralized model predictive controller applicable for some systems which exhibit different dynamic characteristics in different channels was presented in this paper. These systems can be regarded as combinations of a fast model and a slow model, the response speeds of which are in two-time scale. Because most practical models used for control are obtained in the form of transfer function matrix by plant tests, a singular perturbation method was firstly used to separate the original transfer function matrix into two models in two-time scale. Then a decentralized model predictive controller was designed based on the two models derived from the original system. And the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542
Prediction of Erectile Function Following Treatment for Prostate Cancer
Alemozaffar, Mehrdad; Regan, Meredith M.; Cooperberg, Matthew R.; Wei, John T.; Michalski, Jeff M.; Sandler, Howard M.; Hembroff, Larry; Sadetsky, Natalia; Saigal, Christopher S.; Litwin, Mark S.; Klein, Eric; Kibel, Adam S.; Hamstra, Daniel A.; Pisters, Louis L.; Kuban, Deborah A.; Kaplan, Irving D.; Wood, David P.; Ciezki, Jay; Dunn, Rodney L.; Carroll, Peter R.; Sanda, Martin G.
2013-01-01
Context Sexual function is the health-related quality of life (HRQOL) domain most commonly impaired after prostate cancer treatment; however, validated tools to enable personalized prediction of erectile dysfunction after prostate cancer treatment are lacking. Objective To predict long-term erectile function following prostate cancer treatment based on individual patient and treatment characteristics. Design Pretreatment patient characteristics, sexual HRQOL, and treatment details measured in a longitudinal academic multicenter cohort (Prostate Cancer Outcomes and Satisfaction With Treatment Quality Assessment; enrolled from 2003 through 2006), were used to develop models predicting erectile function 2 years after treatment. A community-based cohort (community-based Cancer of the Prostate Strategic Urologic Research Endeavor [CaPSURE]; enrolled 1995 through 2007) externally validated model performance. Patients in US academic and community-based practices whose HRQOL was measured pretreatment (N = 1201) underwent follow-up after prostatectomy, external radiotherapy, or brachytherapy for prostate cancer. Sexual outcomes among men completing 2 years’ follow-up (n = 1027) were used to develop models predicting erectile function that were externally validated among 1913 patients in a community-based cohort. Main Outcome Measures Patient-reported functional erections suitable for intercourse 2 years following prostate cancer treatment. Results Two years after prostate cancer treatment, 368 (37% [95% CI, 34%–40%]) of all patients and 335 (48% [95% CI, 45%–52%]) of those with functional erections prior to treatment reported functional erections; 531 (53% [95% CI, 50%–56%]) of patients without penile prostheses reported use of medications or other devices for erectile dysfunction. Pretreatment sexual HRQOL score, age, serum prostate-specific antigen level, race/ethnicity, body mass index, and intended treatment details were associated with functional erections 2 years after treatment. Multivariable logistic regression models predicting erectile function estimated 2-year function probabilities from as low as 10% or less to as high as 70% or greater depending on the individual’s pretreatment patient characteristics and treatment details. The models performed well in predicting erections in external validation among CaPSURE cohort patients (areas under the receiver operating characteristic curve, 0.77 [95% CI, 0.74–0.80] for prostatectomy; 0.87 [95% CI, 0.80–0.94] for external radiotherapy; and 0.90 [95% CI, 0.85–0.95] for brachytherapy). Conclusion Stratification by pretreatment patient characteristics and treatment details enables prediction of erectile function 2 years after prostatectomy, external radiotherapy, or brachytherapy for prostate cancer. PMID:21934053
NASA Astrophysics Data System (ADS)
Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon
2018-02-01
Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and the build in multi-scale parameter regionalisation. In addition two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are most relevant for simulated AET patterns from the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, i.e. one comprised of three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that with flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET significantly improved when an objective function based on observed AET patterns and a novel spatial performance metric compared to traditional streamflow-only calibration were included. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
Value of eddy-covariance data for individual-based, forest gap models
NASA Astrophysics Data System (ADS)
Roedig, Edna; Cuntz, Matthias; Huth, Andreas
2014-05-01
Individual-based forest gap models simulate tree growth and carbon fluxes on large time scales. They are a well established tool to predict forest dynamics and successions. However, the effect of climatic variables on processes of such individual-based models is uncertain (e.g. the effect of temperature or soil moisture on the gross primary production (GPP)). Commonly, functional relationships and parameter values that describe the effect of climate variables on the model processes are gathered from various vegetation models of different spatial scales. Though, their accuracies and parameter values have not been validated for the specific model scales of individual-based forest gap models. In this study, we address this uncertainty by linking Eddy-covariance (EC) data and a forest gap model. The forest gap model FORMIND is applied on the Norwegian spruce monoculture forest at Wetzstein in Thuringia, Germany for the years 2003-2008. The original parameterizations of climatic functions are adapted according to the EC-data. The time step of the model is reduced to one day in order to adapt to the high resolution EC-data. The FORMIND model uses functional relationships on an individual level, whereas the EC-method measures eco-physiological responses at the ecosystem level. However, we assume that in homogeneous stands as in our study, functional relationships for both methods are comparable. The model is then validated at the spruce forest Waldstein, Germany. Results show that the functional relationships used in the model, are similar to those observed with the EC-method. The temperature reduction curve is well reflected in the EC-data, though parameter values differ from the originally expected values. For example at the freezing point, the observed GPP is 30% higher than predicted by the forest gap model. The response of observed GPP to soil moisture shows that the permanent wilting point is 7 vol-% lower than the value derived from the literature. The light response curve, integrated over the canopy and the forest stand, is underestimated compared to the measured data. The EC-method measures a yearly carbon balance of 13 mol(CO2)m-2 for the Wetzstein site. The model with the original parameterization overestimates the yearly carbon balance by nearly 5 mol(CO2)m-2 while the model with an EC-based parameterization fits the measured data very well. The parameter values derived from EC-data are applied on the spruce forest Waldstein and clearly improve estimates of the carbon balance.
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Hyun-Seob; Thomas, Dennis G.; Stegen, James C.
In a recent study of denitrification dynamics in hyporheic zone sediments, we observed a significant time lag (up to several days) in enzymatic response to the changes in substrate concentration. To explore an underlying mechanism and understand the interactive dynamics between enzymes and nutrients, we developed a trait-based model that associates a community’s traits with functional enzymes, instead of typically used species guilds (or functional guilds). This enzyme-based formulation allows to collectively describe biogeochemical functions of microbial communities without directly parameterizing the dynamics of species guilds, therefore being scalable to complex communities. As a key component of modeling, we accountedmore » for microbial regulation occurring through transcriptional and translational processes, the dynamics of which was parameterized based on the temporal profiles of enzyme concentrations measured using a new signature peptide-based method. The simulation results using the resulting model showed several days of a time lag in enzymatic responses as observed in experiments. Further, the model showed that the delayed enzymatic reactions could be primarily controlled by transcriptional responses and that the dynamics of transcripts and enzymes are closely correlated. The developed model can serve as a useful tool for predicting biogeochemical processes in natural environments, either independently or through integration with hydrologic flow simulators.« less
KECSA-Movable Type Implicit Solvation Model (KMTISM)
2015-01-01
Computation of the solvation free energy for chemical and biological processes has long been of significant interest. The key challenges to effective solvation modeling center on the choice of potential function and configurational sampling. Herein, an energy sampling approach termed the “Movable Type” (MT) method, and a statistical energy function for solvation modeling, “Knowledge-based and Empirical Combined Scoring Algorithm” (KECSA) are developed and utilized to create an implicit solvation model: KECSA-Movable Type Implicit Solvation Model (KMTISM) suitable for the study of chemical and biological systems. KMTISM is an implicit solvation model, but the MT method performs energy sampling at the atom pairwise level. For a specific molecular system, the MT method collects energies from prebuilt databases for the requisite atom pairs at all relevant distance ranges, which by its very construction encodes all possible molecular configurations simultaneously. Unlike traditional statistical energy functions, KECSA converts structural statistical information into categorized atom pairwise interaction energies as a function of the radial distance instead of a mean force energy function. Within the implicit solvent model approximation, aqueous solvation free energies are then obtained from the NVT ensemble partition function generated by the MT method. Validation is performed against several subsets selected from the Minnesota Solvation Database v2012. Results are compared with several solvation free energy calculation methods, including a one-to-one comparison against two commonly used classical implicit solvation models: MM-GBSA and MM-PBSA. Comparison against a quantum mechanics based polarizable continuum model is also discussed (Cramer and Truhlar’s Solvation Model 12). PMID:25691832
The future of music in therapy and medicine.
Thaut, Michael H
2005-12-01
The understanding of music's role and function in therapy and medicine is undergoing a rapid transformation, based on neuroscientific research showing the reciprocal relationship between studying the neurobiological foundations of music in the brain and how musical behavior through learning and experience changes brain and behavior function. Through this research the theory and clinical practice of music therapy is changing more and more from a social science model, based on cultural roles and general well-being concepts, to a neuroscience-guided model based on brain function and music perception. This paradigm shift has the potential to move music therapy from an adjunct modality to a central treatment modality in rehabilitation and therapy.
Petersen, James H.; DeAngelis, Donald L.
1992-01-01
The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. ln the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
NASA Astrophysics Data System (ADS)
Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin
2017-05-01
As an innovation, both black box and physical-based models were incorporated into simulating groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of study plain were firstly de-noised by the wavelet-based de-noising approach. The effect of de-noised data on the performance of artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) was evaluated. Wavelet transform coherence was employed for spatial clustering of piezometers. Then for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of piezometers as interior conditions, the radial basis function as a meshless method which solves partial differential equations of GFCT, was used to estimate GL and CC values at any point within the plain where there is not any piezometer. Results indicated that efficiency of ANFIS based spatiotemporal model was more than ANN based model up to 13%.
NASA Astrophysics Data System (ADS)
di Stefano, Marco; Paulsen, Jonas; Lien, Tonje G.; Hovig, Eivind; Micheletti, Cristian
2016-10-01
Combining genome-wide structural models with phenomenological data is at the forefront of efforts to understand the organizational principles regulating the human genome. Here, we use chromosome-chromosome contact data as knowledge-based constraints for large-scale three-dimensional models of the human diploid genome. The resulting models remain minimally entangled and acquire several functional features that are observed in vivo and that were never used as input for the model. We find, for instance, that gene-rich, active regions are drawn towards the nuclear center, while gene poor and lamina associated domains are pushed to the periphery. These and other properties persist upon adding local contact constraints, suggesting their compatibility with non-local constraints for the genome organization. The results show that suitable combinations of data analysis and physical modelling can expose the unexpectedly rich functionally-related properties implicit in chromosome-chromosome contact data. Specific directions are suggested for further developments based on combining experimental data analysis and genomic structural modelling.
Di Stefano, Marco; Paulsen, Jonas; Lien, Tonje G; Hovig, Eivind; Micheletti, Cristian
2016-10-27
Combining genome-wide structural models with phenomenological data is at the forefront of efforts to understand the organizational principles regulating the human genome. Here, we use chromosome-chromosome contact data as knowledge-based constraints for large-scale three-dimensional models of the human diploid genome. The resulting models remain minimally entangled and acquire several functional features that are observed in vivo and that were never used as input for the model. We find, for instance, that gene-rich, active regions are drawn towards the nuclear center, while gene poor and lamina associated domains are pushed to the periphery. These and other properties persist upon adding local contact constraints, suggesting their compatibility with non-local constraints for the genome organization. The results show that suitable combinations of data analysis and physical modelling can expose the unexpectedly rich functionally-related properties implicit in chromosome-chromosome contact data. Specific directions are suggested for further developments based on combining experimental data analysis and genomic structural modelling.
Tree-Structured Digital Organisms Model
NASA Astrophysics Data System (ADS)
Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo
Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to the generic programming. With our model, a life process is a combination of various functions, as if life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and it can simulate evolutionary computation through mutual interaction of functions. We verified our model by simulations that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.
NASA Astrophysics Data System (ADS)
Fovet, O.; Hrachowitz, M.; RUIZ, L.; Gascuel-odoux, C.; Savenije, H.
2013-12-01
While most hydrological models reproduce the general flow dynamics of a system, they frequently fail to adequately mimic system internal processes. This is likely to make them inadequate to simulate solutes transport. For example, the hysteresis between storage and discharge, which is often observed in shallow hard-rock aquifers, is rarely well reproduced by models. One main reason is that this hysteresis has little weight in the calibration because objective functions are based on time series of individual variables. This reduces the ability of classical calibration/validation procedures to assess the relevance of the conceptual hypothesis associated with hydrological models. Calibrating models on variables derived from the combination of different individual variables (like stream discharge and groundwater levels) is a way to insure that models will be accepted based on their consistency. Here we therefore test the value of this more systems-like approach to test different hypothesis on the behaviour of a small experimental low-land catchment in French Brittany (ORE AgrHys) where a high hysteresis is observed on the stream flow vs. shallow groundwater level relationship. Several conceptual models were applied to this site, and calibrated using objective functions based on metrics of this hysteresis. The tested model structures differed with respect to the storage function in each reservoir, the storage-discharge function in each reservoir, the deep loss expressions (as constant or variable fraction), the number of reservoirs (from 1 to 4) and their organization (parallel, series). The observed hysteretic groundwater level-discharge relationship was not satisfactorily reproduced by most of the tested models except for the most complex ones. Those were thus more consistent, their underlying hypotheses are probably more realistic even though their performance for simulating observed stream flow was decreased. Selecting models based on such systems-like approach is likely to improve their efficiency for environmental application e.g. on solute transport issues. The next step would be to apply the same approach with variables combining hydrological and biogeochemical variables.
Crowd evacuation model based on bacterial foraging algorithm
NASA Astrophysics Data System (ADS)
Shibiao, Mu; Zhijun, Chen
To understand crowd evacuation, a model based on a bacterial foraging algorithm (BFA) is proposed in this paper. Considering dynamic and static factors, the probability of pedestrian movement is established using cellular automata. In addition, given walking and queue times, a target optimization function is built. At the same time, a BFA is used to optimize the objective function. Finally, through real and simulation experiments, the relationship between the parameters of evacuation time, exit width, pedestrian density, and average evacuation speed is analyzed. The results show that the model can effectively describe a real evacuation.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
An Exospheric Temperature Model Based On CHAMP Observations and TIEGCM Simulations
NASA Astrophysics Data System (ADS)
Ruan, Haibing; Lei, Jiuhou; Dou, Xiankang; Liu, Siqing; Aa, Ercha
2018-02-01
In this work, thermospheric densities from the accelerometer measurement on board the CHAMP satellite during 2002-2009 and the simulations from the National Center for Atmospheric Research Thermosphere Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM) are employed to develop an empirical exospheric temperature model (ETM). The two-dimensional basis functions of the ETM are first provided from the principal component analysis of the TIEGCM simulations. Based on the exospheric temperatures derived from CHAMP thermospheric densities, a global distribution of the exospheric temperatures is reconstructed. A parameterization is conducted for each basis function amplitude as a function of solar-geophysical and seasonal conditions. Thus, the ETM can be utilized to model the thermospheric temperature and mass density under a specified condition. Our results showed that the averaged standard deviation of the ETM is generally less than 10% than approximately 30% in the MSIS model. Besides, the ETM reproduces the global thermospheric evolutions including the equatorial thermosphere anomaly.
Mateen, Bilal Akhter; Bussas, Matthias; Doogan, Catherine; Waller, Denise; Saverino, Alessia; Király, Franz J; Playford, E Diane
2018-05-01
To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls. Prospective cohort study. Tertiary neurological and neurosurgical center. In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care. Binary (Y/N) for falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function) and the Walk-12 (a patient-reported measure of physical function). The principal outcome was a fall during the in-patient stay ( n = 54). The Trail test was identified as the best predictor of falls. Moreover, addition of other variables, did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine learning based strategies, for example, random forests, neural networks, support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001) with 68% (± 7.7) sensitivity, and 90% (± 2.3) specificity. This study identifies a simple yet powerful machine learning (Random Forest) based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making test.
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller method. By using offline and online data rather than the mathematical system model, the PGADP algorithm improves control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q -function sequence converges to the optimal Q -function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q -function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
Earth system is inherently non-linear and it can be characterized well if we incorporate no-linearity in the formulation and solution of the problem. General tool often used for characterization of the earth system is inversion. Traditionally inverse problems are solved using least-square based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow power law (fractal distribution). Thus, the selection of initial model based on power law probability distribution will provide more realistic solution. We present a new method which can draw samples of posterior probability density function very efficiently using fractal based statistics. The application of the method has been demonstrated to invert band limited seismic data with well control. We used fractal based probability density function which uses mean, variance and Hurst coefficient of the model space to draw initial model. Further this initial model is used in global optimization inversion scheme. Inversion results using initial models generated by our method gives high resolution estimates of the model parameters than the hitherto used gradient based liner inversion method.
Multiscale Modeling of Carbon Nanotube-Epoxy Nanocomposites
NASA Astrophysics Data System (ADS)
Fasanella, Nicholas A.
Epoxy-composites are widely used in the aerospace industry. In order to improve upon stiffness and thermal conductivity; carbon nanotube additives to epoxies are being explored. This dissertation presents multiscale modeling techniques to study the engineering properties of single walled carbon nanotube (SWNT)-epoxy nanocomposites, consisting of pristine and covalently functionalized systems. Using Molecular Dynamics (MD), thermomechanical properties were calculated for a representative polymer unit cell. Finite Element (FE) and orientation distribution function (ODF) based methods were used in a multiscale framework to obtain macroscale properties. An epoxy network was built using the dendrimer growth approach. The epoxy model was verified by matching the experimental glass transition temperature, density, and dilatation. MD, via the constant valence force field (CVFF), was used to explore the mechanical and dilatometric effects of adding pristine and functionalized SWNTs to epoxy. Full stiffness matrices and linear coefficient of thermal expansion vectors were obtained. The Green-Kubo method was used to investigate the thermal conductivity as a function of temperature for the various nanocomposites. Inefficient phonon transport at the ends of nanotubes is an important factor in the thermal conductivity of the nanocomposites, and for this reason discontinuous nanotubes were modeled in addition to long nanotubes. To obtain continuum-scale elastic properties from the MD data, multiscale modeling was considered to give better control over the volume fraction of nanotubes, and investigate the effects of nanotube alignment. Two methods were considered; an FE based method, and an ODF based method. The FE method probabilistically assigned elastic properties of elements from the MD lattice results based on the desired volume fraction and alignment of the nanotubes. For the ODF method, a distribution function was generated based on the desired amount of nanotube alignment; and the stiffness matrix was calculated. A rule of mixture approach was implemented in the ODF model to vary the SWNT volume fraction. Both the ODF and FE models are compared and contrasted. ODF analysis is significantly faster for nanocomposites and is a novel contribution in this thesis. Multiscale modeling allows for the effects of nanofillers in epoxy systems to be characterized without having to run costly experiments.
Day, Ryan; Joo, Hyun; Chavan, Archana; Lennox, Kristin P.; Chen, Ann; Dahl, David B.; Vannucci, Marina; Tsai, Jerry W.
2012-01-01
As an alternative to the common template based protein structure prediction methods based on main-chain position, a novel side-chain centric approach has been developed. Together with a Bayesian loop modeling procedure and a combination scoring function, the Stone Soup algorithm was applied to the CASP9 set of template based modeling targets. Although the method did not generate as large of perturbations to the template structures as necessary, the analysis of the results gives unique insights into the differences in packing between the target structures and their templates. Considerable variation in packing is found between target and template structures even when the structures are close, and this variation is found due to 2 and 3 body packing interactions. Outside the inherent restrictions in packing representation of the PDB, the first steps in correctly defining those regions of variable packing have been mapped primarily to local interactions, as the packing at the secondary and tertiary structure are largely conserved. Of the scoring functions used, a loop scoring function based on water structure exhibited some promise for discrimination. These results present a clear structural path for further development of a side-chain centered approach to template based modeling. PMID:23266765
Day, Ryan; Joo, Hyun; Chavan, Archana C; Lennox, Kristin P; Chen, Y Ann; Dahl, David B; Vannucci, Marina; Tsai, Jerry W
2013-02-01
As an alternative to the common template based protein structure prediction methods based on main-chain position, a novel side-chain centric approach has been developed. Together with a Bayesian loop modeling procedure and a combination scoring function, the Stone Soup algorithm was applied to the CASP9 set of template based modeling targets. Although the method did not generate as large of perturbations to the template structures as necessary, the analysis of the results gives unique insights into the differences in packing between the target structures and their templates. Considerable variation in packing is found between target and template structures even when the structures are close, and this variation is found due to 2 and 3 body packing interactions. Outside the inherent restrictions in packing representation of the PDB, the first steps in correctly defining those regions of variable packing have been mapped primarily to local interactions, as the packing at the secondary and tertiary structure are largely conserved. Of the scoring functions used, a loop scoring function based on water structure exhibited some promise for discrimination. These results present a clear structural path for further development of a side-chain centered approach to template based modeling. Copyright © 2012 Elsevier Ltd. All rights reserved.
Web-based application for inverting one-dimensional magnetotelluric data using Python
NASA Astrophysics Data System (ADS)
Suryanto, Wiwit; Irnaka, Theodosius Marwan
2016-11-01
One-dimensional modeling of magnetotelluric (MT) data has been performed using an online application on a web-based virtual private server. The application was developed with the Python language using the Django framework with HTML and CSS components. The input data, including the apparent resistivity and phase as a function of period or frequency with standard deviation, can be entered through an interactive web page that can be freely accessed at https://komputasi.geofisika.ugm.ac.id. The subsurface models, represented by resistivity as a function of depth, are iteratively improved by changing the model parameters, such as the resistivity and the layer depth, based on the observed apparent resistivity and phase data. The output of the application displayed on the screen presents resistivity as a function of depth and includes the RMS error for each iteration. Synthetic and real data were used in comparative tests of the application's performance, and it is shown that the application developed accurate subsurface resistivity models. Hence, this application can be used for practical one-dimensional modeling of MT data.
Teschke, Kay; Spierings, Judith; Marion, Stephen A; Demers, Paul A; Davies, Hugh W; Kennedy, Susan M
2004-12-01
In a study of wood dust exposure and lung function, we tested the effect on the exposure-response relationship of six different exposure metrics using the mean measured exposure of each subject versus the mean exposure based on various methods of grouping subjects, including job-based groups and groups based on an empirical model of the determinants of exposure. Multiple linear regression was used to examine the association between wood dust concentration and forced expiratory volume in 1s (FEV(1)), adjusting for age, sex, height, race, pediatric asthma, and smoking. Stronger point estimates of the exposure-response relationships were observed when exposures were based on increasing levels of aggregation, allowing the relationships to be found statistically significant in four of the six metrics. The strongest point estimates were found when exposures were based on the determinants of exposure model. Determinants of exposure modeling offers the potential for improvement in risk estimation equivalent to or beyond that from job-based exposure grouping.
GROMOS polarizable charge-on-spring models for liquid urea: COS/U and COS/U2
NASA Astrophysics Data System (ADS)
Lin, Zhixiong; Bachmann, Stephan J.; van Gunsteren, Wilfred F.
2015-03-01
Two one-site polarizable urea models, COS/U and COS/U2, based on the charge-on-spring model are proposed. The models are parametrized against thermodynamic properties of urea-water mixtures in combination with the polarizable COS/G2 and COS/D2 models for liquid water, respectively, and have the same functional form of the inter-atomic interaction function and are based on the same parameter calibration procedure and type of experimental data as used to develop the GROMOS biomolecular force field. Thermodynamic, dielectric, and dynamic properties of urea-water mixtures simulated using the polarizable models are closer to experimental data than using the non-polarizable models. The COS/U and COS/U2 models may be used in biomolecular simulations of protein denaturation.
NASA Technical Reports Server (NTRS)
Yam, Yeung; Johnson, Timothy L.; Lang, Jeffrey H.
1987-01-01
A model reduction technique based on aggregation with respect to sensor and actuator influence functions rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the reduced-order plant model and the neglected plant model are derived. For the special case of collocated actuators and sensors, these expressions lead to the derivation of constraints on the controller gains that are, given the validity of the perturbation technique, sufficient to guarantee the stability of the closed-loop system. A case study demonstrates the derivation of stabilizing controllers based on the present technique. The use of control and observation synthesis in modifying the dimension of the reduced-order plant model is also discussed. A numerical example is provided for illustration.
Johnson, Philip J.; Berhane, Sarah; Kagebayashi, Chiaki; Satomura, Shinji; Teng, Mabel; Reeves, Helen L.; O'Beirne, James; Fox, Richard; Skowronska, Anna; Palmer, Daniel; Yeo, Winnie; Mo, Frankie; Lai, Paul; Iñarrairaegui, Mercedes; Chan, Stephen L.; Sangro, Bruno; Miksad, Rebecca; Tada, Toshifumi; Kumada, Takashi; Toyoda, Hidenori
2015-01-01
Purpose Most patients with hepatocellular carcinoma (HCC) have associated chronic liver disease, the severity of which is currently assessed by the Child-Pugh (C-P) grade. In this international collaboration, we identify objective measures of liver function/dysfunction that independently influence survival in patients with HCC and then combine these into a model that could be compared with the conventional C-P grade. Patients and Methods We developed a simple model to assess liver function, based on 1,313 patients with HCC of all stages from Japan, that involved only serum bilirubin and albumin levels. We then tested the model using similar cohorts from other geographical regions (n = 5,097) and other clinical situations (patients undergoing resection [n = 525] or sorafenib treatment for advanced HCC [n = 1,132]). The specificity of the model for liver (dys)function was tested in patients with chronic liver disease but without HCC (n = 501). Results The model, the Albumin-Bilirubin (ALBI) grade, performed at least as well as the C-P grade in all geographic regions. The majority of patients with HCC had C-P grade A disease at presentation, and within this C-P grade, ALBI revealed two classes with clearly different prognoses. Its utility in patients with chronic liver disease alone supported the contention that the ALBI grade was indeed an index of liver (dys)function. Conclusion The ALBI grade offers a simple, evidence-based, objective, and discriminatory method of assessing liver function in HCC that has been extensively tested in an international setting. This new model eliminates the need for subjective variables such as ascites and encephalopathy, a requirement in the conventional C-P grade. PMID:25512453
Spaceflight tracking and data network operational reliability assessment for Skylab
NASA Technical Reports Server (NTRS)
Seneca, V. I.; Mlynarczyk, R. H.
1974-01-01
Data on the spaceflight communications equipment status during the Skylab mission were subjected to an operational reliability assessment. Reliability models were revised to reflect pertinent equipment changes accomplished prior to the beginning of the Skylab missions. Appropriate adjustments were made to fit the data to the models. The availabilities are based on the failure events resulting in the stations inability to support a function of functions and the MTBF's are based on all events including 'can support' and 'cannot support'. Data were received from eleven land-based stations and one ship.
An important challenge for an integrative approach to developmental systems toxicology is associating putative molecular initiating events (MIEs), cell signaling pathways, cell function and modeled fetal exposure kinetics. We have developed a chemical classification model based o...
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is JWLS ...q̂WLSH ) = 10.3040×106. Under the alternative hypothesis, the weighted least squares cost functional is JWLS (q̂WLS) = 8.8394 × 106. Thus the model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A
All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thickmore » concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.« less
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2017-02-01
All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
NASA Astrophysics Data System (ADS)
Hartman, Joshua D.; Monaco, Stephen; Schatschneider, Bohdan; Beran, Gregory J. O.
2015-09-01
We assess the quality of fragment-based ab initio isotropic 13C chemical shift predictions for a collection of 25 molecular crystals with eight different density functionals. We explore the relative performance of cluster, two-body fragment, combined cluster/fragment, and the planewave gauge-including projector augmented wave (GIPAW) models relative to experiment. When electrostatic embedding is employed to capture many-body polarization effects, the simple and computationally inexpensive two-body fragment model predicts both isotropic 13C chemical shifts and the chemical shielding tensors as well as both cluster models and the GIPAW approach. Unlike the GIPAW approach, hybrid density functionals can be used readily in a fragment model, and all four hybrid functionals tested here (PBE0, B3LYP, B3PW91, and B97-2) predict chemical shifts in noticeably better agreement with experiment than the four generalized gradient approximation (GGA) functionals considered (PBE, OPBE, BLYP, and BP86). A set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided based on these benchmark calculations. Statistical cross-validation procedures are used to demonstrate the robustness of these fits.
Hartman, Joshua D; Monaco, Stephen; Schatschneider, Bohdan; Beran, Gregory J O
2015-09-14
We assess the quality of fragment-based ab initio isotropic (13)C chemical shift predictions for a collection of 25 molecular crystals with eight different density functionals. We explore the relative performance of cluster, two-body fragment, combined cluster/fragment, and the planewave gauge-including projector augmented wave (GIPAW) models relative to experiment. When electrostatic embedding is employed to capture many-body polarization effects, the simple and computationally inexpensive two-body fragment model predicts both isotropic (13)C chemical shifts and the chemical shielding tensors as well as both cluster models and the GIPAW approach. Unlike the GIPAW approach, hybrid density functionals can be used readily in a fragment model, and all four hybrid functionals tested here (PBE0, B3LYP, B3PW91, and B97-2) predict chemical shifts in noticeably better agreement with experiment than the four generalized gradient approximation (GGA) functionals considered (PBE, OPBE, BLYP, and BP86). A set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided based on these benchmark calculations. Statistical cross-validation procedures are used to demonstrate the robustness of these fits.
NASA Technical Reports Server (NTRS)
Torian, J. G.
1977-01-01
Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground support, and postmission functions for the space transportation systems is established. Analytical models consisting of an orbiter planning processor with consumables data base is developed. A method of recognizing potential constraint violations in both the planning and flight operations functions, and a flight data file storage/retrieval of information over an extended period which interfaces with a flight operations processor for monitoring of the actual flights is presented.
Quality assessment of protein model-structures based on structural and functional similarities.
Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata
2012-09-21
Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. A gap between number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model it from a sequence or partial structural information, e.g. contact maps. Consequently, biologists have the ability to generate automatically a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA--Gene Ontology-Based Assessment is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using standardized and hierarchical description of proteins provided by Gene Ontology combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method, the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets. GOBA also obtained the best result for two targets of CASP8, and one of CASP9, compared to the contest participants. Consequently, GOBA offers a novel single model quality assessment program that addresses the practical needs of biologists. In conjunction with other Model Quality Assessment Programs (MQAPs), it would prove useful for the evaluation of single protein models.
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.
Characterization and prediction of chemical functions and weight fractions in consumer products.
Isaacs, Kristin K; Goldsmith, Michael-Rock; Egeghy, Peter; Phillips, Katherine; Brooks, Raina; Hong, Tao; Wambaugh, John F
2016-01-01
Assessing exposures from the thousands of chemicals in commerce requires quantitative information on the chemical constituents of consumer products. Unfortunately, gaps in available composition data prevent assessment of exposure to chemicals in many products. Here we propose filling these gaps via consideration of chemical functional role. We obtained function information for thousands of chemicals from public sources and used a clustering algorithm to assign chemicals into 35 harmonized function categories (e.g., plasticizers, antimicrobials, solvents). We combined these functions with weight fraction data for 4115 personal care products (PCPs) to characterize the composition of 66 different product categories (e.g., shampoos). We analyzed the combined weight fraction/function dataset using machine learning techniques to develop quantitative structure property relationship (QSPR) classifier models for 22 functions and for weight fraction, based on chemical-specific descriptors (including chemical properties). We applied these classifier models to a library of 10196 data-poor chemicals. Our predictions of chemical function and composition will inform exposure-based screening of chemicals in PCPs for combination with hazard data in risk-based evaluation frameworks. As new information becomes available, this approach can be applied to other classes of products and the chemicals they contain in order to provide essential consumer product data for use in exposure-based chemical prioritization.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
Grey-Markov forecasting model is a combination of grey prediction model and Markov chain which show obvious optimization effects for data sequences with characteristics of non-stationary and volatility. However, the state division process in traditional Grey-Markov forecasting model is mostly based on subjective real numbers that immediately affects the accuracy of forecasting values. To seek the solution, this paper introduces the central-point triangular whitenization weight function in state division to calculate possibilities of research values in each state which reflect preference degrees in different states in an objective way. On the other hand, background value optimization is applied in the traditional grey model to generate better fitting data. By this means, the improved Grey-Markov forecasting model is built. Finally, taking the grain production in Henan Province as an example, it verifies this model's validity by comparing with GM(1,1) based on background value optimization and the traditional Grey-Markov forecasting model.
Estimating neural response functions from fMRI
Kumar, Sukhbinder; Penny, William
2014-01-01
This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study. PMID:24847246
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Terranova, Oreste; Greco, Roberto; Iaquinta, Pasquale; Iovine, Giulio
2013-04-01
In Calabria (Southern Italy), rainfall-induced landslides often cause significant economic loss and victims. The timing of activation of rainfall-induced landslides can be predicted by means of either empirical ("hydrological") or physically-based ("complete") approaches. In this study, by adopting the Genetic-Algorithm based release of the hydrological model SAKe (Self Adaptive Kernel), the relationships between the rainfall series and the dates of historical activations of the Acri slope movement, a large rock slide located in the Sila Massif (Northern Calabria), have been investigated. SAKe is a self-adaptive hydrological model, based on a black-box approach and on the assumption of a linear and steady slope-stability response to rainfall. The model can be employed to predict the timing of occurrence of rainfall-induced landslides. With the model, either the mobilizations of a single phenomenon, or those of a homogeneous set of landslides in a given study area can be analysed. By properly tuning the model parameters against past occurrences, the mobility function and the threshold value can be identified. The ranges of the parameters depend on the characteristics of the slope and of the considered landslide, besides hydrological characteristics of the triggering events. SAKe requires as input: i) the series of rains, and ii) the set of known dates of landslide activation. The output of the model is represented by the mobilization function, Z(t): it is defined by means of the convolution between the rains and a filter function (i.e. the Kernel). The triggering conditions occur when the value of Z(t) gets greater than a given threshold, Zcr. In particular, the specific release of the model here employed (GA-SAKe) employs an automated tool, based on elitist Genetic Algorithms. As a result, a family of optimal, discretized kernels has been obtained from initial standard analytical functions. Such kernels maximize the fitness function of the model: they have been selected by means of a calibration technique based on the operators selection, crossover, and mutation. In this way, the values of model parameters could be iteratively changed, aiming at improving the fitness of the tested solutions. An example of model optimization is discussed, with reference to the Acri case study, to exemplify the potential application of SAKe for early-warning and civil-protection purposes.
Integrated Workforce Modeling System
NASA Technical Reports Server (NTRS)
Moynihan, Gary P.
2000-01-01
There are several computer-based systems, currently in various phases of development at KSC, which encompass some component, aspect, or function of workforce modeling. These systems may offer redundant capabilities and/or incompatible interfaces. A systems approach to workforce modeling is necessary in order to identify and better address user requirements. This research has consisted of two primary tasks. Task 1 provided an assessment of existing and proposed KSC workforce modeling systems for their functionality and applicability to the workforce planning function. Task 2 resulted in the development of a proof-of-concept design for a systems approach to workforce modeling. The model incorporates critical aspects of workforce planning, including hires, attrition, and employee development.
Improving the performances of autofocus based on adaptive retina-like sampling model
NASA Astrophysics Data System (ADS)
Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce
2018-03-01
An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, the full width of the half maxima (FWHM) and time consumption. Results show that the performances of our method are better than that of the traditional method. Meanwhile, typical autofocus functions, including sum-modified-Laplacian (SML), Laplacian (LAP), Midfrequency-DCT (MDCT) and Absolute Tenengrad (ATEN) are compared through comparative experiments. The smallest FWHM is obtained by the use of LAP, which is more suitable for evaluating accuracy than other autofocus functions. The autofocus function of MDCT is most suitable to evaluate the real-time ability.
NASA Astrophysics Data System (ADS)
Ochoa, Diego Alejandro; García, Jose Eduardo
2016-04-01
The Preisach model is a classical method for describing nonlinear behavior in hysteretic systems. According to this model, a hysteretic system contains a collection of simple bistable units which are characterized by an internal field and a coercive field. This set of bistable units exhibits a statistical distribution that depends on these fields as parameters. Thus, nonlinear response depends on the specific distribution function associated with the material. This model is satisfactorily used in this work to describe the temperature-dependent ferroelectric response in PZT- and KNN-based piezoceramics. A distribution function expanded in Maclaurin series considering only the first terms in the internal field and the coercive field is proposed. Changes in coefficient relations of a single distribution function allow us to explain the complex temperature dependence of hard piezoceramic behavior. A similar analysis based on the same form of the distribution function shows that the KNL-NTS properties soften around its orthorhombic to tetragonal phase transition.
Observation-Based Dissipation and Input Terms for Spectral Wave Models, with End-User Testing
2014-09-30
scale influence of the Great barrier reef matrix on wave attenuation, Coral Reefs [published, refereed] Ghantous, M., and A.V. Babanin, 2014: One...Observation-Based Dissipation and Input Terms for Spectral Wave Models...functions, based on advanced understanding of physics of air-sea interactions, wave breaking and swell attenuation, in wave - forecast models. OBJECTIVES The
Ma, Jing; Yu, Jiong; Hao, Guangshu; Wang, Dan; Sun, Yanni; Lu, Jianxin; Cao, Hongcui; Lin, Feiyan
2017-02-20
The prevalence of high hyperlipemia is increasing around the world. Our aims are to analyze the relationship of triglyceride (TG) and cholesterol (TC) with indexes of liver function and kidney function, and to develop a prediction model of TG, TC in overweight people. A total of 302 adult healthy subjects and 273 overweight subjects were enrolled in this study. The levels of fasting indexes of TG (fs-TG), TC (fs-TC), blood glucose, liver function, and kidney function were measured and analyzed by correlation analysis and multiple linear regression (MRL). The back propagation artificial neural network (BP-ANN) was applied to develop prediction models of fs-TG and fs-TC. The results showed there was significant difference in biochemical indexes between healthy people and overweight people. The correlation analysis showed fs-TG was related to weight, height, blood glucose, and indexes of liver and kidney function; while fs-TC was correlated with age, indexes of liver function (P < 0.01). The MRL analysis indicated regression equations of fs-TG and fs-TC both had statistic significant (P < 0.01) when included independent indexes. The BP-ANN model of fs-TG reached training goal at 59 epoch, while fs-TC model achieved high prediction accuracy after training 1000 epoch. In conclusions, there was high relationship of fs-TG and fs-TC with weight, height, age, blood glucose, indexes of liver function and kidney function. Based on related variables, the indexes of fs-TG and fs-TC can be predicted by BP-ANN models in overweight people.
Guo, Shu Hai; Wu, Bo
2017-12-01
Aquatic ecological regionalization and aquatic ecological function regionalization are the basis of water environmental management of a river basin and rational utilization of an aquatic ecosystem, and have been studied in China for more than ten years. Regarding the common problems in this field, the relationship between aquatic ecological regionalization and aquatic ecological function regionalization was discussed in this study by systematic analysis of the aquatic ecological zoning and the types of aquatic ecological function. Based on the dual tree structure, we put forward the RFCH process and the diamond conceptual model. Taking Liaohe River basin as an example and referring to the results of existing regionalization studies, we classified the aquatic ecological function regions based on three-class aquatic ecological regionalization. This study provided a process framework for aquatic ecological function regionalization of a river basin.
NASA Technical Reports Server (NTRS)
Miles, J. H.
1974-01-01
A rational function is presented for the acoustic spectra generated by deflection of engine exhaust jets for under-the-wing and over-the-wing versions of externally blown flaps. The functional representation is intended to provide a means for compact storage of data and for data analysis. The expressions are based on Fourier transform functions for the Strouhal normalized pressure spectral density, and on a correction for reflection effects based on the N-independent-source model of P. Thomas extended by use of a reflected ray transfer function. Curve fit comparisons are presented for blown flap data taken from turbofan engine tests and from large scale cold-flow model tests. Application of the rational function to scrubbing noise theory is also indicated.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
MADGiC: a model-based approach for identifying driver genes in cancer
Korthauer, Keegan D.; Kendziorski, Christina
2015-01-01
Motivation: Identifying and prioritizing somatic mutations is an important and challenging area of cancer research that can provide new insights into gene function as well as new targets for drug development. Most methods for prioritizing mutations rely primarily on frequency-based criteria, where a gene is identified as having a driver mutation if it is altered in significantly more samples than expected according to a background model. Although useful, frequency-based methods are limited in that all mutations are treated equally. It is well known, however, that some mutations have no functional consequence, while others may have a major deleterious impact. The spatial pattern of mutations within a gene provides further insight into their functional consequence. Properly accounting for these factors improves both the power and accuracy of inference. Also important is an accurate background model. Results: Here, we develop a Model-based Approach for identifying Driver Genes in Cancer (termed MADGiC) that incorporates both frequency and functional impact criteria and accommodates a number of factors to improve the background model. Simulation studies demonstrate advantages of the approach, including a substantial increase in power over competing methods. Further advantages are illustrated in an analysis of ovarian and lung cancer data from The Cancer Genome Atlas (TCGA) project. Availability and implementation: R code to implement this method is available at http://www.biostat.wisc.edu/ kendzior/MADGiC/. Contact: kendzior@biostat.wisc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25573922
NASA Astrophysics Data System (ADS)
Hogri, Roni; Bamford, Simeon A.; Taub, Aryeh H.; Magal, Ari; Giudice, Paolo Del; Mintz, Matti
2015-02-01
Neuroprostheses could potentially recover functions lost due to neural damage. Typical neuroprostheses connect an intact brain with the external environment, thus replacing damaged sensory or motor pathways. Recently, closed-loop neuroprostheses, bidirectionally interfaced with the brain, have begun to emerge, offering an opportunity to substitute malfunctioning brain structures. In this proof-of-concept study, we demonstrate a neuro-inspired model-based approach to neuroprostheses. A VLSI chip was designed to implement essential cerebellar synaptic plasticity rules, and was interfaced with cerebellar input and output nuclei in real time, thus reproducing cerebellum-dependent learning in anesthetized rats. Such a model-based approach does not require prior system identification, allowing for de novo experience-based learning in the brain-chip hybrid, with potential clinical advantages and limitations when compared to existing parametric ``black box'' models.
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Perera, Ricardo; De Roeck, Guido
2008-06-01
This paper develops a sensitivity-based updating method to identify the damage in a tested reinforced concrete (RC) frame modeled with a two-dimensional planar finite element (FE) by minimizing the discrepancies of modal frequencies and mode shapes. In order to reduce the number of unknown variables, a bidimensional damage (element) function is proposed, resulting in a considerable improvement of the optimization performance. For damage identification, a reference FE model of the undamaged frame divided into a few damage functions is firstly obtained and then a rough identification is carried out to detect possible damage locations, which are subsequently refined with new damage functions to accurately identify the damage. From a design point of view, it would be useful to evaluate, in a simplified way, the remaining bending stiffness of cracked beam sections or segments. Hence, an RC damage model based on a static mechanism is proposed to estimate the remnant stiffness of a cracked RC beam segment. The damage model is based on the assumption that the damage effect spreads over a region and the stiffness in the segment changes linearly. Furthermore, the stiffness reduction evaluated using this damage model is compared with the FE updating result. It is shown that the proposed bidimensional damage function is useful in producing a well-conditioned optimization problem and the aforementioned damage model can be used for an approximate stiffness estimation of a cracked beam segment.
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Hyper-Book: A Formal Model for Electronic Books.
ERIC Educational Resources Information Center
Catenazzi, Nadia; Sommaruga, Lorenzo
1994-01-01
Presents a model for electronic books based on the paper book metaphor. Discussion includes how the book evolves under the effects of its functional components; the use and impact of the model for organizing and presenting electronic documents in the context of electronic publishing; and the possible applications of a system based on the model.…
Building a reference functional model for EHR systems.
Sumita, Yuki; Takata, Mami; Ishitsuka, Keiju; Tominaga, Yasuyuki; Ohe, Kazuhiko
2007-09-01
Our aim was to develop a reference functional model for electric health record systems (RFM). Such a RFM is built from functions using functional descriptive elements (FDEs) and represents the static relationships between them. This paper presents a new format for describing electric health record (EHR) system functions. Questionnaire and field interview survey was conducted in five hospitals in Japan and one in the USA, to collect data on EHR system functions. Based on survey results, a reference functional list (RFL) was created, in which each EHR system function was listed and divided into 13 FDE types. By analyzing the RFL, we built the meta-functional model and the functional model using UML class diagrams. The former defines language for expressing the functional model, while the latter represents functions, FDEs and their static relationships. A total of 385 functions were represented in the RFL. Six patterns were found for the relationships between functions. The meta-functional model was created as a new format for describing functions. Examples of the functional model, which included the six patterns in the relationships between functions and 11 verbs, were created. We present the meta-functional model, which is a new description format for the functional structure and relationships. Although a more detailed description is required to apply the RFM to the semiautomatic generation of functional specification documents, our RFM can visualize functional structures and functional relationships, classify functions using multiple axes and identify the similarities and differences between functions. The RFM will promote not only the standardization of EHR systems, but also communications between system developers and healthcare providers in the EHR system-design processes. 2006 Elsevier Ireland Ltd
Ferrada, Evandro; Vergara, Ismael A; Melo, Francisco
2007-01-01
The correct discrimination between native and near-native protein conformations is essential for achieving accurate computer-based protein structure prediction. However, this has proven to be a difficult task, since currently available physical energy functions, empirical potentials and statistical scoring functions are still limited in achieving this goal consistently. In this work, we assess and compare the ability of different full atom knowledge-based potentials to discriminate between native protein structures and near-native protein conformations generated by comparative modeling. Using a benchmark of 152 near-native protein models and their corresponding native structures that encompass several different folds, we demonstrate that the incorporation of close non-bonded pairwise atom terms improves the discriminating power of the empirical potentials. Since the direct and unbiased derivation of close non-bonded terms from current experimental data is not possible, we obtained and used those terms from the corresponding pseudo-energy functions of a non-local knowledge-based potential. It is shown that this methodology significantly improves the discrimination between native and near-native protein conformations, suggesting that a proper description of close non-bonded terms is important to achieve a more complete and accurate description of native protein conformations. Some external knowledge-based energy functions that are widely used in model assessment performed poorly, indicating that the benchmark of models and the specific discrimination task tested in this work constitutes a difficult challenge.
Refinement of protein termini in template-based modeling using conformational space annealing.
Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung
2011-09-01
The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable or superior to the best server methods tested in CASP8. The success of the current approach suggests that similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring method can bring on staircase artifacts in the homogenous region of the latent images recovered from the degraded images while a wavelet/frame-based image deblurring method will lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame and the TV-based methods may complement each other, which are verified by theoretical analysis and experimental results. To further improve the quality of the latent images, nonconvex penalty function is used to be the regularization terms of the model, which may induce a stronger sparse solution and will more accurately estimate the relative large gradient or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter to the nonconvex penalty function, the subproblem that splits by the alternative direction method of multipliers algorithm from the proposed model can be guaranteed to be a convex optimization problem; hence, each subproblem can converge to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems where the designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art model and methods.
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
REVIEWS OF TOPICAL PROBLEMS: Nonlinear dynamics of the brain: emotion and cognition
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail I.; Muezzinoglu, M. K.
2010-07-01
Experimental investigations of neural system functioning and brain activity are standardly based on the assumption that perceptions, emotions, and cognitive functions can be understood by analyzing steady-state neural processes and static tomographic snapshots. The new approaches discussed in this review are based on the analysis of transient processes and metastable states. Transient dynamics is characterized by two basic properties, structural stability and information sensitivity. The ideas and methods that we discuss provide an explanation for the occurrence of and successive transitions between metastable states observed in experiments, and offer new approaches to behavior analysis. Models of the emotional and cognitive functions of the brain are suggested. The mathematical object that represents the observed transient brain processes in the phase space of the model is a structurally stable heteroclinic channel. The possibility of using the suggested models to construct a quantitative theory of some emotional and cognitive functions is illustrated.
Web-based applications for building, managing and analysing kinetic models of biological systems.
Lee, Dong-Yup; Saha, Rajib; Yusufi, Faraaz Noor Khan; Park, Wonjun; Karimi, Iftekhar A
2009-01-01
Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. The modelling and concomitant simulation render it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists/bioengineers/bioinformaticians to develop new tools and applications, allowing non-experts to easily conduct such modelling and analysis. However, among a multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.
What We Know About the Brain Structure-Function Relationship.
Batista-García-Ramó, Karla; Fernández-Verdecia, Caridad Ivette
2018-04-18
How the human brain works is still a question, as is its implication with brain architecture: the non-trivial structure–function relationship. The main hypothesis is that the anatomic architecture conditions, but does not determine, the neural network dynamic. The functional connectivity cannot be explained only considering the anatomical substrate. This involves complex and controversial aspects of the neuroscience field and that the methods and methodologies to obtain structural and functional connectivity are not always rigorously applied. The goal of the present article is to discuss about the progress made to elucidate the structure–function relationship of the Central Nervous System, particularly at the brain level, based on results from human and animal studies. The current novel systems and neuroimaging techniques with high resolutive physio-structural capacity have brought about the development of an integral framework of different structural and morphometric tools such as image processing, computational modeling and graph theory. Different laboratories have contributed with in vivo, in vitro and computational/mathematical models to study the intrinsic neural activity patterns based on anatomical connections. We conclude that multi-modal techniques of neuroimaging are required such as an improvement on methodologies for obtaining structural and functional connectivity. Even though simulations of the intrinsic neural activity based on anatomical connectivity can reproduce much of the observed patterns of empirical functional connectivity, future models should be multifactorial to elucidate multi-scale relationships and to infer disorder mechanisms.
ERIC Educational Resources Information Center
Aldosari, Mubarak S.
2016-01-01
This study conducted an in-depth analysis of the efficacy of the Decision Model in the development of function-based treatments for disruptive behaviors in four toddlers with disabilities aged from 26 to 34 months in inclusive toddler classrooms. The research was conducted in three parts. In Part 1, a functional behavioral assessment was conducted…
Wildlife tradeoffs based on landscape models of habitat preference
Loehle, C.; Mitchell, M.S.; White, M.
2000-01-01
Wildlife tradeoffs based on landscape models of habitat preference were presented. Multiscale logistic regression models were used and based on these models a spatial optimization technique was utilized to generate optimal maps. The tradeoffs were analyzed by gradually increasing the weighting on a single species in the objective function over a series of simulations. Results indicated that efficiency of habitat management for species diversity could be maximized for small landscapes by incorporating spatial context.
Analysis and control of the METC fluid-bed gasifier. Quarterly report, October 1994--January 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farell, A.E.; Reddy, S.
1995-03-01
This document summarizes work performed for the period 10/1/94 to 2/1/95. The initial phase of the work focuses on developing a simple transfer function model of the Fluidized Bed Gasifier (FBG). This transfer function model will be developed based purely on the gasifier responses to step changes in gasifier inputs (including reactor air, convey air, cone nitrogen, FBG pressure, and coal feedrate). This transfer function model will represent a linear, dynamic model that is valid near the operating point at which the data was taken. In addition, a similar transfer function model will be developed using MGAS in order tomore » assess MGAS for use as a model of the FBG for control systems analysis.« less
Oosting, Ellen; Hoogeboom, Thomas J; Appelman-de Vries, Suzan A; Swets, Adam; Dronkers, Jaap J; van Meeteren, Nico L U
2016-01-01
The aim of this study was to evaluate the value of conventional factors, the Risk Assessment and Predictor Tool (RAPT) and performance-based functional tests as predictors of delayed recovery after total hip arthroplasty (THA). A prospective cohort study in a regional hospital in the Netherlands with 315 patients was attending for THA in 2012. The dependent variable recovery of function was assessed with the Modified Iowa Levels of Assistance scale. Delayed recovery was defined as taking more than 3 days to walk independently. Independent variables were age, sex, BMI, Charnley score, RAPT score and scores for four performance-based tests [2-minute walk test, timed up and go test (TUG), 10-meter walking test (10 mW) and hand grip strength]. Regression analysis with all variables identified older age (>70 years), Charnley score C, slow walking speed (10 mW >10.0 s) and poor functional mobility (TUG >10.5 s) as the best predictors of delayed recovery of function. This model (AUC 0.85, 95% CI 0.79-0.91) performed better than a model with conventional factors and RAPT scores, and significantly better (p = 0.04) than a model with only conventional factors (AUC 0.81, 95% CI 0.74-0.87). The combination of performance-based tests and conventional factors predicted inpatient functional recovery after THA. Two simple functional performance-based tests have a significant added value to a more conventional screening with age and comorbidities to predict recovery of functioning immediately after total hip surgery. Patients over 70 years old, with comorbidities, with a TUG score >10.5 s and a walking speed >1.0 m/s are at risk for delayed recovery of functioning. Those high risk patients need an accurate discharge plan and could benefit from targeted pre- and postoperative therapeutic exercise programs.
A well-balanced meshless tsunami propagation and inundation model
NASA Astrophysics Data System (ADS)
Brecht, Rüdiger; Bihlo, Alexander; MacLachlan, Scott; Behrens, Jörn
2018-05-01
We present a novel meshless tsunami propagation and inundation model. We discretize the nonlinear shallow-water equations using a well-balanced scheme relying on radial basis function based finite differences. For the inundation model, radial basis functions are used to extrapolate the dry region from nearby wet points. Numerical results against standard one- and two-dimensional benchmarks are presented.
Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers
2015-01-01
We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...
Prediction of spectral acceleration response ordinates based on PGA attenuation
Graizer, V.; Kalkan, E.
2009-01-01
Developed herein is a new peak ground acceleration (PGA)-based predictive model for 5% damped pseudospectral acceleration (SA) ordinates of free-field horizontal component of ground motion from shallow-crustal earthquakes. The predictive model of ground motion spectral shape (i.e., normalized spectrum) is generated as a continuous function of few parameters. The proposed model eliminates the classical exhausted matrix of estimator coefficients, and provides significant ease in its implementation. It is structured on the Next Generation Attenuation (NGA) database with a number of additions from recent Californian events including 2003 San Simeon and 2004 Parkfield earthquakes. A unique feature of the model is its new functional form explicitly integrating PGA as a scaling factor. The spectral shape model is parameterized within an approximation function using moment magnitude, closest distance to the fault (fault distance) and VS30 (average shear-wave velocity in the upper 30 m) as independent variables. Mean values of its estimator coefficients were computed by fitting an approximation function to spectral shape of each record using robust nonlinear optimization. Proposed spectral shape model is independent of the PGA attenuation, allowing utilization of various PGA attenuation relations to estimate the response spectrum of earthquake recordings.
Electro-thermal battery model identification for automotive applications
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.
This paper describes a model identification procedure for identifying an electro-thermal model of lithium ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on the state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid inside a large range of temperatures and state-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multiple step genetic algorithm based optimization procedure designed for large scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium ion iron-phosphate battery.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo methods, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
A neural network model for transference and repetition compulsion based on pattern completion.
Javanbakht, Arash; Ragan, Charles L
2008-01-01
In recent years because of the fascinating growth of the body of neuroscientific knowledge, psychoanalytic scientists have worked on models for the neurological substrates of key psychoanalytic concepts. Transference is an important example. In this article, the psychological process of transference is described, employing the neurological function of pattern completion in hippocampal and thalamo-cortical pathways. Similarly, repetition compulsion is seen as another type of such neurological function; however, it is understood as an attempt for mastery of the unknown, rather than simply for mastery of past experiences and perceptions. Based on this suggested model of neurological function, the myth of the psychoanalyst as blank screen is seen as impossible and ineffective, based on neurofunctional understandings of neuropsychological process. The mutative effect of psychoanalytic therapy, correcting patterns of pathological relatedness, is described briefly from conscious and unconscious perspectives. While cognitive understanding (insight) helps to modify transferentially restored, maladaptive patterns of relatedness, the development of more adaptive patterns is also contingent upon an affective experience (working through), which alters the neurological substrates of unconscious, pathological affective patterns and their neurological functional correlates.
Tawhai, Merryn H.; Clark, Alys R.; Burrowes, Kelly S.
2011-01-01
Biophysically-based computational models provide a tool for integrating and explaining experimental data, observations, and hypotheses. Computational models of the pulmonary circulation have evolved from minimal and efficient constructs that have been used to study individual mechanisms that contribute to lung perfusion, to sophisticated multi-scale and -physics structure-based models that predict integrated structure-function relationships within a heterogeneous organ. This review considers the utility of computational models in providing new insights into the function of the pulmonary circulation, and their application in clinically motivated studies. We review mathematical and computational models of the pulmonary circulation based on their application; we begin with models that seek to answer questions in basic science and physiology and progress to models that aim to have clinical application. In looking forward, we discuss the relative merits and clinical relevance of computational models: what important features are still lacking; and how these models may ultimately be applied to further increasing our understanding of the mechanisms occurring in disease of the pulmonary circulation. PMID:22034608
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
We describe an integrable model, related to the Gaudin magnet, and its relation to the matrix model of Brézin, Itzykson, Parisi and Zuber. Relation is based on Bethe ansatz for the integrable model and its interpretation using orthogonal polynomials and saddle point approximation. Large-N limit of the matrix model corresponds to the thermodynamic limit of the integrable system. In this limit (functional) Bethe ansatz is the same as the generating function for correlators of the matrix models.
Report of the LSPI/NASA Workshop on Lunar Base Methodology Development
NASA Technical Reports Server (NTRS)
Nozette, Stewart; Roberts, Barney
1985-01-01
Groundwork was laid for computer models which will assist in the design of a manned lunar base. The models, herein described, will provide the following functions for the successful conclusion of that task: strategic planning; sensitivity analyses; impact analyses; and documentation. Topics addressed include: upper level model description; interrelationship matrix; user community; model features; model descriptions; system implementation; model management; and plans for future action.
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species were lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint withmore » the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it constantly leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.« less
Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min
2016-12-20
Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpieces, the TIF has a nonlinear removal behavior, which will cause a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on the finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.
A new CFD based non-invasive method for functional diagnosis of coronary stenosis.
Xie, Xinzhou; Zheng, Minwen; Wen, Didi; Li, Yabing; Xie, Songyun
2018-03-22
Accurate functional diagnosis of coronary stenosis is vital for decision making in coronary revascularization. With recent advances in computational fluid dynamics (CFD), fractional flow reserve (FFR) can be derived non-invasively from coronary computed tomography angiography images (FFR CT ) for functional measurement of stenosis. However, the accuracy of FFR CT is limited due to the approximate modeling approach of maximal hyperemia conditions. To overcome this problem, a new CFD based non-invasive method is proposed. Instead of modeling maximal hyperemia condition, a series of boundary conditions are specified and those simulated results are combined to provide a pressure-flow curve for a stenosis. Then, functional diagnosis of stenosis is assessed based on parameters derived from the obtained pressure-flow curve. The proposed method is applied to both idealized and patient-specific models, and validated with invasive FFR in six patients. Results show that additional hemodynamic information about the flow resistances of a stenosis is provided, which cannot be directly obtained from anatomy information. Parameters derived from the simulated pressure-flow curve show a linear and significant correlations with invasive FFR (r > 0.95, P < 0.05). The proposed method can assess flow resistances by the pressure-flow curve derived parameters without modeling of maximal hyperemia condition, which is a new promising approach for non-invasive functional assessment of coronary stenosis.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation engineering systems possess significant computational challenges. This paper explores the possibility of using correlated function expansion based metamodelling approach when uncertain system parameters are modeled using Fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space RM) or may be infinite-dimensional as in the function space CM[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial Finite Element software. Modal analysis of a simplified aircraft wing with Fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
Virial Coefficients for the Liquid Argon
NASA Astrophysics Data System (ADS)
Korth, Micheal; Kim, Saesun
2014-03-01
We begin with a geometric model of hard colliding spheres and calculate probability densities in an iterative sequence of calculations that lead to the pair correlation function. The model is based on a kinetic theory approach developed by Shinomoto, to which we added an interatomic potential for argon based on the model from Aziz. From values of the pair correlation function at various values of density, we were able to find viral coefficients of liquid argon. The low order coefficients are in good agreement with theoretical hard sphere coefficients, but appropriate data for argon to which these results might be compared is difficult to find.
From Connectivity Models to Region Labels: Identifying Foci of a Neurological Disorder
Venkataraman, Archana; Kubicki, Marek; Golland, Polina
2014-01-01
We propose a novel approach to identify the foci of a neurological disorder based on anatomical and functional connectivity information. Specifically, we formulate a generative model that characterizes the network of abnormal functional connectivity emanating from the affected foci. This allows us to aggregate pairwise connectivity changes into a region-based representation of the disease. We employ the variational expectation-maximization algorithm to fit the model and subsequently identify both the afflicted regions and the differences in connectivity induced by the disorder. We demonstrate our method on a population study of schizophrenia. PMID:23864168
Corrected confidence bands for functional data using principal components.
Goldsmith, J; Greven, S; Crainiceanu, C
2013-03-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
Corrected Confidence Bands for Functional Data Using Principal Components
Goldsmith, J.; Greven, S.; Crainiceanu, C.
2014-01-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003
Computational neuroanatomy: ontology-based representation of neural components and connectivity.
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-02-05
A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewsuk, Kevin Gregory; Arguello, Jose Guadalupe, Jr.; Reiterer, Markus W.
2006-02-01
The ease and ability to predict sintering shrinkage and densification with the Skorohod-Olevsky viscous sintering (SOVS) model within a finite-element (FE) code have been improved with the use of an Arrhenius-type viscosity function. The need for a better viscosity function was identified by evaluating SOVS model predictions made using a previously published polynomial viscosity function. Predictions made using the original, polynomial viscosity function do not accurately reflect experimentally observed sintering behavior. To more easily and better predict sintering behavior using FE simulations, a thermally activated viscosity function based on creep theory was used with the SOVS model. In comparison withmore » the polynomial viscosity function, SOVS model predictions made using the Arrhenius-type viscosity function are more representative of experimentally observed viscosity and sintering behavior. Additionally, the effects of changes in heating rate on densification can easily be predicted with the Arrhenius-type viscosity function. Another attribute of the Arrhenius-type viscosity function is that it provides the potential to link different sintering models. For example, the apparent activation energy, Q, for densification used in the construction of the master sintering curve for a low-temperature cofire ceramic dielectric has been used as the apparent activation energy for material flow in the Arrhenius-type viscosity function to predict heating rate-dependent sintering behavior using the SOVS model.« less
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses the basic survival model estimates to obtain the average predictive value of lamp failure time. This estimate is for the parametric model, General Composite Hazard Level Model. The random time variable model used is the exponential distribution model, as the basis, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. To estimate this model is done by estimating model parameters, through the construction of survival function and empirical cumulative function. The model obtained, will then be used to predict the average failure time of the model, for the type of lamp. By grouping the data into several intervals and the average value of failure at each interval, then calculate the average failure time of a model based on each interval, the p value obtained from the tes result is 0.3296.
Warren, Jeffrey M; Hanson, Paul J; Iversen, Colleen M; Kumar, Jitendra; Walker, Anthony P; Wullschleger, Stan D
2015-01-01
There is wide breadth of root function within ecosystems that should be considered when modeling the terrestrial biosphere. Root structure and function are closely associated with control of plant water and nutrient uptake from the soil, plant carbon (C) assimilation, partitioning and release to the soils, and control of biogeochemical cycles through interactions within the rhizosphere. Root function is extremely dynamic and dependent on internal plant signals, root traits and morphology, and the physical, chemical and biotic soil environment. While plant roots have significant structural and functional plasticity to changing environmental conditions, their dynamics are noticeably absent from the land component of process-based Earth system models used to simulate global biogeochemical cycling. Their dynamic representation in large-scale models should improve model veracity. Here, we describe current root inclusion in models across scales, ranging from mechanistic processes of single roots to parameterized root processes operating at the landscape scale. With this foundation we discuss how existing and future root functional knowledge, new data compilation efforts, and novel modeling platforms can be leveraged to enhance root functionality in large-scale terrestrial biosphere models by improving parameterization within models, and introducing new components such as dynamic root distribution and root functional traits linked to resource extraction. No claim to original US Government works. New Phytologist © 2014 New Phytologist Trust.
Thermal form-factor approach to dynamical correlation functions of integrable lattice models
NASA Astrophysics Data System (ADS)
Göhmann, Frank; Karbach, Michael; Klümper, Andreas; Kozlowski, Karol K.; Suzuki, Junji
2017-11-01
We propose a method for calculating dynamical correlation functions at finite temperature in integrable lattice models of Yang-Baxter type. The method is based on an expansion of the correlation functions as a series over matrix elements of a time-dependent quantum transfer matrix rather than the Hamiltonian. In the infinite Trotter-number limit the matrix elements become time independent and turn into the thermal form factors studied previously in the context of static correlation functions. We make this explicit with the example of the XXZ model. We show how the form factors can be summed utilizing certain auxiliary functions solving finite sets of nonlinear integral equations. The case of the XX model is worked out in more detail leading to a novel form-factor series representation of the dynamical transverse two-point function.
Compaction-Based Deformable Terrain Model as an Interface for Real-Time Vehicle Dynamics Simulations
2013-04-16
to vehicular loads, and the resulting visco-elastic-plastic stress/strain on the affected soil volume. Pedo transfer functions allow for the...resulting visco-elastic-plastic stress/strain on the affected soil volume. Pedo transfer functions allow for the calculation of the soil mechanics model
We compared two regression models, which are based on the Weibull and probit functions, for the analysis of pesticide toxicity data from laboratory studies on Illinois crop and native plant species. Both mathematical models are continuous, differentiable, strictly positive, and...
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis
2014-08-01
The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation. It confirms that the SLW model is equivalent to the FSK model in the k-distribution function form. The numerical implementation of the SLW relies on a somewhat arbitrary discretization of the absorption cross section whereas the FSK model finds the spectrally integrated intensity by integration over the smoothly varying cumulative-k distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient as a fewer number of gray gases is required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the different efficiency of these two methods. The FSK model is found more accurate than the SLW model in radiation transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas due to its explicit treatment of ‘clear gas.’
NASA Astrophysics Data System (ADS)
Lye, Ribin; Tan, James Peng Lung; Cheong, Siew Ann
2012-11-01
We describe a bottom-up framework, based on the identification of appropriate order parameters and determination of phase diagrams, for understanding progressively refined agent-based models and simulations of financial markets. We illustrate this framework by starting with a deterministic toy model, whereby N independent traders buy and sell M stocks through an order book that acts as a clearing house. The price of a stock increases whenever it is bought and decreases whenever it is sold. Price changes are updated by the order book before the next transaction takes place. In this deterministic model, all traders based their buy decisions on a call utility function, and all their sell decisions on a put utility function. We then make the agent-based model more realistic, by either having a fraction fb of traders buy a random stock on offer, or a fraction fs of traders sell a random stock in their portfolio. Based on our simulations, we find that it is possible to identify useful order parameters from the steady-state price distributions of all three models. Using these order parameters as a guide, we find three phases: (i) the dead market; (ii) the boom market; and (iii) the jammed market in the phase diagram of the deterministic model. Comparing the phase diagrams of the stochastic models against that of the deterministic model, we realize that the primary effect of stochasticity is to eliminate the dead market phase.
Effective Biot theory and its generalization to poroviscoelastic models
NASA Astrophysics Data System (ADS)
Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark
2018-02-01
A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model of which the moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed, which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.
A Hermite-based lattice Boltzmann model with artificial viscosity for compressible viscous flows
NASA Astrophysics Data System (ADS)
Qiu, Ruofan; Chen, Rongqian; Zhu, Chenxiang; You, Yancheng
2018-05-01
A lattice Boltzmann model on Hermite basis for compressible viscous flows is presented in this paper. The model is developed in the framework of double-distribution-function approach, which has adjustable specific-heat ratio and Prandtl number. It contains a density distribution function for the flow field and a total energy distribution function for the temperature field. The equilibrium distribution function is determined by Hermite expansion, and the D3Q27 and D3Q39 three-dimensional (3D) discrete velocity models are used, in which the discrete velocity model can be replaced easily. Moreover, an artificial viscosity is introduced to enhance the model for capturing shock waves. The model is tested through several cases of compressible flows, including 3D supersonic viscous flows with boundary layer. The effect of artificial viscosity is estimated. Besides, D3Q27 and D3Q39 models are further compared in the present platform.
Local density approximation in site-occupation embedding theory
NASA Astrophysics Data System (ADS)
Senjean, Bruno; Tsuchiizu, Masahisa; Robert, Vincent; Fromager, Emmanuel
2017-01-01
Site-occupation embedding theory (SOET) is a density functional theory (DFT)-based method which aims at modelling strongly correlated electrons. It is in principle exact and applicable to model and quantum chemical Hamiltonians. The theory is presented here for the Hubbard Hamiltonian. In contrast to conventional DFT approaches, the site (or orbital) occupations are deduced in SOET from a partially interacting system consisting of one (or more) impurity site(s) and non-interacting bath sites. The correlation energy of the bath is then treated implicitly by means of a site-occupation functional. In this work, we propose a simple impurity-occupation functional approximation based on the two-level (2L) Hubbard model which is referred to as two-level impurity local density approximation (2L-ILDA). Results obtained on a prototypical uniform eight-site Hubbard ring are promising. The extension of the method to larger systems and more sophisticated model Hamiltonians is currently in progress.
Functional networks inference from rule-based machine learning models.
Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume
2016-01-01
Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables, that often are different/complementary to the similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes, that genes used together within a rule-based machine learning model to classify the samples, might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in a context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The implementation of our network inference protocol is available at: http://ico2s.org/software/funel.html.
Introduction of the Notion of Differential Equations by Modelling Based Teaching
ERIC Educational Resources Information Center
Budinski, Natalija; Takaci, Djurdjica
2011-01-01
This paper proposes modelling based learning as a tool for learning and teaching mathematics. The example of modelling real world problems leading to the exponential function as the solution of differential equations is described, as well as the observations about students' activities during the process. The students were acquainted with the…
Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model
NASA Astrophysics Data System (ADS)
Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus
2017-12-01
The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s Model into mathematical model. Voice of customer data in QFD was collected using questionnaire and the questionnaire was developed based on Kano’s model. Then the operational research methodology was applied to build the objective function and constraints in the mathematical model. The relationship between voice of customer and engineering characteristics was modelled using linier regression model. Output of the mathematical model would be detail of engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction as well. Result of this model is 62% .The major contribution of this research is to implement the existing mathematical model to integrate QFD and Kano’s Model in the case study of shoe cabinet.
Shahaf, Goded; Pratt, Hillel
2013-01-01
In this work we demonstrate the principles of a systematic modeling approach of the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights, which emerge from rather accepted assumptions regarding neuronal representation. We show that harnessing of even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data - the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function, as well as with comprehensive relevant ERP data analysis. In-fact we show that from the model-based spatiotemporal segregation of the processes, it is possible to derive simple and yet effective and theory-based EEG markers differentiating normal and ADHD subjects. We summarize by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.
Finite element analysis of left ventricle during cardiac cycles in viscoelasticity.
Shen, Jing Jin; Xu, Feng Yu; Yang, Wen An
2016-08-01
To investigate the effect of myocardial viscoeslasticity on heart function, this paper presents a finite element model based on a hyper-viscoelastic model for the passive myocardium and Hill's three-element model for the active contraction. The hyper-viscoelastic model considers the myocardium microstructure, while the active model is phenomenologically based on the combination of Hill's equation for the steady tetanized contraction and the specific time-length-force property of the myocardial muscle. To validate the finite element model, the end-diastole strains and the end-systole strain predicted by the model are compared with the experimental values in the literature. It is found that the proposed model not only can estimate well the pumping function of the heart, but also predicts the transverse shear strains. The finite element model is also applied to analyze the influence of viscoelasticity on the residual stresses in the myocardium. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Grelot, Frédéric; Agenais, Anne-Laurence; Brémond, Pauline
2015-04-01
In France, since 2011, it is mandatory for local communities to conduct cost-benefit analysis (CBA) of their flood management projects, to make them eligible for financial support from the State. Meanwhile, as a support, the French Ministry in charge of Environment proposed a methodology to fulfill CBA. Like for many other countries, this methodology is based on the estimation of flood damage. However, existing models to estimate flood damage were judged not convenient for a national-wide use. As a consequence, the French Ministry in charge of Environment launched studies to develop damage models for different sectors, such as: residential sector, public infrastructures, agricultural sector, and commercial and industrial sector. In this presentation, we aim at presenting and discussing methodological choices of those damage models. They all share the same principle: no sufficient data from past events were available to build damage models on a statistical analysis, so modeling was based on expert knowledge. We will focus on the model built for agricultural activities and more precisely for agricultural lands. This model was based on feedback from 30 agricultural experts who experienced floods in their geographical areas. They were selected to have a representative experience of crops and flood conditions in France. The model is composed of: (i) damaging functions, which reveal physiological vulnerability of crops, (ii) action functions, which correspond to farmers' decision rules for carrying on crops after a flood, and (iii) economic agricultural data, which correspond to featured characteristics of crops in the geographical area where the flood management project studied takes place. The two first components are generic and the third one is specific to the area studied. It is, thus, possible to produce flood damage functions adapted to different agronomic and geographical contexts. In the end, the model was applied to obtain a pool of damage functions giving damage in euros by hectare for 14 agricultural lands categories. As a conclusion, we will discuss the validation step of the model. Although the model was validated by experts, we analyse how it could gain insight from comparison with past events.
NASA Astrophysics Data System (ADS)
Grelot, Frédéric; Agenais, Anne-Laurence; Brémond, Pauline
2014-05-01
In France, since 2011, it is mandatory for local communities to conduct cost-benefit analysis (CBA) of their flood management projects, to make them eligible for financial support from the State. Meanwhile, as a support, the French Ministry in charge of Environment proposed a methodology to fulfill CBA. Like for many other countries, this methodology is based on the estimation of flood damage. Howerver, existing models to estimate flood damage were judged not convenient for a national-wide use. As a consequence, the French Ministry in charge of Environment launched studies to develop damage models for different sectors, such as: residential sector, public infrastructures, agricultural sector, and commercial and industrial sector. In this presentation, we aim at presenting and discussing methodological choices of those damage models. They all share the same principle: no sufficient data from past events were available to build damage models on a statistical analysis, so modeling was based on expert knowledge. We will focus on the model built for agricultural activities and more precisely for agricultural lands. This model was based on feedback from 30 agricultural experts who experienced floods in their geographical areas. They were selected to have a representative experience of crops and flood conditions in France. The model is composed of: (i) damaging functions, which reveal physiological vulnerability of crops, (ii) action functions, which correspond to farmers' decision rules for carrying on crops after a flood, and (iii) economic agricultural data, which correspond to featured characteristics of crops in the geographical area where the flood management project studied takes place. The two first components are generic and the third one is specific to the area studied. It is, thus, possible to produce flood damage functions adapted to different agronomic and geographical contexts. In the end, the model was applied to obtain a pool of damage functions giving damage in euros by hectare for 14 agricultural lands categories. As a conclusion, we will discuss the validation step of the model. Although the model was validated by experts, we analyse how it could gain insight from comparison with past events.
Integrated Modeling for Watershed Ecosystem Services Assessment and Forecasting
Regional scale watershed management decisions must be informed by the science-based relationship between anthropogenic activities on the landscape and the change in ecosystem structure, function, and services that occur as a result. We applied process-based models that represent...
Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.
Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun
2017-10-01
To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.
A Model Independent S/W Framework for Search-Based Software Testing
Baik, Jongmoon
2014-01-01
In Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of a model has changed from one to another, all functions of a search technique must be reimplemented because the types of models are different even if the same search technique has been applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce redundant works. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves the productivity by about 50% when changing the type of a model. PMID:25302314
Towards aspect-oriented functional--structural plant modelling.
Cieslak, Mikolaj; Seleznyova, Alla N; Prusinkiewicz, Przemyslaw; Hanan, Jim
2011-10-01
Functional-structural plant models (FSPMs) are used to integrate knowledge and test hypotheses of plant behaviour, and to aid in the development of decision support systems. A significant amount of effort is being put into providing a sound methodology for building them. Standard techniques, such as procedural or object-oriented programming, are not suited for clearly separating aspects of plant function that criss-cross between different components of plant structure, which makes it difficult to reuse and share their implementations. The aim of this paper is to present an aspect-oriented programming approach that helps to overcome this difficulty. The L-system-based plant modelling language L+C was used to develop an aspect-oriented approach to plant modelling based on multi-modules. Each element of the plant structure was represented by a sequence of L-system modules (rather than a single module), with each module representing an aspect of the element's function. Separate sets of productions were used for modelling each aspect, with context-sensitive rules facilitated by local lists of modules to consider/ignore. Aspect weaving or communication between aspects was made possible through the use of pseudo-L-systems, where the strict-predecessor of a production rule was specified as a multi-module. The new approach was used to integrate previously modelled aspects of carbon dynamics, apical dominance and biomechanics with a model of a developing kiwifruit shoot. These aspects were specified independently and their implementation was based on source code provided by the original authors without major changes. This new aspect-oriented approach to plant modelling is well suited for studying complex phenomena in plant science, because it can be used to integrate separate models of individual aspects of plant development and function, both previously constructed and new, into clearly organized, comprehensive FSPMs. In a future work, this approach could be further extended into an aspect-oriented programming language for FSPMs.
NASA Astrophysics Data System (ADS)
Donndorf, St.; Malz, A.; Kley, J.
2012-04-01
Cross section balancing is a generally accepted method for studying fault zone geometries. We show a method for the construction of structural 3D models of complex fault zones using a combination of gOcad modelling and balanced cross sections. In this work a 3D model of the Schlotheim graben in the Thuringian basin was created from serial, parallel cross sections and existing borehole data. The Thuringian Basin is originally a part of the North German Basin, which was separated from it by the Harz uplift in the Late Cretaceous. It comprises several parallel NW-trending inversion structures. The Schlotheim graben is one example of these inverted graben zones, whose structure poses special challenges to 3D modelling. The fault zone extends 30 km in NW-SE direction and 1 km in NE-SW direction. This project was split into two parts: data management and model building. To manage the fundamental data a central database was created in ESRI's ArcGIS. The development of a scripting interface handles the data exchange between the different steps of modelling. The first step is the pre-processing of the base data in ArcGIS, followed by cross section balancing with Midland Valley's Move software and finally the construction of the 3D model in Paradigm's gOcad. With the specific aim of constructing a 3D model based on cross sections, the functionality of the gOcad software had to be extended. These extensions include pre-processing functions to create a simplified and usable data base for gOcad as well as construction functions to create surfaces based on linearly distributed data and processing functions to create the 3D model from different surfaces. In order to use the model for further geological and hydrological simulations, special requirements apply to the surface properties. The first characteristic of the surfaces should be a quality mesh, which contains triangles with maximized internal angles. To achieve that, an external meshing tool was included in gOcad. The second characteristic is that intersecting lines between two surfaces must be included in both surfaces and share nodes with them. To finish the modelling process 3D balancing was performed to further improve the model quality.
Effects of rewiring strategies on information spreading in complex dynamic networks
NASA Astrophysics Data System (ADS)
Ally, Abdulla F.; Zhang, Ning
2018-04-01
Recent advances in networks and communication services have attracted much interest to understand information spreading in social networks. Consequently, numerous studies have been devoted to provide effective and accurate models for mimicking information spreading. However, knowledge on how to spread information faster and more widely remains a contentious issue. Yet, most existing works are based on static networks which limit the reality of dynamism of entities that participate in information spreading. Using the SIR epidemic model, this study explores and compares effects of two rewiring models (Fermi-Dirac and Linear functions) on information spreading in scale free and small world networks. Our results show that for all the rewiring strategies, the spreading influence replenishes with time but stabilizes in a steady state at later time-steps. This means that information spreading takes-off during the initial spreading steps, after which the spreading prevalence settles toward its equilibrium, with majority of the population having recovered and thus, no longer affecting the spreading. Meanwhile, rewiring strategy based on Fermi-Dirac distribution function in one way or another impedes the spreading process, however, the structure of the networks mimic the spreading, even with a low spreading rate. The worst case can be when the spreading rate is extremely small. The results emphasize that despite a big role of such networks in mimicking the spreading, the role of the parameters cannot be simply ignored. Apparently, the probability of giant degree neighbors being informed grows much faster with the rewiring strategy of linear function compared to that of Fermi-Dirac distribution function. Clearly, rewiring model based on linear function generates the fastest spreading across the networks. Therefore, if we are interested in speeding up the spreading process in stochastic modeling, linear function may play a pivotal role.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through suitable seasonal auto regressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through Box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contain sharp peaks in almost all the years, while it is not true for the minimum temperature series, so both the series are modelled separately. The possible SARIMA model has been chosen based on observing autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmic transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for monthly average maximum and minimum temperature series based on minimum Bayesian information criteria. The model parameters are obtained using maximum-likelihood method with the help of standard error of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through ACF, PACF, IACF, and p values of Ljung-Box test statistic of residuals and using normal diagnostic checking through the kernel and normal density curves of histogram and Q-Q plot. Finally, the forecasting of monthly maximum and minimum temperature patterns of India for the next 3 years has been noticed with the help of selected model.
Mahmood, Zanjbeel; Burton, Cynthia Z; Vella, Lea; Twamley, Elizabeth W
2018-04-13
Neuropsychological abilities may underlie successful performance of everyday functioning and social skills. We aimed to determine the strongest neuropsychological predictors of performance-based functional capacity and social skills performance across the spectrum of severe mental illness (SMI). Unemployed outpatients with SMI (schizophrenia, bipolar disorder, or major depression; n = 151) were administered neuropsychological (expanded MATRICS Consensus Cognitive Battery), functional capacity (UCSD Performance-Based Skills Assessment-Brief; UPSA-B), and social skills (Social Skills Performance Assessment; SSPA) assessments. Bivariate correlations between neuropsychological performance and UPSA-B and SSPA total scores showed that most neuropsychological tests were significantly associated with each performance-based measure. Forward entry stepwise regression analyses were conducted entering education, diagnosis, symptom severity, and neuropsychological performance as predictors of functional capacity and social skills. Diagnosis, working memory, sustained attention, and category and letter fluency emerged as significant predictors of functional capacity, in a model that explained 43% of the variance. Negative symptoms, sustained attention, and letter fluency were significant predictors of social skill performance, in a model explaining 35% of the variance. Functional capacity is positively associated with neuropsychological functioning, but diagnosis remains strongly influential, with mood disorder participants outperforming those with psychosis. Social skill performance appears to be positively associated with sustained attention and verbal fluency regardless of diagnosis; however, negative symptom severity strongly predicts social skills performance. Improving neuropsychological functioning may improve psychosocial functioning in people with SMI. Published by Elsevier Ltd.
Lee, Ho-Joon; Son, Myung Jin; Ahn, Jiwon; Oh, Soo Jin; Lee, Mihee; Kim, Ansoon; Jeung, Yun-Ji; Kim, Han-Gyeul; Won, Misun; Lim, Jung Hwa; Kim, Nam-Soon; Jung, Cho-Rock; Chung, Kyung-Sook
2017-12-01
Current in vitro liver models provide three-dimensional (3-D) microenvironments in combination with tissue engineering technology and can perform more accurate in vivo mimicry than two-dimensional models. However, a human cell-based, functionally mature liver model is still desired, which would provide an alternative to animal experiments and resolve low-prediction issues on species differences. Here, we prepared hybrid hydrogels of varying elasticity and compared them with a normal liver, to develop a more mature liver model that preserves liver properties in vitro. We encapsulated HepaRG cells, either alone or with supporting cells, in a biodegradable hybrid hydrogel. The elastic modulus of the 3D liver dynamically changed during culture due to the combined effects of prolonged degradation of hydrogel and extracellular matrix formation provided by the supporting cells. As a result, when the elastic modulus of the 3D liver model converges close to that of the in vivo liver (≅ 2.3 to 5.9 kPa), both phenotypic and functional maturation of the 3D liver were realized, while hepatic gene expression, albumin secretion, cytochrome p450-3A4 activity, and drug metabolism were enhanced. Finally, the 3D liver model was expanded to applications with embryonic stem cell-derived hepatocytes and primary human hepatocytes, and it supported prolonged hepatocyte survival and functionality in long-term culture. Our model represents critical progress in developing a biomimetic liver system to simulate liver tissue remodeling, and provides a versatile platform in drug development and disease modeling, ranging from physiology to pathology. We provide a functionally improved 3D liver model that recapitulates in vivo liver stiffness. We have experimentally addressed the issues of orchestrated effects of mechanical compliance, controlled matrix formation by stromal cells in conjunction with hepatic differentiation, and functional maturation of hepatocytes in a dynamic 3D microenvironment. Our model represents critical progress in developing a biomimetic liver system to simulate liver tissue remodeling, and provides a versatile platform in drug development and disease modeling, ranging from physiology to pathology. Additionally, recent advances in the stem-cell technologies have made the development of 3D organoid possible, and thus, our study also provides further contribution to the development of physiologically relevant stem-cell-based 3D tissues that provide an elasticity-based predefined biomimetic 3D microenvironment. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Modelling population distribution using remote sensing imagery and location-based data
NASA Astrophysics Data System (ADS)
Song, J.; Prishchepov, A. V.
2017-12-01
Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution and city emergency, even estimate pressure on the environment and human exposure and risks to health. However, most of the researches used census data as the detailed dynamic population distribution are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the function zone level. Firstly, urban functional zones within a city were mapped by high-resolution remote sensing images and POIs. The workflow of functional zones extraction includes five parts: (1) Urban land use classification. (2) Segmenting images in built-up area. (3) Identification of functional segments by POIs. (4) Identification of functional blocks by functional segmentation and weight coefficients. (5) Assessing accuracy by validation points. The result showed as Fig.1. Secondly, we applied ordinary least square and geographically weighted regression to assess spatial nonstationary relationship between light digital number (DN) and population density of sampling points. The two methods were employed to predict the population distribution over the research area. The R²of GWR model were in the order of 0.7 and typically showed significant variations over the region than traditional OLS model. The result showed as Fig.2.Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with light value. The result showed as Fig.3. Results showed: (1) Population density is not linear correlated with light brightness using global model. (2) VIIRS night-time light data could estimate population density integrating functional zones at city level. (3) GWR is a robust model to map population distribution, the adjusted R2 of corresponding GWR models were higher than the optimal OLS models, confirming that GWR models demonstrate better prediction accuracy. So this method provide detailed population density information for microscale citizen studies.
A descriptive model of resting-state networks using Markov chains.
Xie, H; Pal, R; Mitra, S
2016-08-01
Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interests while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of the resting-state networks (RSNs) based on the steady-state distributions and provides a novel angle to investigate the RSFC of multiple functional nodes. This paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that generated steady-state distributions of default mode network have higher consistency across subjects than random nodes from various RSNs.
Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing
2016-01-01
Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for the binary outcome is developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect method to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. Similar with Kulldorff's methods, we adopt Monte Carlo test for the test of significance. Both methods are applied for detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Through a simulation to independent benchmark data, it is indicated that the test statistic based on the Hypergeometric model outweighs Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
GROMOS polarizable charge-on-spring models for liquid urea: COS/U and COS/U2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhixiong; Bachmann, Stephan J.; Gunsteren, Wilfred F. van, E-mail: wfvgn@igc.phys.chem.ethz.ch
2015-03-07
Two one-site polarizable urea models, COS/U and COS/U2, based on the charge-on-spring model are proposed. The models are parametrized against thermodynamic properties of urea-water mixtures in combination with the polarizable COS/G2 and COS/D2 models for liquid water, respectively, and have the same functional form of the inter-atomic interaction function and are based on the same parameter calibration procedure and type of experimental data as used to develop the GROMOS biomolecular force field. Thermodynamic, dielectric, and dynamic properties of urea-water mixtures simulated using the polarizable models are closer to experimental data than using the non-polarizable models. The COS/U and COS/U2 modelsmore » may be used in biomolecular simulations of protein denaturation.« less
Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A
2010-07-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
Advancing Collaboration through Hydrologic Data and Model Sharing
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.
2015-12-01
HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
Shao, J Y; Shu, C; Huang, H B; Chew, Y T
2014-03-01
A free-energy-based phase-field lattice Boltzmann method is proposed in this work to simulate multiphase flows with density contrast. The present method is to improve the Zheng-Shu-Chew (ZSC) model [Zheng, Shu, and Chew, J. Comput. Phys. 218, 353 (2006)] for correct consideration of density contrast in the momentum equation. The original ZSC model uses the particle distribution function in the lattice Boltzmann equation (LBE) for the mean density and momentum, which cannot properly consider the effect of local density variation in the momentum equation. To correctly consider it, the particle distribution function in the LBE must be for the local density and momentum. However, when the LBE of such distribution function is solved, it will encounter a severe numerical instability. To overcome this difficulty, a transformation, which is similar to the one used in the Lee-Lin (LL) model [Lee and Lin, J. Comput. Phys. 206, 16 (2005)] is introduced in this work to change the particle distribution function for the local density and momentum into that for the mean density and momentum. As a result, the present model still uses the particle distribution function for the mean density and momentum, and in the meantime, considers the effect of local density variation in the LBE as a forcing term. Numerical examples demonstrate that both the present model and the LL model can correctly simulate multiphase flows with density contrast, and the present model has an obvious improvement over the ZSC model in terms of solution accuracy. In terms of computational time, the present model is less efficient than the ZSC model, but is much more efficient than the LL model.
Research on Capturing of Customer Requirements Based on Innovation Theory
NASA Astrophysics Data System (ADS)
junwu, Ding; dongtao, Yang; zhenqiang, Bao
To exactly and effectively capture customer requirements information, a new customer requirements capturing modeling method was proposed. Based on the analysis of function requirement models of previous products and the application of technology system evolution laws of the Theory of Innovative Problem Solving (TRIZ), the customer requirements could be evolved from existing product designs, through modifying the functional requirement unit and confirming the direction of evolution design. Finally, a case study was provided to illustrate the feasibility of the proposed approach.
Dissipative transport in superlattices within the Wigner function formalism
Jonasson, O.; Knezevic, I.
2015-07-30
Here, we employ the Wigner function formalism to simulate partially coherent, dissipative electron transport in biased semiconductor superlattices. We introduce a model collision integral with terms that describe energy dissipation, momentum relaxation, and the decay of spatial coherences (localization). Based on a particle-based solution to the Wigner transport equation with the model collision integral, we simulate quantum electronic transport at 10 K in a GaAs/AlGaAs superlattice and accurately reproduce its current density vs field characteristics obtained in experiment.
NASA Astrophysics Data System (ADS)
Laminack, William; Gole, James
2015-12-01
A unique MEMS/NEMS approach is presented for the modeling of a detection platform for mixed gas interactions. Mixed gas analytes interact with nanostructured decorating metal oxide island sites supported on a microporous silicon substrate. The Inverse Hard/Soft acid/base (IHSAB) concept is used to assess a diversity of conductometric responses for mixed gas interactions as a function of these nanostructured metal oxides. The analyte conductometric responses are well represented using a combination diffusion/absorption-based model for multi-gas interactions where a newly developed response absorption isotherm, based on the Fermi distribution function is applied. A further coupling of this model with the IHSAB concept describes the considerations in modeling of multi-gas mixed analyte-interface, and analyte-analyte interactions. Taking into account the molecular electronic interaction of both the analytes with each other and an extrinsic semiconductor interface we demonstrate how the presence of one gas can enhance or diminish the reversible interaction of a second gas with the extrinsic semiconductor interface. These concepts demonstrate important considerations in the array-based formats for multi-gas sensing and its applications.
Meaning-Based Scoring: A Systemic Functional Linguistics Model for Automated Test Tasks
ERIC Educational Resources Information Center
Gleason, Jesse
2014-01-01
Communicative approaches to language teaching that emphasize the importance of speaking (e.g., task-based language teaching) require innovative and evidence-based means of assessing oral language. Nonetheless, research has yet to produce an adequate assessment model for oral language (Chun 2006; Downey et al. 2008). Limited by automatic speech…
A component-based system for agricultural drought monitoring by remote sensing.
Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao
2017-01-01
In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the drought-related software and platform development lag behind the theoretical research. The current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially the models based on multi-dimensional feature space. In view of the above problems, this paper aims at fixing this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provide an efficient and scalable support to agricultural drought monitoring.
A component-based system for agricultural drought monitoring by remote sensing
Yuan, Yanbin; You, Lin; Chen, Chao
2017-01-01
In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the drought-related software and platform development lag behind the theoretical research. The current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially the models based on multi-dimensional feature space. In view of the above problems, this paper aims at fixing this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China’s Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provide an efficient and scalable support to agricultural drought monitoring. PMID:29236700
Wan, Cen; Lees, Jonathan G; Minneci, Federico; Orengo, Christine A; Jones, David T
2017-10-01
Accurate gene or protein function prediction is a key challenge in the post-genome era. Most current methods perform well on molecular function prediction, but struggle to provide useful annotations relating to biological process functions due to the limited power of sequence-based features in that functional domain. In this work, we systematically evaluate the predictive power of temporal transcription expression profiles for protein function prediction in Drosophila melanogaster. Our results show significantly better performance on predicting protein function when transcription expression profile-based features are integrated with sequence-derived features, compared with the sequence-derived features alone. We also observe that the combination of expression-based and sequence-based features leads to further improvement of accuracy on predicting all three domains of gene function. Based on the optimal feature combinations, we then propose a novel multi-classifier-based function prediction method for Drosophila melanogaster proteins, FFPred-fly+. Interpreting our machine learning models also allows us to identify some of the underlying links between biological processes and developmental stages of Drosophila melanogaster.
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
Protein Structure and Function Prediction Using I-TASSER
Yang, Jianyi; Zhang, Yang
2016-01-01
I-TASSER is a hierarchical protocol for automated protein structure prediction and structure-based function annotation. Starting from the amino acid sequence of target proteins, I-TASSER first generates full-length atomic structural models from multiple threading alignments and iterative structural assembly simulations followed by atomic-level structure refinement. The biological functions of the protein, including ligand-binding sites, enzyme commission number, and gene ontology terms, are then inferred from known protein function databases based on sequence and structure profile comparisons. I-TASSER is freely available as both an on-line server and a stand-alone package. This unit describes how to use the I-TASSER protocol to generate structure and function prediction and how to interpret the prediction results, as well as alternative approaches for further improving the I-TASSER modeling quality for distant-homologous and multi-domain protein targets. PMID:26678386
Binomial test statistics using Psi functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o
2007-01-01
For the negative binomial model (probability generating function (p + 1 - pt){sup -k}) a logarithmic derivative is the Psi function difference {psi}(k + x) - {psi}(k); this and its derivatives lead to a test statistic to decide on the validity of a specified model. The test statistic uses a data base so there exists a comparison available between theory and application. Note that the test function is not dominated by outliers. Applications to (i) Fisher's tick data, (ii) accidents data, (iii) Weldon's dice data are included.
NASA Astrophysics Data System (ADS)
Patel, Nirmal; Sultana, Sharmin; Rashid, Tanweer; Krusienski, Dean; Audette, Michel A.
2015-03-01
This paper presents a methodology for the digital formatting of a printed atlas of the brainstem and the delineation of cranial nerves from this digital atlas. It also describes on-going work on the 3D resampling and refinement of the 2D functional regions and nerve contours. In MRI-based anatomical modeling for neurosurgery planning and simulation, the complexity of the functional anatomy entails a digital atlas approach, rather than less descriptive voxel or surface-based approaches. However, there is an insufficiency of descriptive digital atlases, in particular of the brainstem. Our approach proceeds from a series of numbered, contour-based sketches coinciding with slices of the brainstem featuring both closed and open contours. The closed contours coincide with functionally relevant regions, whereby our objective is to fill in each corresponding label, which is analogous to painting numbered regions in a paint-by-numbers kit. Any open contour typically coincides with a cranial nerve. This 2D phase is needed in order to produce densely labeled regions that can be stacked to produce 3D regions, as well as identifying the embedded paths and outer attachment points of cranial nerves. Cranial nerves are modeled using an explicit contour based technique called 1-Simplex. The relevance of cranial nerves modeling of this project is two-fold: i) this atlas will fill a void left by the brain segmentation communities, as no suitable digital atlas of the brainstem exists, and ii) this atlas is necessary to make explicit the attachment points of major nerves (except I and II) having a cranial origin. Keywords: digital atlas, contour models, surface models
Functional requirements of a mathematical model of the heart.
Palladino, Joseph L; Noordergraaf, Abraham
2009-01-01
Functional descriptions of the heart, especially the left ventricle, are often based on the measured variables pressure and ventricular outflow, embodied as a time-varying elastance. The fundamental difficulty of describing the mechanical properties of the heart with a time-varying elastance function that is set a priori is described. As an alternative, a new functional model of the heart is presented, which characterizes the ventricle's contractile state with parameters, rather than variables. Each chamber is treated as a pressure generator that is time and volume dependent. The heart's complex dynamics develop from a single equation based on the formation and relaxation of crossbridge bonds. This equation permits the calculation of ventricular elastance via E(v) = partial differentialp(v)/ partial differentialV(v). This heart model is defined independently from load properties, and ventricular elastance is dynamic and reflects changing numbers of crossbridge bonds. In this paper, the functionality of this new heart model is presented via computed work loops that demonstrate the Frank-Starling mechanism and the effects of preload, the effects of afterload, inotropic changes, and varied heart rate, as well as the interdependence of these effects. Results suggest the origin of the equivalent of Hill's force-velocity relation in the ventricle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, Joshua D.; Beran, Gregory J. O., E-mail: gregory.beran@ucr.edu; Monaco, Stephen
2015-09-14
We assess the quality of fragment-based ab initio isotropic {sup 13}C chemical shift predictions for a collection of 25 molecular crystals with eight different density functionals. We explore the relative performance of cluster, two-body fragment, combined cluster/fragment, and the planewave gauge-including projector augmented wave (GIPAW) models relative to experiment. When electrostatic embedding is employed to capture many-body polarization effects, the simple and computationally inexpensive two-body fragment model predicts both isotropic {sup 13}C chemical shifts and the chemical shielding tensors as well as both cluster models and the GIPAW approach. Unlike the GIPAW approach, hybrid density functionals can be used readilymore » in a fragment model, and all four hybrid functionals tested here (PBE0, B3LYP, B3PW91, and B97-2) predict chemical shifts in noticeably better agreement with experiment than the four generalized gradient approximation (GGA) functionals considered (PBE, OPBE, BLYP, and BP86). A set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided based on these benchmark calculations. Statistical cross-validation procedures are used to demonstrate the robustness of these fits.« less
Graham, James E.; Resnik, Linda; Karmarkar, Amol M.; Deutsch, Anne; Tan, Alai; Al Snih, Soham; Ottenbacher, Kenneth J.
2016-01-01
Background Medicare data from acute hospitals do not contain information on functional status. This lack of information limits the ability to conduct rehabilitation-related health services research. Objective The purpose of this study was to examine the associations between 5 comorbidity indexes derived from acute care claims data and functional status assessed at admission to an inpatient rehabilitation facility (IRF). Comorbidity indexes included tier comorbidity, Functional Comorbidity Index (FCI), Charlson Comorbidity Index, Elixhauser Comorbidity Index, and Hierarchical Condition Category (HCC). Design This was a retrospective cohort study. Methods Medicare beneficiaries with stroke, lower extremity joint replacement, and lower extremity fracture discharged to an IRF in 2011 were studied (N=105,441). Data from the beneficiary summary file, Medicare Provider Analysis and Review (MedPAR) file, and Inpatient Rehabilitation Facility–Patient Assessment Instrument (IRF-PAI) file were linked. Inpatient rehabilitation facility admission functional status was used as a proxy for acute hospital discharge functional status. Separate linear regression models for each impairment group were developed to assess the relationships between the comorbidity indexes and functional status. Base models included age, sex, race/ethnicity, disability, dual eligibility, and length of stay. Subsequent models included individual comorbidity indexes. Values of variance explained (R2) with each comorbidity index were compared. Results Base models explained 7.7% of the variance in motor function ratings for stroke, 3.8% for joint replacement, and 7.3% for fracture. The R2 increased marginally when comorbidity indexes were added to base models for stroke, joint replacement, and fracture: Charlson Comorbidity Index (0.4%, 0.5%, 0.3%), tier comorbidity (0.2%, 0.6%, 0.5%), FCI (0.4%, 1.2%, 1.6%), Elixhauser Comorbidity Index (1.2%, 1.9%, 3.5%), and HCC (2.2%, 2.1%, 2.8%). Limitation Patients from 3 impairment categories were included in the sample. Conclusions The 5 comorbidity indexes contributed little to predicting functional status. The indexes examined were not useful as proxies for functional status in the acute settings studied. PMID:26564253
Theoretical Development of an Orthotropic Elasto-Plastic Generalized Composite Material Model
NASA Technical Reports Server (NTRS)
Goldberg, Robert; Carney, Kelly; DuBois, Paul; Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam; Blankenhorn, Gunther
2014-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within LSDYNA (Livermore Software Technology Corporation), there are several features that have been identified that could improve the predictive capability of a composite model. To address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed and is being implemented into LS-DYNA as MAT_213. A key feature of the improved material model is the use of tabulated stress-strain data in a variety of coordinate directions to fully define the stress-strain response of the material. To date, the model development efforts have focused on creating the plasticity portion of the model. The Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic yield function with a nonassociative flow rule. The coefficients of the yield function, and the stresses to be used in both the yield function and the flow rule, are computed based on the input stress-strain curves using the effective plastic strain as the tracking variable. The coefficients in the flow rule are computed based on the obtained stress-strain data. The developed material model is suitable for implementation within LS-DYNA for use in analyzing the nonlinear response of polymer composites.
Population-based absolute risk estimation with survey data
Kovalchik, Stephanie A.; Pfeiffer, Ruth M.
2013-01-01
Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
NASA Astrophysics Data System (ADS)
Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng
2013-11-01
A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales so that an entropy is associated with the system. All the epidemic details are factored into a single and time-dependent coefficient, the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, that characterizes the width of the distribution function, is uniquely determined through maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with a reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
ERIC Educational Resources Information Center
Jitendra, Asha K.; DuPaul, George J.; Volpe, Robert J.; Tresco, Katy E.; Junod, Rosemary E. Vile; Lutz, J. Gary; Cleary, Kristi S.; Flammer-Rivera, Lizette M.; Manella, Mark C.
2007-01-01
This study evaluated the effectiveness of two consultation-based models for designing academic interventions to enhance the educational functioning of children with attention deficit hyperactivity disorder. Children (N = 167) meeting "Diagnostic and Statistical Manual" (4th ed.--text revision; American Psychiatric Association, 2000) criteria for…
Monte Carlo-based searching as a tool to study carbohydrate structure
USDA-ARS?s Scientific Manuscript database
A torsion angle-based Monte-Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various functions for evaluat...
Modeling of flux, binding and substitution of urea molecules in the urea transporter dvUT.
Zhang, Hai-Tian; Wang, Zhe; Yu, Tao; Sang, Jian-Ping; Zou, Xian-Wu; Zou, Xiaoqin
2017-09-01
Urea transporters (UTs) are transmembrane proteins that transport urea molecules across cell membranes and play a crucial role in urea excretion and water balance. Modeling the functional characteristics of UTs helps us understand how their structures accomplish the functions at the atomic level, and facilitates future therapeutic design targeting the UTs. This study was based on the crystal structure of Desulfovibrio vulgaris urea transporter (dvUT). To model the binding behavior of urea molecules in dvUT, we constructed a cooperative binding model. To model the substitution of urea by the urea analogue N,N'-dimethylurea (DMU) in dvUT, we calculated the occupation probability of DMU along the urea pore and the ratio of the occupation probabilities of DMU at the external (S ext ) and internal (S int ) binding sites, and we established the mutual substitution rule for binding and substitution of urea and DMU. Based on these calculations and modelings, together with the use of the Monte Carlo (MC) method, we further modeled the urea flux in dvUT, equilibrium urea binding to dvUT, and the substitution of urea by DMU in the dvUT. Our modeling results are in good agreement with the existing experimental functional data. Furthermore, the modelings have discovered the microscopic process and mechanisms of those functional characteristics. The methods and the results would help our future understanding of the underlying mechanisms of the diseases associated with impaired UT functions and rational drug design for the treatment of these diseases. Copyright © 2017 Elsevier Inc. All rights reserved.
Loizzo, Joseph J
2016-06-01
Meditation research has begun to clarify the brain effects and mechanisms of contemplative practices while generating a range of typologies and explanatory models to guide further study. This comparative review explores a neglected area relevant to current research: the validity of a traditional central nervous system (CNS) model that coevolved with the practices most studied today and that provides the first comprehensive neural-based typology and mechanistic framework of contemplative practices. The subtle body model, popularly known as the chakra system from Indian yoga, was and is used as a map of CNS function in traditional Indian and Tibetan medicine, neuropsychiatry, and neuropsychology. The study presented here, based on the Nalanda tradition, shows that the subtle body model can be cross-referenced with modern CNS maps and challenges modern brain maps with its embodied network model of CNS function. It also challenges meditation research by: (1) presenting a more rigorous, neural-based typology of contemplative practices; (2) offering a more refined and complete network model of the mechanisms of contemplative practices; and (3) serving as an embodied, interoceptive neurofeedback aid that is more user friendly and complete than current teaching aids for clinical and practical applications of contemplative practice. © 2016 New York Academy of Sciences.
Numerical Analysis of Modeling Based on Improved Elman Neural Network
Jie, Shao
2014-01-01
A modeling based on the improved Elman neural network (IENN) is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL) model, Chebyshev neural network (CNN) model, and basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
A new region-edge based level set model with applications to image segmentation
NASA Astrophysics Data System (ADS)
Zhi, Xuhao; Shen, Hong-Bin
2018-04-01
Level set model has advantages in handling complex shapes and topological changes, and is widely used in image processing tasks. The image segmentation oriented level set models can be grouped into region-based models and edge-based models, both of which have merits and drawbacks. Region-based level set model relies on fitting to color intensity of separated regions, but is not sensitive to edge information. Edge-based level set model evolves by fitting to local gradient information, but can get easily affected by noise. We propose a region-edge based level set model, which considers saliency information into energy function and fuses color intensity with local gradient information. The evolution of the proposed model is implemented by a hierarchical two-stage protocol, and the experimental results show flexible initialization, robust evolution and precise segmentation.
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2015-04-01
A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is a HBV-type model that is specialized for alpine-environments, but has been applied over a wide range of basins all over the world (see: Kling et al., 2014 for an overview). Within the methodology available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer-functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalisation approach was extended in order to allow for a more meta-heuristical handling of the transfer-functions. The two main novelties are: 1. An explicit introduction of constrains into parameter estimation scheme: The constraint scheme replaces invalid parts of the transfer-function-solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller-experience into the parameter estimation. 2. Spline-based transfer-functions: Spline-based functions enable arbitrary forms of transfer-functions: This is of importance since in many cases the general relationship between sub-grid information and parameters are known, but not the form of the transfer-function itself. The contribution presents the results and experiences with the adopted method and the introduced extensions. Simulation are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi: 10.1029/2008WR007327 Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956.
Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik
2018-05-30
The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET did not show a promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent one-hour dynamic FDG-PET/CT scan and had liver biopsy within six weeks. Three models were tested for kinetic analysis: traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), model with population-based dual-blood input function (DBIF), and modified model with optimization-derived DBIF through a joint estimation framework. The three models were compared using Akaike information criterion (AIC), F test and histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation while the other two models did not provide a statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. © 2018 Institute of Physics and Engineering in Medicine.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basismore » of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Xin; Garikapati, Venu M.; You, Daehyun
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basismore » of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.« less
Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G
2010-09-14
Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif
2014-11-01
Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principalmore » components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V{sub 65Gy} was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.« less
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFTbased systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and its linkage to design. As a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed Method of Solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
Conditioning 3D object-based models to dense well data
NASA Astrophysics Data System (ADS)
Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.
2018-06-01
Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
An improved Rosetta pedotransfer function and evaluation in earth system models
NASA Astrophysics Data System (ADS)
Zhang, Y.; Schaap, M. G.
2017-12-01
Soil hydraulic parameters are often difficult and expensive to measure, leading to the pedotransfer functions (PTFs) an alternative to predict those parameters. Rosetta (Schaap et al., 2001, denoted as Rosetta1) are widely used PTFs, which is based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, allowing the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), as well as their uncertainties. We present an improved hierarchical pedotransfer functions (Rosetta3) that unify the VG water retention and Ks submodels into one, thus allowing the estimation of uni-variate and bi-variate probability distributions of estimated parameters. Results show that the estimation bias of moisture content was reduced significantly. Rosetta1 and Posetta3 were implemented in the python programming language, and the source code are available online. Based on different soil water retention equations, there are diverse PTFs used in different disciplines of earth system modelings. PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, while van Genuchten [1980] based PTFs are more widely used in hydrology and soil sciences. We use an independent global scale soil database to evaluate the performance of diverse PTFs used in different disciplines of earth system modelings. PTFs are evaluated based on different soil characteristics and environmental characteristics, such as soil textural data, soil organic carbon, soil pH, as well as precipitation and soil temperature. This analysis provides more quantitative estimation error information for PTF predictions in different disciplines of earth system modelings.
Urban search mobile platform modeling in hindered access conditions
NASA Astrophysics Data System (ADS)
Barankova, I. I.; Mikhailova, U. V.; Kalugina, O. B.; Barankov, V. V.
2018-05-01
The article explores the control system simulation and the design of the experimental model of the rescue robot mobile platform. The functional interface, a structural functional diagram of the mobile platform control unit, and a functional control scheme for the mobile platform of secure robot were modeled. The task of design a mobile platform for urban searching in hindered access conditions is realized through the use of a mechanical basis with a chassis and crawler drive, a warning device, human heat sensors and a microcontroller based on Arduino platforms.
Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments
NASA Astrophysics Data System (ADS)
Pozniak, Krzysztof T.
2007-08-01
Integrated triggering and data acquisition systems for high energy physics experiments may be considered as fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of functional, technological and monitoring demands, which has recently been imposed on them, forced a common usage of large field-programmable gate array (FPGA), digital signal processing-enhanced matrices and fast optical transmission for their realization. This paper discusses modelling, design, realization and testing of pipeline measurement systems. A distribution of synchronous data stream flows is considered in the network. A general functional structure of a single network node is presented. A suggested, novel block structure of the node model facilitates full implementation in the FPGA chip, circuit standardization and parametrization, as well as integration of functional and diagnostic layers. A general method for pipeline system design was derived. This method is based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based, pipeline measurement systems were presented. The described systems were applied in ZEUS and CMS.
A modeling framework for life history-based conservation planning
Eileen S. Burns; Sandor F. Toth; Robert G. Haight
2013-01-01
Reserve site selection models can be enhanced by including habitat conditions that populations need for food, shelter, and reproduction. We present a new population protection function that determines whether minimum areas of land with desired habitat features are present within the desired spatial conditions in the protected sites. Embedding the protection function as...
ERIC Educational Resources Information Center
Vaughn, Kelley; Hales, Cindy; Bush, Marta; Fox, James
1998-01-01
Describes implementation of functional behavioral assessment (FBA) through collaboration between a university (East Tennessee State University) and the local school system. Discusses related issues such as factors in team training, team size, FBA adaptations, and replicability of the FBA team model. (Author/DB)
Semiparametric Item Response Functions in the Context of Guessing
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2016-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on maximizing a function that majorizes WLS loss function. (Author/SLD)
Quality assessment of protein model-structures based on structural and functional similarities
2012-01-01
Background Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. A gap between number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model it from a sequence or partial structural information, e.g. contact maps. Consequently, biologists have the ability to generate automatically a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. Results GOBA - Gene Ontology-Based Assessment is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using standardized and hierarchical description of proteins provided by Gene Ontology combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method, the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. Conclusions The validation shows that the method is able to discriminate between good and bad model-structures. The best of tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets. GOBA also obtained the best result for two targets of CASP8, and one of CASP9, compared to the contest participants. Consequently, GOBA offers a novel single model quality assessment program that addresses the practical needs of biologists. In conjunction with other Model Quality Assessment Programs (MQAPs), it would prove useful for the evaluation of single protein models. PMID:22998498
A Model Based on Environmental Factors for Diameter Distribution in Black Wattle in Brazil
Sanquetta, Carlos Roberto; Behling, Alexandre; Dalla Corte, Ana Paula; Péllico Netto, Sylvio; Rodrigues, Aurelio Lourenço; Simon, Augusto Arlindo
2014-01-01
This article discusses the dynamics of a diameter distribution in stands of black wattle throughout its growth cycle using the Weibull probability density function. Moreover, the parameters of this distribution were related to environmental variables from meteorological data and surface soil horizon with the aim of finding a model for diameter distribution which their coefficients were related to the environmental variables. We found that the diameter distribution of the stand changes only slightly over time and that the estimators of the Weibull function are correlated with various environmental variables, with accumulated rainfall foremost among them. Thus, a model was obtained in which the estimators of the Weibull function are dependent on rainfall. Such a function can have important applications, such as in simulating growth potential in regions where historical growth data is lacking, as well as the behavior of the stand under different environmental conditions. The model can also be used to project growth in diameter, based on the rainfall affecting the forest over a certain time period. PMID:24932909
Functional Freedom: A Psychological Model of Freedom in Decision-Making.
Lau, Stephan; Hiemisch, Anette
2017-07-05
The freedom of a decision is not yet sufficiently described as a psychological variable. We present a model of functional decision freedom that aims to fill that role. The model conceptualizes functional freedom as a capacity of people that varies depending on certain conditions of a decision episode. It denotes an inner capability to consciously shape complex decisions according to one's own values and needs. Functional freedom depends on three compensatory dimensions: it is greatest when the decision-maker is highly rational, when the structure of the decision is highly underdetermined, and when the decision process is strongly based on conscious thought and reflection. We outline possible research questions, argue for psychological benefits of functional decision freedom, and explicate the model's implications on current knowledge and research. In conclusion, we show that functional freedom is a scientific variable, permitting an additional psychological foothold in research on freedom, and that is compatible with a deterministic worldview.
NASA Technical Reports Server (NTRS)
Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.
2014-01-01
Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.
Modelling the ecological niche from functional traits
Kearney, Michael; Simpson, Stephen J.; Raubenheimer, David; Helmuth, Brian
2010-01-01
The niche concept is central to ecology but is often depicted descriptively through observing associations between organisms and habitats. Here, we argue for the importance of mechanistically modelling niches based on functional traits of organisms and explore the possibilities for achieving this through the integration of three theoretical frameworks: biophysical ecology (BE), the geometric framework for nutrition (GF) and dynamic energy budget (DEB) models. These three frameworks are fundamentally based on the conservation laws of thermodynamics, describing energy and mass balance at the level of the individual and capturing the prodigious predictive power of the concepts of ‘homeostasis’ and ‘evolutionary fitness’. BE and the GF provide mechanistic multi-dimensional depictions of climatic and nutritional niches, respectively, providing a foundation for linking organismal traits (morphology, physiology, behaviour) with habitat characteristics. In turn, they provide driving inputs and cost functions for mass/energy allocation within the individual as determined by DEB models. We show how integration of the three frameworks permits calculation of activity constraints, vital rates (survival, development, growth, reproduction) and ultimately population growth rates and species distributions. When integrated with contemporary niche theory, functional trait niche models hold great promise for tackling major questions in ecology and evolutionary biology. PMID:20921046
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent, primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.
NASA Astrophysics Data System (ADS)
Ishii, Hiroyuki; Kobayashi, Nobuhiko; Hirose, Kenji
2017-01-01
We present a wave-packet dynamical approach to charge transport using maximally localized Wannier functions based on density functional theory including van der Waals interactions. We apply it to the transport properties of pentacene and rubrene single crystals and show the temperature-dependent natures from bandlike to thermally activated behaviors as a function of the magnitude of external static disorder. We compare the results with those obtained by the conventional band and hopping models and experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Minjing; Gao, Yuqian; Qian, Wei-Jun
Microbially mediated biogeochemical processes are catalyzed by enzymes that control the transformation of carbon, nitrogen, and other elements in environment. The dynamic linkage between enzymes and biogeochemical species transformation has, however, rarely been investigated because of the lack of analytical approaches to efficiently and reliably quantify enzymes and their dynamics in soils and sediments. Herein, we developed a signature peptide-based technique for sensitively quantifying dissimilatory and assimilatory enzymes using nitrate-reducing enzymes in a hyporheic zone sediment as an example. Moreover, the measured changes in enzyme concentration were found to correlate with the nitrate reduction rate in a way different frommore » that inferred from biogeochemical models based on biomass or functional genes as surrogates for functional enzymes. This phenomenon has important implications for understanding and modeling the dynamics of microbial community functions and biogeochemical processes in environments. Our results also demonstrate the importance of enzyme quantification for the identification and interrogation of those biogeochemical processes with low metabolite concentrations as a result of faster enzyme-catalyzed consumption of metabolites than their production. The dynamic enzyme behaviors provide a basis for the development of enzyme-based models to describe the relationship between the microbial community and biogeochemical processes.« less
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
Yamaura, Yuichi; Royle, J. Andrew; Kuboi, Kouji; Tada, Tsuneo; Ikeno, Susumu; Makino, Shun'ichi
2011-01-01
1. In large-scale field surveys, a binary recording of each species' detection or nondetection has been increasingly adopted for its simplicity and low cost. Because of the importance of abundance in many studies, it is desirable to obtain inferences about abundance at species-, functional group-, and community-levels from such binary data. 2. We developed a novel hierarchical multi-species abundance model based on species-level detection/nondetection data. The model accounts for the existence of undetected species, and variability in abundance and detectability among species. Species-level detection/nondetection is linked to species- level abundance via a detection model that accommodates the expectation that probability of detection (at least one individuals is detected) increases with local abundance of the species. We applied this model to a 9-year dataset composed of the detection/nondetection of forest birds, at a single post-fire site (from 7 to 15 years after fire) in a montane area of central Japan. The model allocated undetected species into one of the predefined functional groups by assuming a prior distribution on individual group membership. 3. The results suggest that 15–20 species were missed in each year, and that species richness of communities and functional groups did not change with post-fire forest succession. Overall abundance of birds and abundance of functional groups tended to increase over time, although only in the winter, while decreases in detectabilities were observed in several species. 4. Synthesis and applications. Understanding and prediction of large-scale biodiversity dynamics partly hinge on how we can use data effectively. Our hierarchical model for detection/nondetection data estimates abundance in space/time at species-, functional group-, and community-levels while accounting for undetected individuals and species. It also permits comparison of multiple communities by many types of abundance-based diversity and similarity measures under imperfect detection.
Power function decay of hydraulic conductivity for a TOPMODEL-based infiltration routine
NASA Astrophysics Data System (ADS)
Wang, Jun; Endreny, Theodore A.; Hassett, James M.
2006-11-01
TOPMODEL rainfall-runoff hydrologic concepts are based on soil saturation processes, where soil controls on hydrograph recession have been represented by linear, exponential, and power function decay with soil depth. Although these decay formulations have been incorporated into baseflow decay and topographic index computations, only the linear and exponential forms have been incorporated into infiltration subroutines. This study develops a power function formulation of the Green and Ampt infiltration equation for the case where the power n = 1 and 2. This new function was created to represent field measurements in the New York City, USA, Ward Pound Ridge drinking water supply area, and provide support for similar sites reported by other researchers. Derivation of the power-function-based Green and Ampt model begins with the Green and Ampt formulation used by Beven in deriving an exponential decay model. Differences between the linear, exponential, and power function infiltration scenarios are sensitive to the relative difference between rainfall rates and hydraulic conductivity. Using a low-frequency 30 min design storm with 4.8 cm h-1 rain, the n = 2 power function formulation allows for a faster decay of infiltration and more rapid generation of runoff. Infiltration excess runoff is rare in most forested watersheds, and advantages of the power function infiltration routine may primarily include replication of field-observed processes in urbanized areas and numerical consistency with power function decay of baseflow and topographic index distributions. Equation development is presented within a TOPMODEL-based Ward Pound Ridge rainfall-runoff simulation. Copyright
ERIC Educational Resources Information Center
Kastner, Theodore A.; Walsh, Kevin K.
2006-01-01
Lack of sufficient accessible community-based health care services for individuals with developmental disabilities has led to disparities in health outcomes and an overreliance on expensive models of care delivered in hospitals and other safety net or state-subsidized providers. A functioning community-based primary health care model, with an…
ERIC Educational Resources Information Center
Clinton, Elias
2016-01-01
Video modeling is a non-punitive, evidence-based intervention that has been proven effective for teaching functional life skills and social skills to individuals with autism and developmental disabilities. Compared to the literature base on using video modeling for students with autism and developmental disabilities, fewer studies have examined…
Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas
2015-12-01
The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-section (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates are different based on statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rates per year with age in adults. In conclusion, our findings indicated that one should be cautious in interpreting the renal function decline rate with aging information because its estimation was highly dependent on the statistical analyses. From our analyses, a population longitudinal analysis (e.g. random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
A method for diagnosing time dependent faults using model-based reasoning systems
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.
1995-01-01
This paper explores techniques to apply model-based reasoning to equipment and systems which exhibit dynamic behavior (that which changes as a function of time). The model-based system of interest is KATE-C (Knowledge based Autonomous Test Engineer) which is a C++ based system designed to perform monitoring and diagnosis of Space Shuttle electro-mechanical systems. Methods of model-based monitoring and diagnosis are well known and have been thoroughly explored by others. A short example is given which illustrates the principle of model-based reasoning and reveals some limitations of static, non-time-dependent simulation. This example is then extended to demonstrate representation of time-dependent behavior and testing of fault hypotheses in that environment.
Research and exploration of product innovative design for function
NASA Astrophysics Data System (ADS)
Wang, Donglin; Wei, Zihui; Wang, Youjiang; Tan, Runhua
2009-07-01
Products innovation is under the prerequisite of realizing the new function, the realization of the new function must solve the contradiction. A new process model of new product innovative design was proposed based on Axiomatic Design (AD) Theory and Functional Structure Analysis (FSA), imbedded Principle of Solving Contradiction. In this model, employ AD Theory to guide FSA, determine the contradiction for the realization of the principle solution. To provide powerful support for innovative design tools in principle solution, Principle of Solving Contradiction in the model were imbedded, so as to boost up the innovation of principle solution. As a case study, an innovative design of button battery separator paper punching machine has been achieved with application of the proposed model.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Stock market index prediction using neural networks
NASA Astrophysics Data System (ADS)
Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok
1994-03-01
A neural network approach to stock market index prediction is presented. Actual data of the Wall Street Journal's Dow Jones Industrial Index has been used for a benchmark in our experiments where Radial Basis Function based neural networks have been designed to model these indices over the period from January 1988 to Dec 1992. A notable success has been achieved with the proposed model producing over 90% prediction accuracies observed based on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrated that the Radial Basis Function neural network represents an excellent candidate to predict stock market index.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Since recently, the spatial gradients caused by diffusion can be assessed in-vitro and in-vivo using microscopy based imaging techniques. The resulting time-series of two dimensional, high-resolutions images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example for haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data as well as the proposed identifiability analysis approach is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds in contrast to local approximation methods.
Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko
2014-12-01
To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed that can create molecular animations based on the findings of structural biology. This study proposes our method developed based on our prototypes to detect collisions and examine the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system included the basic molecular modeling environment, collision detection in the molecular models, and physical simulations of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and interpreted scripting language, the functions required for accurate and meaningful molecular animation were implemented efficiently.
Andersson, Therese M L; Dickman, Paul W; Eloranta, Sandra; Lambert, Paul C
2011-06-22
When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and better fit when the standard cure model gives biased estimates. Cure models within the framework of flexible parametric models enables cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible from parametric cure models. © 2011 Andersson et al; licensee BioMed Central Ltd.
2011-01-01
Background When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Methods Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. Results We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and better fit when the standard cure model gives biased estimates. Conclusions Cure models within the framework of flexible parametric models enables cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible from parametric cure models. PMID:21696598
Stochastic optimal operation of reservoirs based on copula functions
NASA Astrophysics Data System (ADS)
Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen
2018-02-01
Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using the three members of the Archimedean copulas, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study in Ertan reservoir, China. The results show that the transition probability matrix can be more easily and accurately obtained by the proposed copula function based method than conventional methods based on the observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
Wang, Yong; Tang, Chun; Wang, Erkang; Wang, Jin
2012-01-01
An increasing number of biological machines have been revealed to have more than two macroscopic states. Quantifying the underlying multiple-basin functional landscape is essential for understanding their functions. However, the present models seem to be insufficient to describe such multiple-state systems. To meet this challenge, we have developed a coarse grained triple-basin structure-based model with implicit ligand. Based on our model, the constructed functional landscape is sufficiently sampled by the brute-force molecular dynamics simulation. We explored maltose-binding protein (MBP) which undergoes large-scale domain motion between open, apo-closed (partially closed) and holo-closed (fully closed) states responding to ligand binding. We revealed an underlying mechanism whereby major induced fit and minor population shift pathways co-exist by quantitative flux analysis. We found that the hinge regions play an important role in the functional dynamics as well as that increases in its flexibility promote population shifts. This finding provides a theoretical explanation of the mechanistic discrepancies in PBP protein family. We also found a functional “backtracking” behavior that favors conformational change. We further explored the underlying folding landscape in response to ligand binding. Consistent with earlier experimental findings, the presence of ligand increases the cooperativity and stability of MBP. This work provides the first study to explore the folding dynamics and functional dynamics under the same theoretical framework using our triple-basin functional model. PMID:22532792
Designing Illustrations for CBVE Technical Procedures.
ERIC Educational Resources Information Center
Laugen, Ronald C.
A model was formulated for developing functional illustrations for text-based competency-based vocational education (CBVE) instructional materials. The proposed model contained four prescriptive steps that address the events of instruction to be provided or supported and the locations, content, and learning cues for each illustration. Usefulness…
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal area based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting data by logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
NASA Astrophysics Data System (ADS)
Huang, Honglan; Mao, Hanying; Mao, Hanling; Zheng, Weixue; Huang, Zhenfeng; Li, Xinxin; Wang, Xianghong
2017-12-01
Cumulative fatigue damage detection for used parts plays a key role in the process of remanufacturing engineering and is related to the service safety of the remanufactured parts. In light of the nonlinear properties of used parts caused by cumulative fatigue damage, the based nonlinear output frequency response functions detection approach offers a breakthrough to solve this key problem. First, a modified PSO-adaptive lasso algorithm is introduced to improve the accuracy of the NARMAX model under impulse hammer excitation, and then, an effective new algorithm is derived to estimate the nonlinear output frequency response functions under rectangular pulse excitation, and a based nonlinear output frequency response functions index is introduced to detect the cumulative fatigue damage in used parts. Then, a novel damage detection approach that integrates the NARMAX model and the rectangular pulse is proposed for nonlinear output frequency response functions identification and cumulative fatigue damage detection of used parts. Finally, experimental studies of fatigued plate specimens and used connecting rod parts are conducted to verify the validity of the novel approach. The obtained results reveal that the new approach can detect cumulative fatigue damages of used parts effectively and efficiently and that the various values of the based nonlinear output frequency response functions index can be used to detect the different fatigue damages or working time. Since the proposed new approach can extract nonlinear properties of systems by only a single excitation of the inspected system, it shows great promise for use in remanufacturing engineering applications.
NASA Astrophysics Data System (ADS)
Hack-ten Broeke, Mirjam J. D.; Kroes, Joop G.; Bartholomeus, Ruud P.; van Dam, Jos C.; de Wit, Allard J. W.; Supit, Iwan; Walvoort, Dennis J. J.; van Bakel, P. Jan T.; Ruijtenberg, Rob
2016-08-01
For calculating the effects of hydrological measures on agricultural production in the Netherlands a new comprehensive and climate proof method is being developed: WaterVision Agriculture (in Dutch: Waterwijzer Landbouw). End users have asked for a method that considers current and future climate, that can quantify the differences between years and also the effects of extreme weather events. Furthermore they would like a method that considers current farm management and that can distinguish three different causes of crop yield reduction: drought, saline conditions or too wet conditions causing oxygen shortage in the root zone. WaterVision Agriculture is based on the hydrological simulation model SWAP and the crop growth model WOFOST. SWAP simulates water transport in the unsaturated zone using meteorological data, boundary conditions (like groundwater level or drainage) and soil parameters. WOFOST simulates crop growth as a function of meteorological conditions and crop parameters. Using the combination of these process-based models we have derived a meta-model, i.e. a set of easily applicable simplified relations for assessing crop growth as a function of soil type and groundwater level. These relations are based on multiple model runs for at least 72 soil units and the possible groundwater regimes in the Netherlands. So far, we parameterized the model for the crops silage maize and grassland. For the assessment, the soil characteristics (soil water retention and hydraulic conductivity) are very important input parameters for all soil layers of these 72 soil units. These 72 soil units cover all soils in the Netherlands. This paper describes (i) the setup and examples of application of the process-based model SWAP-WOFOST, (ii) the development of the simplified relations based on this model and (iii) how WaterVision Agriculture can be used by farmers, regional government, water boards and others to assess crop yield reduction as a function of groundwater characteristics or as a function of the salt concentration in the root zone for the various soil types.
Interactive Acoustic Simulation in Urban and Complex Environments
2015-03-21
and validity of the solution given by the two methods. Transfer functions are used to model two-way couplings to allow multiple orders of acoustic...Function ( BRDF )[79, 137]. The ray models have also been applied to inhomogeneous outdoor media by numerical integration of the differential ray...surface, the interaction can be modeled by specular reflection, Snell’s law refraction, or BRDF -based reflection, depending on the surface properties
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic, and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
Model-based metrics of human-automation function allocation in complex work environments
NASA Astrophysics Data System (ADS)
Kim, So Young
Function allocation is the design decision which assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically has been studied in many fields such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) Workload, 2) Incoherency in function allocations, 3) Mismatches between responsibility and authority, 4) Interruptive automation, 5) Automation boundary conditions, 6) Function allocation preventing human adaptation to context, 7) Function allocation destabilizing the humans' work environment, and 8) Mission Performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, agents, and relationships among them. Also, addressing these issues requires not only a (static) model, but also a (dynamic) simulation that captures temporal aspects of work such as the timing of actions and their impact on the agent's work. Therefore, with properly modeled work as described by the work environment, the dynamics inherent to the work, agents, and relationships among them, a modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the issues with function allocation. Then, based on the eight issues, eight types of metrics are established. The purpose of these metrics is to assess the extent to which each issue exists with a given function allocation. Specifically, the eight types of metrics assess workload, coherency of a function allocation, mismatches between responsibility and authority, interruptive automation, automation boundary conditions, human adaptation to context, stability of the human's work environment, and mission performance. Finally, to validate the modeling framework and the metrics, a case study was conducted modeling four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight. A range of pilot cognitive control modes and maximum human taskload limits were also included in the model. The metrics were assessed for these four function allocations and analyzed to validate capability of the metrics to identify important issues in given function allocations. In addition, the design insights provided by the metrics are highlighted. This thesis concludes with a discussion of mechanisms for further validating the modeling framework and function allocation metrics developed here, and highlights where these developments can be applied in research and in the design of function allocations in complex work environments such as aviation operations.
Chen, Guangchao; Li, Xuehua; Chen, Jingwen; Zhang, Ya-Nan; Peijnenburg, Willie J G M
2014-12-01
Biodegradation is the principal environmental dissipation process of chemicals. As such, it is a dominant factor determining the persistence and fate of organic chemicals in the environment, and is therefore of critical importance to chemical management and regulation. In the present study, the authors developed in silico methods assessing biodegradability based on a large heterogeneous set of 825 organic compounds, using the techniques of the C4.5 decision tree, the functional inner regression tree, and logistic regression. External validation was subsequently carried out by 2 independent test sets of 777 and 27 chemicals. As a result, the functional inner regression tree exhibited the best predictability with predictive accuracies of 81.5% and 81.0%, respectively, on the training set (825 chemicals) and test set I (777 chemicals). Performance of the developed models on the 2 test sets was subsequently compared with that of the Estimation Program Interface (EPI) Suite Biowin 5 and Biowin 6 models, which also showed a better predictability of the functional inner regression tree model. The model built in the present study exhibits a reasonable predictability compared with existing models while possessing a transparent algorithm. Interpretation of the mechanisms of biodegradation was also carried out based on the models developed. © 2014 SETAC.
Sparse network-based models for patient classification using fMRI
Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina
2015-01-01
Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination and emotional task, respectively, during the visualization of emotional valent faces. PMID:25463459
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
Miller, Anton; Shen, Jane; Mâsse, Louise C
2016-06-15
Allocation of resources for services and supports for children with neurodevelopmental disorders/disabilities (NDD/D) is often based on the presence of specific health conditions. This study investigated the relative roles of a child's diagnosed health condition and neurodevelopmental and related functional characteristics in explaining child and family health and well-being. The data on children with NDD/D (ages 5 to 14; weighted n = 120,700) are from the 2006 Participation and Activity Limitation Survey (PALS), a population-based Canadian survey of parents of children with functional limitations/disabilities. Direct and indirect effects of child diagnosis status-autism spectrum disorder (ASD)/not ASD-and functional characteristics (particularly, ASD-related impairments in speech, cognition, and emotion and behaviour) on child participation and family health and well-being were investigated in a series of structural equation models, while controlling for covariates. All models adequately fitted the data. Child ASD diagnosis was significantly associated with child participation and family health and well-being. When ASD-related child functional characteristics were added to the model, all direct effects from child diagnosis on child and family outcomes disappeared; the effect of child diagnosis on child and family outcomes was fully mediated via ASD-related child functional characteristics. Children's neurodevelopmental functional characteristics are integral to understanding the child and family health-related impact of neurodevelopmental disorders such as ASD. These findings have implications for the relative weighting given to functional versus diagnosis-specific factors in considering needs for services and supports.
Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam
NASA Astrophysics Data System (ADS)
Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa
2017-08-01
In this article, wavelet-based spectral finite element (WSFE) model is formulated for time domain and wave domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to conventional FFT-based spectral finite element (SFE) model except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of Daubechies wavelet basis functions helps to rule out problems of SFE model due to periodicity assumption, especially during inverse Fourier transformation and back to time domain. The high accuracy of WSFE model is then evaluated by comparing its results with those of conventional finite element and SFE results. The effects of moving beam speed and axial tensile force on vibration and wave characteristics, and static and dynamic stabilities of moving beam are investigated.
Nonparametric Hierarchical Bayesian Model for Functional Brain Parcellation
Lashkari, Danial; Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina
2011-01-01
We develop a method for unsupervised analysis of functional brain images that learns group-level patterns of functional response. Our algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over the sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to simultaneously learn the patterns of response that are shared across the group, and to estimate the number of these patterns supported by data. Inference based on this model enables automatic discovery and characterization of salient and consistent patterns in functional signals. We apply our method to data from a study that explores the response of the visual cortex to a collection of images. The discovered profiles of activation correspond to selectivity to a number of image categories such as faces, bodies, and scenes. More generally, our results appear superior to the results of alternative data-driven methods in capturing the category structure in the space of stimuli. PMID:21841977
An uncertainty model of acoustic metamaterials with random parameters
NASA Astrophysics Data System (ADS)
He, Z. C.; Hu, J. Y.; Li, Eric
2018-01-01
Acoustic metamaterials (AMs) are man-made composite materials. However, the random uncertainties are unavoidable in the application of AMs due to manufacturing and material errors which lead to the variance of the physical responses of AMs. In this paper, an uncertainty model based on the change of variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of physical responses of AMs with random parameters. Three types of physical responses including the band structure, mode shapes and frequency response function of AMs are studied in the uncertainty model, which is of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using the first-order Taylor series expansion and perturbation technique. Then, based on the linear function relationships of parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are validated by Monte Carlo method successfully.
Development of surrogate models for the prediction of the flow around an aircraft propeller
NASA Astrophysics Data System (ADS)
Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros
2018-05-01
In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircrafts is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using k-ω shear stress transport for modelling turbulence. In the SMs, the modelling of the propeller was performed in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence, in relation to the detail model, by providing also results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.
Towards an Analytical Age-Dependent Model of Contrast Sensitivity Functions for an Ageing Society
Joulan, Karine; Brémond, Roland
2015-01-01
The Contrast Sensitivity Function (CSF) describes how the visibility of a grating depends on the stimulus spatial frequency. Many published CSF data have demonstrated that contrast sensitivity declines with age. However, an age-dependent analytical model of the CSF is not available to date. In this paper, we propose such an analytical CSF model based on visual mechanisms, taking into account the age factor. To this end, we have extended an existing model from Barten (1999), taking into account the dependencies of this model's optical and physiological parameters on age. Age-dependent models of the cones and ganglion cells densities, the optical and neural MTF, and optical and neural noise are proposed, based on published data. The proposed age-dependent CSF is finally tested against available experimental data, with fair results. Such an age-dependent model may be beneficial when designing real-time age-dependent image coding and display applications. PMID:26078994
A Physiologically Based, Multi-Scale Model of Skeletal Muscle Structure and Function
Röhrle, O.; Davidson, J. B.; Pullan, A. J.
2012-01-01
Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle’s response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modeling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle’s response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modeling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibers and their grouping. Together with a well-established model of motor-unit recruitment, the electro-physiological behavior of single muscle fibers within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenization. The effect of homogenization has been investigated by varying the number of embedded skeletal muscle fibers and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the tibialis anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modeling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behavior ranging from motor-unit recruitment to force generation and fatigue. PMID:22993509
Characterization and Prediction of Chemical Functions and ...
Assessing exposures from the thousands of chemicals in commerce requires quantitative information on the chemical constituents of consumer products. Unfortunately, gaps in available composition data prevent assessment of exposure to chemicals in many products. Here we propose filling these gaps via consideration of chemical functional role. We obtained function information for thousands of chemicals from public sources and used a clustering algorithm to assign chemicals into 35 harmonized function categories (e.g., plasticizers, antimicrobials, solvents). We combined these functions with weight fraction data for 4115 personal care products (PCPs) to characterize the composition of 66 different product categories (e.g., shampoos). We analyzed the combined weight fraction/function dataset using machine learning techniques to develop quantitative structure property relationship (QSPR) classifier models for 22 functions and for weight fraction, based on chemical-specific descriptors (including chemical properties). We applied these classifier models to a library of 10196 data-poor chemicals. Our predictions of chemical function and composition will inform exposure-based screening of chemicals in PCPs for combination with hazard data in risk-based evaluation frameworks. As new information becomes available, this approach can be applied to other classes of products and the chemicals they contain in order to provide essential consumer product data for use in exposure-b
Huang, Wenwen; Ebrahimi, Davoud; Dinjaski, Nina; Tarakanova, Anna; Buehler, Markus J; Wong, Joyce Y; Kaplan, David L
2017-04-18
Tailored biomaterials with tunable functional properties are crucial for a variety of task-specific applications ranging from healthcare to sustainable, novel bio-nanodevices. To generate polymeric materials with predictive functional outcomes, exploiting designs from nature while morphing them toward non-natural systems offers an important strategy. Silks are Nature's building blocks and are produced by arthropods for a variety of uses that are essential for their survival. Due to the genetic control of encoded protein sequence, mechanical properties, biocompatibility, and biodegradability, silk proteins have been selected as prototype models to emulate for the tunable designs of biomaterial systems. The bottom up strategy of material design opens important opportunities to create predictive functional outcomes, following the exquisite polymeric templates inspired by silks. Recombinant DNA technology provides a systematic approach to recapitulate, vary, and evaluate the core structure peptide motifs in silks and then biosynthesize silk-based polymers by design. Post-biosynthesis processing allows for another dimension of material design by controlled or assisted assembly. Multiscale modeling, from the theoretical prospective, provides strategies to explore interactions at different length scales, leading to selective material properties. Synergy among experimental and modeling approaches can provide new and more rapid insights into the most appropriate structure-function relationships to pursue while also furthering our understanding in terms of the range of silk-based systems that can be generated. This approach utilizes nature as a blueprint for initial polymer designs with useful functions (e.g., silk fibers) but also employs modeling-guided experiments to expand the initial polymer designs into new domains of functional materials that do not exist in nature. The overall path to these new functional outcomes is greatly accelerated via the integration of modeling with experiment. In this Account, we summarize recent advances in understanding and functionalization of silk-based protein systems, with a focus on the integration of simulation and experiment for biopolymer design. Spider silk was selected as an exemplary protein to address the fundamental challenges in polymer designs, including specific insights into the role of molecular weight, hydrophobic/hydrophilic partitioning, and shear stress for silk fiber formation. To expand current silk designs toward biointerfaces and stimuli responsive materials, peptide modules from other natural proteins were added to silk designs to introduce new functions, exploiting the modular nature of silk proteins and fibrous proteins in general. The integrated approaches explored suggest that protein folding, silk volume fraction, and protein amino acid sequence changes (e.g., mutations) are critical factors for functional biomaterial designs. In summary, the integrated modeling-experimental approach described in this Account suggests a more rationally directed and more rapid method for the design of polymeric materials. It is expected that this combined use of experimental and computational approaches has a broad applicability not only for silk-based systems, but also for other polymer and composite materials.
A quadrature based method of moments for nonlinear Fokker-Planck equations
NASA Astrophysics Data System (ADS)
Otten, Dustin L.; Vedula, Prakash
2011-09-01
Fokker-Planck equations which are nonlinear with respect to their probability densities and occur in many nonequilibrium systems relevant to mean field interaction models, plasmas, fermions and bosons can be challenging to solve numerically. To address some underlying challenges, we propose the application of the direct quadrature based method of moments (DQMOM) for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations (NLFPEs). In DQMOM, probability density (or other distribution) functions are represented using a finite collection of Dirac delta functions, characterized by quadrature weights and locations (or abscissas) that are determined based on constraints due to evolution of generalized moments. Three particular examples of nonlinear Fokker-Planck equations considered in this paper include descriptions of: (i) the Shimizu-Yamada model, (ii) the Desai-Zwanzig model (both of which have been developed as models of muscular contraction) and (iii) fermions and bosons. Results based on DQMOM, for the transient and stationary solutions of the nonlinear Fokker-Planck equations, have been found to be in good agreement with other available analytical and numerical approaches. It is also shown that approximate reconstruction of the underlying probability density function from moments obtained from DQMOM can be satisfactorily achieved using a maximum entropy method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert
Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computational inexpensive to evaluate when compared to running a high-fidelity numerical simulation. A reduced-order model takes couple of seconds to run on a laptop while a high-fidelity simulation may take couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoirmore » while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations, which are fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability along with time, are used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are then compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg–Marquardt algorithm. The misfit function is based on the difference between numerical simulation data and reduced-order model. ROM-1 is constructed based on polynomials up to fourth order. ROM-1 is able to accurately reproduce the power output of numerical simulations for low values of permeabilities and certain features of the field-scale data. ROM-2 is a model with more analytical functions consisting of polynomials up to order eight, exponential functions and smooth approximations of Heaviside functions, and accurately describes the field-data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1, however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten, and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is relatively parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data. ROM-3 on the other hand, provides a middle ground for model parsimony. Based on R 2-values for training, validation, and prediction data sets we found that ROM-3 is better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10 –15 m 2) are desired, ROM-2 and ROM-3 outperform ROM-1. As per computational time, all the ROMs are 10 4 times faster when compared to running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.« less
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert; ...
2017-07-10
Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computational inexpensive to evaluate when compared to running a high-fidelity numerical simulation. A reduced-order model takes couple of seconds to run on a laptop while a high-fidelity simulation may take couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoirmore » while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations, which are fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability along with time, are used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are then compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg–Marquardt algorithm. The misfit function is based on the difference between numerical simulation data and reduced-order model. ROM-1 is constructed based on polynomials up to fourth order. ROM-1 is able to accurately reproduce the power output of numerical simulations for low values of permeabilities and certain features of the field-scale data. ROM-2 is a model with more analytical functions consisting of polynomials up to order eight, exponential functions and smooth approximations of Heaviside functions, and accurately describes the field-data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1, however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten, and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is relatively parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data. ROM-3 on the other hand, provides a middle ground for model parsimony. Based on R 2-values for training, validation, and prediction data sets we found that ROM-3 is better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10 –15 m 2) are desired, ROM-2 and ROM-3 outperform ROM-1. As per computational time, all the ROMs are 10 4 times faster when compared to running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.« less
Nonrelativistic approaches derived from point-coupling relativistic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lourenco, O.; Dutra, M.; Delfino, A.
2010-03-15
We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A=B=0 (Walecka model) leads to the same energy density functional of the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order {rho}{sup 3}, leads to parametrizations for which {sigma}=1.
NASA Astrophysics Data System (ADS)
Dunn, S. M.; Colohan, R. J. E.
1999-09-01
A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.
Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg
2017-11-01
In the past years, several repositories for anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While providing a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters or were lacking key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best performing functions were selected based on numerical and visual diagnostics as well as based on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and based on physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models. Such models can ultimately provide a valuable tool to investigate the pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.
Tian, Zhongyuan; Fauré, Adrien; Mori, Hirotada; Matsuno, Hiroshi
2013-01-01
Glycogen and glucose are two sugar sources available during the lag phase of E. coli, but the mechanism that regulates their utilization is still unclear. Attempting to unveil the relationship between glucose and glycogen, we propose an integrated hybrid functional Petri net (HFPN) model including glycolysis, PTS, glycogen metabolic pathway, and their internal regulatory systems. By comparing known biological results to this model, basic necessary regulatory mechanism for utilizing glucose and glycogen were identified as a feedback circuit in which HPr and EIIAGlc play key roles. Based on this regulatory HFPN model, we discuss the process of glycogen utilization in E. coli in the context of a systematic understanding of carbohydrate metabolism.
Activity Diagrams for DEVS Models: A Case Study Modeling Health Care Behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J
Discrete Event Systems Specification (DEVS) is a widely used formalism for modeling and simulation of discrete and continuous systems. While DEVS provides a sound mathematical representation of discrete systems, its practical use can suffer when models become complex. Five main functions, which construct the core of atomic modules in DEVS, can realize the behaviors that modelers want to represent. The integration of these functions is handled by the simulation routine, however modelers can implement each function in various ways. Therefore, there is a need for graphical representations of complex models to simplify their implementation and facilitate their reproduction. In thismore » work, we illustrate the use of activity diagrams for this purpose in the context of a health care behavior model, which is developed with an agent-based modeling paradigm.« less
NASA Technical Reports Server (NTRS)
Yin, Wan-Lee
1992-01-01
The stress-function-based variational method of Yin (1991) is extended and modified into a combined layer/sublaminate approach applicable to a laminated strip composed of a large number of differently orientated, anisotropic elastic plies. Lekhnitskii's (1963) stress functions are introduced into two interior layers adjacent to a particular interface. The remaining layers are grouped into an upper sublaminate and a lower sublaminate. The stress functions are expanded in truncated power series of the thickness coordinate, and the differential equations governing the coefficient functions are derived by using the complementary virtual work principle. The layer/sublaminate approach limits the dimension of the eigenvalue problem to a fixed number irrespective of the number of layers in the sublaminate, so that reasonably accurate solutions of the interlaminar stresses can be computed with extreme ease. For symmetric, four-layer, angle-ply and cross-ply laminates, a comparison of the previous analysis results based on the pure layer model and new results based on two different layer/sublaminate models indicates reasonable over-all agreement in the interlaminar stresses and superior agreement in the total peeling and shearing force.
Effective model hierarchies for dynamic and static classical density functional theories
NASA Astrophysics Data System (ADS)
Majaniemi, S.; Provatas, N.; Nonomura, M.
2010-09-01
The origin and methodology of deriving effective model hierarchies are presented with applications to solidification of crystalline solids. In particular, it is discussed how the form of the equations of motion and the effective parameters on larger scales can be obtained from the more microscopic models. It will be shown that tying together the dynamic structure of the projection operator formalism with static classical density functional theories can lead to incomplete (mass) transport properties even though the linearized hydrodynamics on large scales is correctly reproduced. To facilitate a more natural way of binding together the dynamics of the macrovariables and classical density functional theory, a dynamic generalization of density functional theory based on the nonequilibrium generating functional is suggested.
Adaptive model-based control systems and methods for controlling a gas turbine
NASA Technical Reports Server (NTRS)
Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)
2004-01-01
Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition, and state and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).
Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.
Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing
2012-11-01
In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally interfered by atmospheric turbulence, and hence images so acquired are blurred with unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with the Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l(1)-l(p) (p=1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of a multiplier, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.
Satellite services system analysis study. Volume 1, part 2: Executive summary
NASA Technical Reports Server (NTRS)
1981-01-01
The early mission model was developed through a survey of the potential user market. Service functions were defined and a group of design reference missions were selected which represented needs for each of the service functions. Servicing concepts were developed through mission analysis and STS timeline constraint analysis. The hardware needs for accomplishing the service functions were identified with emphasis being placed on applying equipment in the current NASA inventory and that in advanced stages of planning. A more comprehensive service model was developed based on the NASA and DoD mission models segregated by mission class. The number of service events of each class were estimated based on average revisit and service assumptions. Service Kits were defined as collections of equipment applicable to performing one or more service functions. Preliminary design was carrid out on a selected set of hardware needed for early service missions. The organization and costing of the satellie service systems were addressed.
ERIC Educational Resources Information Center
Burton, Cami E.; Anderson, Darlene H.; Prater, Mary Anne; Dyches, Tina T.
2013-01-01
Researchers suggest that video-based interventions can provide increased opportunity for students with disabilities to acquire important academic and functional skills; however, little research exists regarding video-based interventions on the academic skills of students with autism and intellectual disability. We used a…
Tensor Based Representation and Analysis of Diffusion-Weighted Magnetic Resonance Images
ERIC Educational Resources Information Center
Barmpoutis, Angelos
2009-01-01
Cartesian tensor bases have been widely used to model spherical functions. In medical imaging, tensors of various orders can approximate the diffusivity function at each voxel of a diffusion-weighted MRI data set. This approximation produces tensor-valued datasets that contain information about the underlying local structure of the scanned tissue.…
Model-Based Reinforcement Learning under Concurrent Schedules of Reinforcement in Rodents
ERIC Educational Resources Information Center
Huh, Namjung; Jo, Suhyun; Kim, Hoseok; Sul, Jung Hoon; Jung, Min Whan
2009-01-01
Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's…
ERIC Educational Resources Information Center
Frain, Michael P.; Bishop, Malachy; Rumrill, Phillip D., Jr.; Chan, Fong; Tansey, Timothy N.; Strauser, David; Chiu, Chung-Yi
2015-01-01
Multiple sclerosis (MS) is an unpredictable, sometimes progressive chronic illness affecting people in the prime of their working lives. This article reviews the effects of MS on employment based on the World Health Organization's International Classification of Functioning, Disability and Health model. Correlations between employment and…
Computational neuroanatomy: ontology-based representation of neural components and connectivity
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-01-01
Background A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191
Insertional engineering of chromosomes with Sleeping Beauty transposition: an overview.
Grabundzija, Ivana; Izsvák, Zsuzsanna; Ivics, Zoltán
2011-01-01
Novel genetic tools and mutagenesis strategies based on the Sleeping Beauty (SB) transposable element are currently under development with a vision to link primary DNA sequence information to gene functions in vertebrate models. By virtue of its inherent capacity to insert into DNA, the SB transposon can be developed into powerful tools for chromosomal manipulations. Mutagenesis screens based on SB have numerous advantages including high throughput and easy identification of mutated alleles. Forward genetic approaches based on insertional mutagenesis by engineered SB transposons have the advantage of providing insight into genetic networks and pathways based on phenotype. Indeed, the SB transposon has become a highly instrumental tool to induce tumors in experimental animals in a tissue-specific -manner with the aim of uncovering the genetic basis of diverse cancers. Here, we describe a battery of mutagenic cassettes that can be applied in conjunction with SB transposon vectors to mutagenize genes, and highlight versatile experimental strategies for the generation of engineered chromosomes for loss-of-function as well as gain-of-function mutagenesis for functional gene annotation in vertebrate models.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
On the role of general system theory for functional neuroimaging.
Stephan, Klaas Enno
2004-12-01
One of the most important goals of neuroscience is to establish precise structure-function relationships in the brain. Since the 19th century, a major scientific endeavour has been to associate structurally distinct cortical regions with specific cognitive functions. This was traditionally accomplished by correlating microstructurally defined areas with lesion sites found in patients with specific neuropsychological symptoms. Modern neuroimaging techniques with high spatial resolution have promised an alternative approach, enabling non-invasive measurements of regionally specific changes of brain activity that are correlated with certain components of a cognitive process. Reviewing classic approaches towards brain structure-function relationships that are based on correlational approaches, this article argues that these approaches are not sufficient to provide an understanding of the operational principles of a dynamic system such as the brain but must be complemented by models based on general system theory. These models reflect the connectional structure of the system under investigation and emphasize context-dependent couplings between the system elements in terms of effective connectivity. The usefulness of system models whose parameters are fitted to measured functional imaging data for testing hypotheses about structure-function relationships in the brain and their potential for clinical applications is demonstrated by several empirical examples.
On the role of general system theory for functional neuroimaging
Stephan, Klaas Enno
2004-01-01
One of the most important goals of neuroscience is to establish precise structure–function relationships in the brain. Since the 19th century, a major scientific endeavour has been to associate structurally distinct cortical regions with specific cognitive functions. This was traditionally accomplished by correlating microstructurally defined areas with lesion sites found in patients with specific neuropsychological symptoms. Modern neuroimaging techniques with high spatial resolution have promised an alternative approach, enabling non-invasive measurements of regionally specific changes of brain activity that are correlated with certain components of a cognitive process. Reviewing classic approaches towards brain structure–function relationships that are based on correlational approaches, this article argues that these approaches are not sufficient to provide an understanding of the operational principles of a dynamic system such as the brain but must be complemented by models based on general system theory. These models reflect the connectional structure of the system under investigation and emphasize context-dependent couplings between the system elements in terms of effective connectivity. The usefulness of system models whose parameters are fitted to measured functional imaging data for testing hypotheses about structure–function relationships in the brain and their potential for clinical applications is demonstrated by several empirical examples. PMID:15610393
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Working-memory capacity protects model-based learning from stress
Otto, A. Ross; Raio, Candace M.; Chiang, Alice; Phelps, Elizabeth A.; Daw, Nathaniel D.
2013-01-01
Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive–dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response—believed to have detrimental effects on prefrontal cortex function—should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that stress response attenuates the contribution of model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress. PMID:24324166
Flow Channel Influence of a Collision-Based Piezoelectric Jetting Dispenser on Jet Performance
Deng, Guiling; Li, Junhui; Duan, Ji’an
2018-01-01
To improve the jet performance of a bi-piezoelectric jet dispenser, mathematical and simulation models were established according to the operating principle. In order to improve the accuracy and reliability of the simulation calculation, a viscosity model of the fluid was fitted to a fifth-order function with shear rate based on rheological test data, and the needle displacement model was fitted to a nine-order function with time based on real-time displacement test data. The results show that jet performance is related to the diameter of the nozzle outlet and the cone angle of the nozzle, and the impacts of the flow channel structure were confirmed. The approach of numerical simulation is confirmed by the testing results of droplet volume. It will provide a reliable simulation platform for mechanical collision-based jet dispensing and a theoretical basis for micro jet valve design and improvement. PMID:29677140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindskog, M., E-mail: martin.lindskog@teorfys.lu.se; Wacker, A.; Wolf, J. M.
2014-09-08
We study the operation of an 8.5 μm quantum cascade laser based on GaInAs/AlInAs lattice matched to InP using three different simulation models based on density matrix (DM) and non-equilibrium Green's function (NEGF) formulations. The latter advanced scheme serves as a validation for the simpler DM schemes and, at the same time, provides additional insight, such as the temperatures of the sub-band carrier distributions. We find that for the particular quantum cascade laser studied here, the behavior is well described by simple quantum mechanical estimates based on Fermi's golden rule. As a consequence, the DM model, which includes second order currents,more » agrees well with the NEGF results. Both these simulations are in accordance with previously reported data and a second regrown device.« less
Projection pursuit water quality evaluation model based on chicken swam algorithm
NASA Astrophysics Data System (ADS)
Hu, Zhe
2018-03-01
In view of the uncertainty and ambiguity of each index in water quality evaluation, in order to solve the incompatibility of evaluation results of individual water quality indexes, a projection pursuit model based on chicken swam algorithm is proposed. The projection index function which can reflect the water quality condition is constructed, the chicken group algorithm (CSA) is introduced, the projection index function is optimized, the best projection direction of the projection index function is sought, and the best projection value is obtained to realize the water quality evaluation. The comparison between this method and other methods shows that it is reasonable and feasible to provide decision-making basis for water pollution control in the basin.
Links, Jonathan M; Schwartz, Brian S; Lin, Sen; Kanarek, Norma; Mitrani-Reiser, Judith; Sell, Tara Kirk; Watson, Crystal R; Ward, Doug; Slemp, Cathy; Burhans, Robert; Gill, Kimberly; Igusa, Tak; Zhao, Xilei; Aguirre, Benigno; Trainor, Joseph; Nigg, Joanne; Inglesby, Thomas; Carbone, Eric; Kendra, James M
2018-02-01
Policy-makers and practitioners have a need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster. We developed a system dynamics computational model that predicts community functioning after a disaster. The computational model outputted the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties. The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature. The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127-137).
NASA Astrophysics Data System (ADS)
Utama, D. N.; Ani, N.; Iqbal, M. M.
2018-03-01
Optimization is a process for finding parameter (parameters) that is (are) able to deliver an optimal value for an objective function. Seeking an optimal generic model for optimizing is a computer science study that has been being practically conducted by numerous researchers. Generic model is a model that can be technically operated to solve any varieties of optimization problem. By using an object-oriented method, the generic model for optimizing was constructed. Moreover, two types of optimization method, simulated-annealing and hill-climbing, were functioned in constructing the model and compared to find the most optimal one then. The result said that both methods gave the same result for a value of objective function and the hill-climbing based model consumed the shortest running time.
Ocean biogeochemistry modeled with emergent trait-based genomics
NASA Astrophysics Data System (ADS)
Coles, V. J.; Stukel, M. R.; Brooks, M. T.; Burd, A.; Crump, B. C.; Moran, M. A.; Paul, J. H.; Satinsky, B. M.; Yager, P. L.; Zielinski, B. L.; Hood, R. R.
2017-12-01
Marine ecosystem models have advanced to incorporate metabolic pathways discovered with genomic sequencing, but direct comparisons between models and “omics” data are lacking. We developed a model that directly simulates metagenomes and metatranscriptomes for comparison with observations. Model microbes were randomly assigned genes for specialized functions, and communities of 68 species were simulated in the Atlantic Ocean. Unfit organisms were replaced, and the model self-organized to develop community genomes and transcriptomes. Emergent communities from simulations that were initialized with different cohorts of randomly generated microbes all produced realistic vertical and horizontal ocean nutrient, genome, and transcriptome gradients. Thus, the library of gene functions available to the community, rather than the distribution of functions among specific organisms, drove community assembly and biogeochemical gradients in the model ocean.
Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong
2011-09-01
One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score are synthesized by a sampling method. At last, another integrated RBF model ranks the structural models according to the features of sampling distribution. We tested the proposed method by using two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test result shows that our method outperforms any individual scoring function on both best model selection, and overall correlation between the predicted ranking and the actual ranking of structural quality.
Subsystem functional and the missing ingredient of confinement physics in density functionals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armiento, Rickard Roberto; Mattsson, Ann Elisabet; Hao, Feng
2010-08-01
The subsystem functional scheme is a promising approach recently proposed for constructing exchange-correlation density functionals. In this scheme, the physics in each part of real materials is described by mapping to a characteristic model system. The 'confinement physics,' an essential physical ingredient that has been left out in present functionals, is studied by employing the harmonic-oscillator (HO) gas model. By performing the potential {yields} density and the density {yields} exchange energy per particle mappings based on two model systems characterizing the physics in the interior (uniform electron-gas model) and surface regions (Airy gas model) of materials for the HO gases,more » we show that the confinement physics emerges when only the lowest subband of the HO gas is occupied by electrons. We examine the approximations of the exchange energy by several state-of-the-art functionals for the HO gas, and none of them produces adequate accuracy in the confinement dominated cases. A generic functional that incorporates the description of the confinement physics is needed.« less
Automotive Maintenance Data Base for Model Years 1976-1979. Part I
DOT National Transportation Integrated Search
1980-12-01
An update of the existing data base was developed to include life cycle maintenance costs of representative vehicles for the model years 1976-1979. Repair costs as a function of time are also developed for a passenger car in each of the compact, subc...
Models for Delivering School-Based Dental Care.
ERIC Educational Resources Information Center
Albert, David A.; McManus, Joseph M.; Mitchell, Dennis A.
2005-01-01
School-based health centers (SBHCs) often are located in high-need schools and communities. Dental service is frequently an addition to existing comprehensive services, functioning in a variety of models, configurations, and locations. SBHCs are indicated when parents have limited financial resources or inadequate health insurance, limiting…
An Interdisciplinary Model for Teaching Evolutionary Ecology.
ERIC Educational Resources Information Center
Coletta, John
1992-01-01
Describes a general systems evolutionary model and demonstrates how a previously established ecological model is a function of its past development based on the evolution of the rock, nutrient, and water cycles. Discusses the applications of the model in environmental education. (MDH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Jackson, Andrew
Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, andmore » 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function in relating to MRI-derived portal venous perfusion values. The global function was fitted to a logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.« less
Stergiopoulos, Vicky; Schuler, Andrée; Nisenbaum, Rosane; deRuiter, Wayne; Guimond, Tim; Wasylenki, Donald; Hoch, Jeffrey S; Hwang, Stephen W; Rouleau, Katherine; Dewa, Carolyn
2015-08-28
Although a growing number of collaborative mental health care models have been developed, targeting specific populations, few studies have utilized such interventions among homeless populations. This quasi-experimental study compared the outcomes of two shelter-based collaborative mental health care models for men experiencing homelessness and mental illness: (1) an integrated multidisciplinary collaborative care (IMCC) model and (2) a less resource intensive shifted outpatient collaborative care (SOCC) model. In total 142 participants, 70 from IMCC and 72 from SOCC were enrolled and followed for 12 months. Outcome measures included community functioning, residential stability, and health service use. Multivariate regression models were used to compare study arms with respect to change in community functioning, residential stability, and health service use outcomes over time and to identify baseline demographic, clinical or homelessness variables associated with observed changes in these domains. We observed improvements in both programs over time on measures of community functioning, residential stability, hospitalizations, emergency department visits and community physician visits, with no significant differences between groups over time on these outcome measures. Our findings suggest that shelter-based collaborative mental health care models may be effective for individuals experiencing homelessness and mental illness. Future studies should seek to confirm these findings and examine the cost effectiveness of collaborative care models for this population.
Watershed modeling at the Savannah River Site.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vache, Kellie
2015-04-29
The overall goal of the work was the development of a watershed scale model of hydrological function for application to the US Department of Energy’s (DOE) Savannah River Site (SRS). The primary outcomes is a grid based hydrological modeling system that captures near surface runoff as well as groundwater recharge and contributions of groundwater to streams. The model includes a physically-based algorithm to capture both evaporation and transpiration from forestland.
Spectral filtering of gradient for l2-norm frequency-domain elastic waveform inversion
NASA Astrophysics Data System (ADS)
Oh, Ju-Won; Min, Dong-Joo
2013-05-01
To enhance the robustness of the l2-norm elastic full-waveform inversion (FWI), we propose a denoise function that is incorporated into single-frequency gradients. Because field data are noisy and modelled data are noise-free, the denoise function is designed based on the ratio of modelled data to field data summed over shots and receivers. We first take the sums of the modelled data and field data over shots, then take the sums of the absolute values of the resultant modelled data and field data over the receivers. Due to the monochromatic property of wavefields at each frequency, signals in both modelled and field data tend to be cancelled out or maintained, whereas certain types of noise, particularly random noise, can be amplified in field data. As a result, the spectral distribution of the denoise function is inversely proportional to the ratio of noise to signal at each frequency, which helps prevent the noise-dominant gradients from contributing to model parameter updates. Numerical examples show that the spectral distribution of the denoise function resembles a frequency filter that is determined by the spectrum of the signal-to-noise (S/N) ratio during the inversion process, with little human intervention. The denoise function is applied to the elastic FWI of synthetic data, with three types of random noise generated by the modified version of the Marmousi-2 model: white, low-frequency and high-frequency random noises. Based on the spectrum of S/N ratios at each frequency, the denoise function mainly suppresses noise-dominant single-frequency gradients, which improves the inversion results at the cost of spatial resolution.
Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures
NASA Astrophysics Data System (ADS)
Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav
2017-07-01
The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model with frequent usage in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.
Cook, J L; Rio, E; Purdam, C R; Docking, S I
2016-01-01
The pathogenesis of tendinopathy and the primary biological change in the tendon that precipitates pathology have generated several pathoaetiological models in the literature. The continuum model of tendon pathology, proposed in 2009, synthesised clinical and laboratory-based research to guide treatment choices for the clinical presentations of tendinopathy. While the continuum has been cited extensively in the literature, its clinical utility has yet to be fully elucidated. The continuum model proposed a model for staging tendinopathy based on the changes and distribution of disorganisation within the tendon. However, classifying tendinopathy based on structure in what is primarily a pain condition has been challenged. The interplay between structure, pain and function is not yet fully understood, which has partly contributed to the complex clinical picture of tendinopathy. Here we revisit and assess the merit of the continuum model in the context of new evidence. We (1) summarise new evidence in tendinopathy research in the context of the continuum, (2) discuss tendon pain and the relevance of a model based on structure and (3) describe relevant clinical elements (pain, function and structure) to begin to build a better understanding of the condition. Our goal is that the continuum model may help guide targeted treatments and improved patient outcomes. PMID:27127294
Zheng, Wenjun
2010-01-01
Abstract Protein conformational dynamics, despite its significant anharmonicity, has been widely explored by normal mode analysis (NMA) based on atomic or coarse-grained potential functions. To account for the anharmonic aspects of protein dynamics, this study proposes, and has performed, an anharmonic NMA (ANMA) based on the Cα-only elastic network models, which assume elastic interactions between pairs of residues whose Cα atoms or heavy atoms are within a cutoff distance. The key step of ANMA is to sample an anharmonic potential function along the directions of eigenvectors of the lowest normal modes to determine the mean-squared fluctuations along these directions. ANMA was evaluated based on the modeling of anisotropic displacement parameters (ADPs) from a list of 83 high-resolution protein crystal structures. Significant improvement was found in the modeling of ADPs by ANMA compared with standard NMA. Further improvement in the modeling of ADPs is attained if the interactions between a protein and its crystalline environment are taken into account. In addition, this study has determined the optimal cutoff distances for ADP modeling based on elastic network models, and these agree well with the peaks of the statistical distributions of distances between Cα atoms or heavy atoms derived from a large set of protein crystal structures. PMID:20550915
Biomimetic three-dimensional tissue models for advanced high-throughput drug screening
Nam, Ki-Hwan; Smith, Alec S.T.; Lone, Saifullah; Kwon, Sunghoon; Kim, Deok-Ho
2015-01-01
Most current drug screening assays used to identify new drug candidates are 2D cell-based systems, even though such in vitro assays do not adequately recreate the in vivo complexity of 3D tissues. Inadequate representation of the human tissue environment during a preclinical test can result in inaccurate predictions of compound effects on overall tissue functionality. Screening for compound efficacy by focusing on a single pathway or protein target, coupled with difficulties in maintaining long-term 2D monolayers, can serve to exacerbate these issues when utilizing such simplistic model systems for physiological drug screening applications. Numerous studies have shown that cell responses to drugs in 3D culture are improved from those in 2D, with respect to modeling in vivo tissue functionality, which highlights the advantages of using 3D-based models for preclinical drug screens. In this review, we discuss the development of microengineered 3D tissue models which accurately mimic the physiological properties of native tissue samples, and highlight the advantages of using such 3D micro-tissue models over conventional cell-based assays for future drug screening applications. We also discuss biomimetic 3D environments, based-on engineered tissues as potential preclinical models for the development of more predictive drug screening assays for specific disease models. PMID:25385716
Hamilton, Joshua J; Reed, Jennifer L
2012-01-01
Genome-scale network reconstructions are useful tools for understanding cellular metabolism, and comparisons of such reconstructions can provide insight into metabolic differences between organisms. Recent efforts toward comparing genome-scale models have focused primarily on aligning metabolic networks at the reaction level and then looking at differences and similarities in reaction and gene content. However, these reaction comparison approaches are time-consuming and do not identify the effect network differences have on the functional states of the network. We have developed a bilevel mixed-integer programming approach, CONGA, to identify functional differences between metabolic networks by comparing network reconstructions aligned at the gene level. We first identify orthologous genes across two reconstructions and then use CONGA to identify conditions under which differences in gene content give rise to differences in metabolic capabilities. By seeking genes whose deletion in one or both models disproportionately changes flux through a selected reaction (e.g., growth or by-product secretion) in one model over another, we are able to identify structural metabolic network differences enabling unique metabolic capabilities. Using CONGA, we explore functional differences between two metabolic reconstructions of Escherichia coli and identify a set of reactions responsible for chemical production differences between the two models. We also use this approach to aid in the development of a genome-scale model of Synechococcus sp. PCC 7002. Finally, we propose potential antimicrobial targets in Mycobacterium tuberculosis and Staphylococcus aureus based on differences in their metabolic capabilities. Through these examples, we demonstrate that a gene-centric approach to comparing metabolic networks allows for a rapid comparison of metabolic models at a functional level. Using CONGA, we can identify differences in reaction and gene content which give rise to different functional predictions. Because CONGA provides a general framework, it can be applied to find functional differences across models and biological systems beyond those presented here.
Hamilton, Joshua J.; Reed, Jennifer L.
2012-01-01
Genome-scale network reconstructions are useful tools for understanding cellular metabolism, and comparisons of such reconstructions can provide insight into metabolic differences between organisms. Recent efforts toward comparing genome-scale models have focused primarily on aligning metabolic networks at the reaction level and then looking at differences and similarities in reaction and gene content. However, these reaction comparison approaches are time-consuming and do not identify the effect network differences have on the functional states of the network. We have developed a bilevel mixed-integer programming approach, CONGA, to identify functional differences between metabolic networks by comparing network reconstructions aligned at the gene level. We first identify orthologous genes across two reconstructions and then use CONGA to identify conditions under which differences in gene content give rise to differences in metabolic capabilities. By seeking genes whose deletion in one or both models disproportionately changes flux through a selected reaction (e.g., growth or by-product secretion) in one model over another, we are able to identify structural metabolic network differences enabling unique metabolic capabilities. Using CONGA, we explore functional differences between two metabolic reconstructions of Escherichia coli and identify a set of reactions responsible for chemical production differences between the two models. We also use this approach to aid in the development of a genome-scale model of Synechococcus sp. PCC 7002. Finally, we propose potential antimicrobial targets in Mycobacterium tuberculosis and Staphylococcus aureus based on differences in their metabolic capabilities. Through these examples, we demonstrate that a gene-centric approach to comparing metabolic networks allows for a rapid comparison of metabolic models at a functional level. Using CONGA, we can identify differences in reaction and gene content which give rise to different functional predictions. Because CONGA provides a general framework, it can be applied to find functional differences across models and biological systems beyond those presented here. PMID:22666308
NASA Astrophysics Data System (ADS)
Butler, Samuel D.; Marciniak, Michael A.
2014-09-01
Since the development of the Torrance-Sparrow bidirectional re ectance distribution function (BRDF) model in 1967, several BRDF models have been created. Previous attempts to categorize BRDF models have relied upon somewhat vague descriptors, such as empirical, semi-empirical, and experimental. Our approach is to instead categorize BRDF models based on functional form: microfacet normal distribution, geometric attenua- tion, directional-volumetric and Fresnel terms, and cross section conversion factor. Several popular microfacet models are compared to a standardized notation for a microfacet BRDF model. A library of microfacet model components is developed, allowing for creation of unique microfacet models driven by experimentally measured BRDFs.
Functional helicoidal model of DNA molecule with elastic nonlinearity
NASA Astrophysics Data System (ADS)
Tseytlin, Y. M.
2013-06-01
We constructed a functional DNA molecule model on the basis of a flexible helicoidal sensor, specifically, a pretwisted hollow nano-strip. We study in this article the helicoidal nano- sensor model with a pretwisted strip axial extension corresponding to the overstretching transition of DNA from dsDNA to ssDNA. Our model and the DNA molecule have similar geometrical and nonlinear mechanical features unlike models based on an elastic rod, accordion bellows, or an imaginary combination of "multiple soft and hard linear springs", presented in some recent publications.
ERIC Educational Resources Information Center
Gomez, Rapson
2012-01-01
Objective: Generalized partial credit model, which is based on item response theory (IRT), was used to test differential item functioning (DIF) for the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.), inattention (IA), and hyperactivity/impulsivity (HI) symptoms across boys and girls. Method: To accomplish this, parents completed…
Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2015-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Universal Verification Methodology Based Register Test Automation Flow.
Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu
2016-05-01
In today's SoC design, the number of registers has been increased along with complexity of hardware blocks. Register validation is a time-consuming and error-pron task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we suggest register test automation flow based UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or integrate models in test-bench environment because it requires knowledge of SystemVerilog and UVM libraries. For the creation of register models, many commercial tools support a register model generation from register specification described in IP-XACT, but it is time-consuming to describe register specification in IP-XACT format. For easy creation of register model, we propose spreadsheet-based register template which is translated to IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved integrating test-bench and generating test-cases, so that designers may use register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing test sequence for each type of register test-case. With the proposed flow, designers can save considerable amount of time to verify functionality of registers.
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
XML-Based SHINE Knowledge Base Interchange Language
NASA Technical Reports Server (NTRS)
James, Mark; Mackey, Ryan; Tikidjian, Raffi
2008-01-01
The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.